From shake.chen at gmail.com Sat Feb 1 19:35:06 2014 From: shake.chen at gmail.com (Shake Chen) Date: Sun, 2 Feb 2014 03:35:06 +0800 Subject: [Rdo-list] lack of iptables rules for neutron glance cinder Message-ID: Hi, Now I am trying to deploy Havana OVS+GRE on CentOS 6.5, with neutron, glance and cinder each on its own server. # rpm -qa | grep openstack-packstack openstack-packstack-2013.2.1-0.29.dev956.el6.noarch Horizon is on the control node. 172.18.1.12 is the control node IP. After finishing the install, I cannot log in to the Dashboard. 1: log in to the Neutron node and add an iptables rule so the control node can access Neutron: -A INPUT -s 172.18.1.12/32 -p tcp -m multiport --dports 9696,67,68 -m comment --comment "001 neutron incoming 172.18.1.12" -j ACCEPT Restart httpd and the login succeeds. 2: after logging in to the Dashboard, I found that glance and cinder have the same problem, so I did the same thing: ssh to the glance node and the cinder node, add the iptables rules, restart iptables, and then it works. -- Shake Chen -------------- next part -------------- An HTML attachment was scrubbed... URL: From tshefi at redhat.com Sun Feb 2 15:43:49 2014 From: tshefi at redhat.com (Tzach Shefi) Date: Sun, 2 Feb 2014 10:43:49 -0500 (EST) Subject: [Rdo-list] Glance does not save properties of images In-Reply-To: <24471429.2142.1391097660295.JavaMail.Daniel@Daniel-PC> References: <13085627.1386.1390837182915.JavaMail.Daniel@Daniel-PC> <20140127161555.GA7651@redhat.com> <31213435.1409.1390839949745.JavaMail.Daniel@Daniel-PC> <20140128083727.GA13030@redhat.com> <8751501.1837.1391005828950.JavaMail.Daniel@Daniel-PC> <699972937.6658373.1391084578489.JavaMail.root@redhat.com> <24471429.2142.1391097660295.JavaMail.Daniel@Daniel-PC> Message-ID: <504179335.7393210.1391355829082.JavaMail.root@redhat.com> Unfortunately I don't have other ideas; I'll try a setup with Rabbit just to make sure this isn't the cause. A long shot, but I'll ask anyway: the user that set the is_public param has admin permissions, right?
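Going back to the iptables report above: the same pattern extends to the other services Shake mentions. The rule lines below for the glance and cinder nodes mirror the quoted neutron rule; the port numbers are the standard defaults (9292 glance-api, 9191 glance-registry, 8776 cinder-api, 3260 iSCSI) and are an assumption to verify against the actual deployment:

```
-A INPUT -s 172.18.1.12/32 -p tcp -m multiport --dports 9292,9191 -m comment --comment "001 glance incoming 172.18.1.12" -j ACCEPT
-A INPUT -s 172.18.1.12/32 -p tcp -m multiport --dports 8776,3260 -m comment --comment "001 cinder incoming 172.18.1.12" -j ACCEPT
```

After adding these to /etc/sysconfig/iptables on each node, `service iptables restart` reloads the rules, matching the "restart iptables" step described above.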
http://docs.openstack.org/developer/glance/glanceapi.html "Use of the is_public parameter is restricted to admin users. For all other users it will be ignored." Regards, Tzach ----- Original Message ----- From: "Daniel Speichert" To: "Tzach Shefi" Cc: "Flavio Percoco" , rdo-list at redhat.com Sent: Thursday, January 30, 2014 6:01:26 PM Subject: Re: [Rdo-list] Glance does not save properties of images Yes, Ceph is working well as the backend (images are uploaded there successfully and retrieved as well). The same problem also occurs when the image is kept locally (default_store = file). We have checked multiple images and the problem is the same. Interestingly, we had the same setup before (we reinstalled last week) and it was working fine. That makes me think that something might be different in the package. We use rabbit and it seems to be working fine - ceilometer is getting these notifications. Do you have any other ideas? I'm running the latest package version from RDO. Thanks, Daniel Speichert ----- Original Message ----- > From: "Tzach Shefi" > To: "Daniel Speichert" > Cc: "Flavio Percoco" , rdo-list at redhat.com > Sent: Thursday, January 30, 2014 7:22:58 AM > Subject: Re: [Rdo-list] Glance does not save properties of images > > Hello Daniel, > > In your posted glance-api.conf I noticed default_store = rbd, assuming the > Glance back end is Ceph (correct?). > > I just installed AIO RDO Havana on RHEL 6.5. > is_public updated successfully via Horizon\CLI; the image shows up as > public in Horizon\image-show. > > > I'm assuming (the obvious..) that you checked this on more than just > that one image, which eliminates a single bad image as the source of this > problem. > BTW, what type\source of image was used\checked? > > Another difference in our setups: mine uses notifier_strategy = QPID, yours uses > rabbit.
> > Regards, > Tzach > > ----- Original Message ----- > From: "Daniel Speichert" > To: "Flavio Percoco" > Cc: rdo-list at redhat.com > Sent: Wednesday, January 29, 2014 4:30:56 PM > Subject: [Rdo-list] Glance does not save properties of images > > ----- Original Message ----- > > From: "Flavio Percoco" > > To: "Daniel Speichert" > > Cc: "Lars Kellogg-Stedman" , rdo-list at redhat.com > > Sent: Tuesday, January 28, 2014 3:37:27 AM > > Subject: Re: [Rdo-list] Glance does not save properties of images > > > > > > Could you please share the commands you're using? > > > > - How are you adding the properties? > > - How are you reading the properties? > > > > You're sharing your registry.log configuration, which means you've > > configured the registry. Could you share how you did that? > > > > Thanks, > > flaper > > > > These are required properties when adding any Glance image > (disk_format and container_format). I was trying to set is_public > through 'glance image-edit' and Horizon. The change seems to be > accepted (no error) but on subsequent image-show it's just not > there.
> > Registry and API are set up almost at their defaults: > glance-registry.conf: http://pastebin.com/M7Ybjnf9 > glance-api.conf: http://pastebin.com/5sPXr3mG > > Regards, > Daniel Speichert > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > From daniel at speichert.pl Mon Feb 3 04:59:01 2014 From: daniel at speichert.pl (Daniel Speichert) Date: Mon, 3 Feb 2014 05:59:01 +0100 (CET) Subject: [Rdo-list] Glance does not save properties of images In-Reply-To: <504179335.7393210.1391355829082.JavaMail.root@redhat.com> References: <13085627.1386.1390837182915.JavaMail.Daniel@Daniel-PC> <20140127161555.GA7651@redhat.com> <31213435.1409.1390839949745.JavaMail.Daniel@Daniel-PC> <20140128083727.GA13030@redhat.com> <8751501.1837.1391005828950.JavaMail.Daniel@Daniel-PC> <699972937.6658373.1391084578489.JavaMail.root@redhat.com> <24471429.2142.1391097660295.JavaMail.Daniel@Daniel-PC> <504179335.7393210.1391355829082.JavaMail.root@redhat.com> Message-ID: <13870245.2743.1391403509394.JavaMail.Daniel@Daniel-PC> Yes, I tested that with an admin user. The bigger problem than is_public is that the required disk_format and container_format are lost too. Regards, Daniel Speichert ----- Original Message ----- > From: "Tzach Shefi" > To: "Daniel Speichert" > Cc: "Flavio Percoco" , rdo-list at redhat.com > Sent: Sunday, February 2, 2014 10:43:49 AM > Subject: Re: [Rdo-list] Glance does not save properties of images > > Unfortunately don't have other ideas, I'll try a setup with Rabbit > just to make sure this isn't the cause. > > A long shoot but i'll ask any way the user that set is_public param, > has admin permissions right? > http://docs.openstack.org/developer/glance/glanceapi.html > > "Use of the is_public parameter is restricted to admin users. For all > other users it will be ignored."
> > Regards, > Tzach > > > ----- Original Message ----- > From: "Daniel Speichert" > To: "Tzach Shefi" > Cc: "Flavio Percoco" , rdo-list at redhat.com > Sent: Thursday, January 30, 2014 6:01:26 PM > Subject: Re: [Rdo-list] Glance does not save properties of images > > Yes, Ceph is working well as backend (images are uploaded there > successfully and retrieved as well). > The same problem occurs also when the image is kept locally > (default_store = file). > > We have checked multiple images and the problem is the same. > Interestingly, we had the same setup before (we reinstalled last > week) and it was working fine. That makes me think that something > might be different in the package. > > We use rabbit and it seems to be working fine - ceilometer is getting > these notifications. > > Do you have any other ideaas? I'm running the latest package version > form RDO. > > Thanks, > Daniel Speichert > > ----- Original Message ----- > > From: "Tzach Shefi" > > To: "Daniel Speichert" > > Cc: "Flavio Percoco" , rdo-list at redhat.com > > Sent: Thursday, January 30, 2014 7:22:58 AM > > Subject: Re: [Rdo-list] Glance does not save properties of images > > > > Hello Daniel, > > > > On your posted glance-api.con noticed default_store = rbd, assuming > > Glance back end is CEPH (correct?). > > > > Just installed AIO RDO Havana on RHEL 6.5. > > Is_public updated successfully via Horizon\CLI, image shows up as > > public on Horizon\image-show. > > > > > > I'm assuming (the obvious..) that you checked this on more than > > just > > that one image, eliminates a single bad image as source of this > > problem. > > BTW what type\source of image was used\checked? > > > > Another difference in setups mine notifier_strategy =QPID your's > > uses > > rabbit. 
> > > > Regards, > > Tzach > > > > ----- Original Message ----- > > From: "Daniel Speichert" > > To: "Flavio Percoco" > > Cc: rdo-list at redhat.com > > Sent: Wednesday, January 29, 2014 4:30:56 PM > > Subject: [Rdo-list] Glance does not save properties of images > > > > ----- Original Message ----- > > > From: "Flavio Percoco" > > > To: "Daniel Speichert" > > > Cc: "Lars Kellogg-Stedman" , rdo-list at redhat.com > > > Sent: Tuesday, January 28, 2014 3:37:27 AM > > > Subject: Re: [Rdo-list] Glance does not save properties of images > > > > > > > > > Could you please share the commands your using? > > > > > > - How are you adding the properties? > > > - How are you reading the propoerties? > > > > > > You're sharing your registry.log configurations which means > > > you've > > > configured the registry. Could you share how you did that? > > > > > > Thanks, > > > flaper > > > > > > > These are required properties when adding any Glance image > > (disk_format and container_format). I was trying to set is_public > > through 'glance image-edit' and Horizon. The change seems to be > > accepted (no error) but on subsequent image-show it's just not > > there. 
> > Registry and API are set up almost by defaults: > > glance-registry.conf: http://pastebin.com/M7Ybjnf9 > > glance-api.conf: http://pastebin.com/5sPXr3mG > > > > Regards, > > Daniel Speichert > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > From flavio at redhat.com Mon Feb 3 07:46:17 2014 From: flavio at redhat.com (Flavio Percoco) Date: Mon, 3 Feb 2014 08:46:17 +0100 Subject: [Rdo-list] Glance does not save properties of images In-Reply-To: <13870245.2743.1391403509394.JavaMail.Daniel@Daniel-PC> References: <13085627.1386.1390837182915.JavaMail.Daniel@Daniel-PC> <20140127161555.GA7651@redhat.com> <31213435.1409.1390839949745.JavaMail.Daniel@Daniel-PC> <20140128083727.GA13030@redhat.com> <8751501.1837.1391005828950.JavaMail.Daniel@Daniel-PC> <699972937.6658373.1391084578489.JavaMail.root@redhat.com> <24471429.2142.1391097660295.JavaMail.Daniel@Daniel-PC> <504179335.7393210.1391355829082.JavaMail.root@redhat.com> <13870245.2743.1391403509394.JavaMail.Daniel@Daniel-PC> Message-ID: <20140203074617.GA23799@redhat.com> On 03/02/14 05:59 +0100, Daniel Speichert wrote: >Yes, I tested that with an admin user. The bigger problem that is_public is that required disk_format and container_format are lost too. This sounds like a bigger problem, though. What do you mean by `disk_format` and `container_format` being lost? For what command? Could you please share the exact command you're running? Notice that some of these parameters are available at creation time but not as part of updates. That said, I recall you said it used to work and stopped working after you updated / reinstalled your system. Thanks!
flaper > >Regards, >Daniel Speichert > >----- Original Message ----- >> From: "Tzach Shefi" >> To: "Daniel Speichert" >> Cc: "Flavio Percoco" , rdo-list at redhat.com >> Sent: Sunday, February 2, 2014 10:43:49 AM >> Subject: Re: [Rdo-list] Glance does not save properties of images >> >> Unfortunately don't have other ideas, I'll try a setup with Rabbit >> just to make sure this isn't the cause. >> >> A long shoot but i'll ask any way the user that set is_public param, >> has admin permissions right? >> http://docs.openstack.org/developer/glance/glanceapi.html >> >> "Use of the is_public parameter is restricted to admin users. For all >> other users it will be ignored." >> >> Regards, >> Tzach >> >> >> ----- Original Message ----- >> From: "Daniel Speichert" >> To: "Tzach Shefi" >> Cc: "Flavio Percoco" , rdo-list at redhat.com >> Sent: Thursday, January 30, 2014 6:01:26 PM >> Subject: Re: [Rdo-list] Glance does not save properties of images >> >> Yes, Ceph is working well as backend (images are uploaded there >> successfully and retrieved as well). >> The same problem occurs also when the image is kept locally >> (default_store = file). >> >> We have checked multiple images and the problem is the same. >> Interestingly, we had the same setup before (we reinstalled last >> week) and it was working fine. That makes me think that something >> might be different in the package. >> >> We use rabbit and it seems to be working fine - ceilometer is getting >> these notifications. >> >> Do you have any other ideaas? I'm running the latest package version >> form RDO. 
>> >> Thanks, >> Daniel Speichert >> >> ----- Original Message ----- >> > From: "Tzach Shefi" >> > To: "Daniel Speichert" >> > Cc: "Flavio Percoco" , rdo-list at redhat.com >> > Sent: Thursday, January 30, 2014 7:22:58 AM >> > Subject: Re: [Rdo-list] Glance does not save properties of images >> > >> > Hello Daniel, >> > >> > On your posted glance-api.con noticed default_store = rbd, assuming >> > Glance back end is CEPH (correct?). >> > >> > Just installed AIO RDO Havana on RHEL 6.5. >> > Is_public updated successfully via Horizon\CLI, image shows up as >> > public on Horizon\image-show. >> > >> > >> > I'm assuming (the obvious..) that you checked this on more than >> > just >> > that one image, eliminates a single bad image as source of this >> > problem. >> > BTW what type\source of image was used\checked? >> > >> > Another difference in setups mine notifier_strategy =QPID your's >> > uses >> > rabbit. >> > >> > Regards, >> > Tzach >> > >> > ----- Original Message ----- >> > From: "Daniel Speichert" >> > To: "Flavio Percoco" >> > Cc: rdo-list at redhat.com >> > Sent: Wednesday, January 29, 2014 4:30:56 PM >> > Subject: [Rdo-list] Glance does not save properties of images >> > >> > ----- Original Message ----- >> > > From: "Flavio Percoco" >> > > To: "Daniel Speichert" >> > > Cc: "Lars Kellogg-Stedman" , rdo-list at redhat.com >> > > Sent: Tuesday, January 28, 2014 3:37:27 AM >> > > Subject: Re: [Rdo-list] Glance does not save properties of images >> > > >> > > >> > > Could you please share the commands your using? >> > > >> > > - How are you adding the properties? >> > > - How are you reading the propoerties? >> > > >> > > You're sharing your registry.log configurations which means >> > > you've >> > > configured the registry. Could you share how you did that? >> > > >> > > Thanks, >> > > flaper >> > > >> > >> > These are required properties when adding any Glance image >> > (disk_format and container_format). 
I was trying to set is_public >> > through 'glance image-edit' and Horizon. The change seems to be >> > accepted (no error) but on subsequent image-show it's just not >> > there. >> > >> > Registry and API are set up almost by defaults: >> > glance-registry.conf: http://pastebin.com/M7Ybjnf9 >> > glance-api.conf: http://pastebin.com/5sPXr3mG >> > >> > Regards, >> > Daniel Speichert >> > >> > _______________________________________________ >> > Rdo-list mailing list >> > Rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> -- @flaper87 Flavio Percoco -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From daniel at speichert.pl Mon Feb 3 15:11:45 2014 From: daniel at speichert.pl (Daniel Speichert) Date: Mon, 3 Feb 2014 16:11:45 +0100 (CET) Subject: [Rdo-list] Glance does not save properties of images In-Reply-To: <20140203074617.GA23799@redhat.com> References: <13085627.1386.1390837182915.JavaMail.Daniel@Daniel-PC> <20140128083727.GA13030@redhat.com> <8751501.1837.1391005828950.JavaMail.Daniel@Daniel-PC> <699972937.6658373.1391084578489.JavaMail.root@redhat.com> <24471429.2142.1391097660295.JavaMail.Daniel@Daniel-PC> <504179335.7393210.1391355829082.JavaMail.root@redhat.com> <13870245.2743.1391403509394.JavaMail.Daniel@Daniel-PC> <20140203074617.GA23799@redhat.com> Message-ID: <29276922.2867.1391440280509.JavaMail.Daniel@Daniel-PC> The exact command here is not important because we primarily discovered this issue through Horizon. disk_format and (I think) container_format MUST be given when an image is created; otherwise the Glance API server rejects the request. I tried adding an image without disk_format and it was rejected. When I add an image with disk_format specified, the image is correctly downloaded and saved, but the disk_format property is not saved to the database. Same with container_format and is_public.
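For concreteness, the failing round trip described here can also be exercised from the CLI. The image name, source file, and formats below are placeholders, and the flags are the usual Havana-era python-glanceclient v1 ones, not the exact commands from this report:

```console
# create an image with the required properties set explicitly
$ glance image-create --name test-img --disk-format qcow2 \
    --container-format bare --is-public True --file disk.img
# then check whether the properties actually persisted
$ glance image-show test-img
```

In the setup described in this thread, the second command would show only the name populated, with disk_format, container_format, and is_public missing.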
The only property that is saved is the name of the image. Therefore I think the problem is not related to any sort of client (command) but to the server itself; no client is able to omit the disk_format property. I ran Glance Registry in sqlalchemy debug mode and pasted the SQL queries before - they lack values for these properties. We started to experience this problem 2 weeks ago after we reinstalled everything again according to an Ansible recipe (so in fact all the configuration is the same). If there was a package update in that time, it seems like the only reason for anything to change. Regards, Daniel Speichert ----- Original Message ----- > From: "Flavio Percoco" > To: "Daniel Speichert" > Cc: "Tzach Shefi" , rdo-list at redhat.com > Sent: Monday, February 3, 2014 2:46:17 AM > Subject: Re: [Rdo-list] Glance does not save properties of images > > On 03/02/14 05:59 +0100, Daniel Speichert wrote: > >Yes, I tested that with an admin user. The bigger problem that > >is_public is that required disk_format and container_format are > >lost too. > > This sounds like a bigger problem, though. > > What do you mean with `disk_format` and `container_format` are lost? > for what command? Could you please share the exact command you're > running? > > Notice that some of these parameters are available at creation time > but not as part of updates. That said, I recall you said it used to > work and stopped working after you updated / reinstalled your system. > > Thanks! > flaper > > > > >Regards, > >Daniel Speichert > >----- Original Message ----- > >> From: "Tzach Shefi" > >> To: "Daniel Speichert" > >> Cc: "Flavio Percoco" , rdo-list at redhat.com > >> Sent: Sunday, February 2, 2014 10:43:49 AM > >> Subject: Re: [Rdo-list] Glance does not save properties of images > >> > >> Unfortunately don't have other ideas, I'll try a setup with Rabbit > >> just to make sure this isn't the cause.
> >> > >> A long shoot but i'll ask any way the user that set is_public > >> param, > >> has admin permissions right? > >> http://docs.openstack.org/developer/glance/glanceapi.html > >> > >> "Use of the is_public parameter is restricted to admin users. For > >> all > >> other users it will be ignored." > >> > >> Regards, > >> Tzach > >> > >> > >> ----- Original Message ----- > >> From: "Daniel Speichert" > >> To: "Tzach Shefi" > >> Cc: "Flavio Percoco" , rdo-list at redhat.com > >> Sent: Thursday, January 30, 2014 6:01:26 PM > >> Subject: Re: [Rdo-list] Glance does not save properties of images > >> > >> Yes, Ceph is working well as backend (images are uploaded there > >> successfully and retrieved as well). > >> The same problem occurs also when the image is kept locally > >> (default_store = file). > >> > >> We have checked multiple images and the problem is the same. > >> Interestingly, we had the same setup before (we reinstalled last > >> week) and it was working fine. That makes me think that something > >> might be different in the package. > >> > >> We use rabbit and it seems to be working fine - ceilometer is > >> getting > >> these notifications. > >> > >> Do you have any other ideaas? I'm running the latest package > >> version > >> form RDO. > >> > >> Thanks, > >> Daniel Speichert > >> > >> ----- Original Message ----- > >> > From: "Tzach Shefi" > >> > To: "Daniel Speichert" > >> > Cc: "Flavio Percoco" , rdo-list at redhat.com > >> > Sent: Thursday, January 30, 2014 7:22:58 AM > >> > Subject: Re: [Rdo-list] Glance does not save properties of > >> > images > >> > > >> > Hello Daniel, > >> > > >> > On your posted glance-api.con noticed default_store = rbd, > >> > assuming > >> > Glance back end is CEPH (correct?). > >> > > >> > Just installed AIO RDO Havana on RHEL 6.5. > >> > Is_public updated successfully via Horizon\CLI, image shows up > >> > as > >> > public on Horizon\image-show. > >> > > >> > > >> > I'm assuming (the obvious..) 
that you checked this on more than > >> > just > >> > that one image, eliminates a single bad image as source of this > >> > problem. > >> > BTW what type\source of image was used\checked? > >> > > >> > Another difference in setups mine notifier_strategy =QPID your's > >> > uses > >> > rabbit. > >> > > >> > Regards, > >> > Tzach > >> > > >> > ----- Original Message ----- > >> > From: "Daniel Speichert" > >> > To: "Flavio Percoco" > >> > Cc: rdo-list at redhat.com > >> > Sent: Wednesday, January 29, 2014 4:30:56 PM > >> > Subject: [Rdo-list] Glance does not save properties of images > >> > > >> > ----- Original Message ----- > >> > > From: "Flavio Percoco" > >> > > To: "Daniel Speichert" > >> > > Cc: "Lars Kellogg-Stedman" , > >> > > rdo-list at redhat.com > >> > > Sent: Tuesday, January 28, 2014 3:37:27 AM > >> > > Subject: Re: [Rdo-list] Glance does not save properties of > >> > > images > >> > > > >> > > > >> > > Could you please share the commands your using? > >> > > > >> > > - How are you adding the properties? > >> > > - How are you reading the propoerties? > >> > > > >> > > You're sharing your registry.log configurations which means > >> > > you've > >> > > configured the registry. Could you share how you did that? > >> > > > >> > > Thanks, > >> > > flaper > >> > > > >> > > >> > These are required properties when adding any Glance image > >> > (disk_format and container_format). I was trying to set > >> > is_public > >> > through 'glance image-edit' and Horizon. The change seems to be > >> > accepted (no error) but on subsequent image-show it's just not > >> > there. 
> >> > > >> > Registry and API are set up almost by defaults: > >> > glance-registry.conf: http://pastebin.com/M7Ybjnf9 > >> > glance-api.conf: http://pastebin.com/5sPXr3mG > >> > > >> > Regards, > >> > Daniel Speichert > >> > > >> > _______________________________________________ > >> > Rdo-list mailing list > >> > Rdo-list at redhat.com > >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> > > >> > > -- > @flaper87 > Flavio Percoco > From dkranz at redhat.com Mon Feb 3 15:50:27 2014 From: dkranz at redhat.com (David Kranz) Date: Mon, 03 Feb 2014 10:50:27 -0500 Subject: [Rdo-list] Creating rpms for Tempest Message-ID: <52EFBAC3.7070303@redhat.com> There has been a lot of interest in running tempest against real RDO clusters. Having an rpm would make that a lot easier. There are several issues peculiar to tempest that need to be resolved. 1. The way tempest configures itself and does test discovery depends heavily on the tests being run from a directory containing the tests. 2. Unlike OpenStack python client libraries, tempest changes from release to release in incompatible ways, so you need "havana tempest" to test a havana cluster and an "icehouse tempest" to test icehouse. The tempest group has little interest in changing either of these behaviors. Additionally, it would be desirable if a tempest user could install tempest rpms to test different RDO versions on the same machine. Here is a proposal for how this could work and what the user experience would be. Dealing with these tempest issues suggests that the tempest code should be copied to /var/lib/tempest/{4.0,5.0, etc.} and the user should configure a separate directory for each OpenStack cluster to be tested.
Each directory needs to contain: a .testr.conf; an etc directory containing the tempest.conf and logging.conf files; a symlink to the tempest test modules for the appropriate version; and a copy of the test run scripts that are in the tools directory of tempest. To help the user create such directories, there should be a global executable "configure-tempest-directory" that takes an optional version. If multiple versions are present in /var/lib/tempest and no version is specified then the user will be asked which version to configure. User experience: 1. Install tempest rpm: yum install tempest-4.0 2. Run configure-tempest-directory 3. Make changes to tempest.conf to match the cluster being tested (and possibly logging.conf and .testr.conf as well) 4. Run tempest with desired test selection using tools/pretty_tox_serial.sh or tools/pretty_tox_serial Does anyone have any comments/suggestions about this? -David From lars at redhat.com Mon Feb 3 18:13:17 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Mon, 3 Feb 2014 13:13:17 -0500 Subject: [Rdo-list] RDO test day Feb. 4-5 - milestone II. In-Reply-To: <1141347357.7715461.1391450158545.JavaMail.root@redhat.com> References: <2084291764.13359013.1391406825300.JavaMail.root@redhat.com> <52EF53F2.7010702@redhat.com> <1434322697.13394821.1391416962819.JavaMail.root@redhat.com> <1295984616.7532940.1391429205592.JavaMail.root@redhat.com> <1141347357.7715461.1391450158545.JavaMail.root@redhat.com> Message-ID: <20140203181317.GA13634@redhat.com> On Mon, Feb 03, 2014 at 12:55:58PM -0500, Tzach Shefi wrote: > Installing on Fedora 20, semi dist', got the below error. > Packstack with answer file started working ok, then: > [...] > Cannot retrieve metalink for repository: fedora/20/x86_64. Please > verify its path and try again (I'm moving this to rdo-list.) This seems like an error contacting a Fedora mirror. There are occasional hiccups in the mirror network...maybe just try it again and see if it works?
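Returning to the Tempest rpm proposal above: the per-cluster directory layout David describes can be sketched as a short script. Since no such rpm exists yet, the /var/lib/tempest/<version> tree is faked under a temporary directory here so the sketch is self-contained; all paths and the "configure-tempest-directory" behavior are assumptions from the proposal, not an existing tool:

```shell
#!/bin/sh
# Sketch of what a "configure-tempest-directory" tool might do for one cluster.
set -e

BASE=$(mktemp -d)            # stands in for the proposed /var/lib/tempest
VER=4.0
# fake the versioned tempest install that the rpm would provide
mkdir -p "$BASE/$VER/tempest" "$BASE/$VER/tools"
touch "$BASE/$VER/.testr.conf" "$BASE/$VER/tools/pretty_tox_serial.sh"

RUN=$(mktemp -d)/cluster-a   # one directory per OpenStack cluster under test
mkdir -p "$RUN/etc"
cp "$BASE/$VER/.testr.conf" "$RUN/"        # testr configuration
: > "$RUN/etc/tempest.conf"                # edited to match the cluster
: > "$RUN/etc/logging.conf"
ln -s "$BASE/$VER/tempest" "$RUN/tempest"  # symlink to versioned test modules
cp -r "$BASE/$VER/tools" "$RUN/"           # copy of the test run scripts
ls -A "$RUN"                               # lists .testr.conf, etc, tempest, tools
```

Because the test modules are a symlink while the configuration is copied, several such directories can point at the same installed tempest version while targeting different clusters.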
If you encounter the error a second time, are you able to install packages manually using "yum install"? -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From miguelangel at ajo.es Mon Feb 3 16:20:30 2014 From: miguelangel at ajo.es (Miguel Angel) Date: Mon, 3 Feb 2014 17:20:30 +0100 Subject: [Rdo-list] Creating rpms for Tempest In-Reply-To: <52EFBAC3.7070303@redhat.com> References: <52EFBAC3.7070303@redhat.com> Message-ID: I agree it's something interesting for the project, I've thought of the same a couple of times already. --- irc: ajo / mangelajo Miguel Angel Ajo Pelayo +34 636 52 25 69 skype: ajoajoajo 2014-02-03 David Kranz : > There has been a lot of interest in running tempest against real RDO > clusters. Having an rpm would make that a lot easier. There are several > issues peculiar to tempest that need to be resolved. > > 1. The way tempest configures itself and does test discovery depends > heavily on the tests being run from a directory containing the tests. > 2. Unlike OpenStack python client libraries, tempest changes from release > to release in incompatible ways, so you need "havana tempest" to test a > havana cluster and an "icehouse tempest" to test icehouse. > > The tempest group has little interest in changing either of these > behaviors. Additionally, it would be desirable if a tempest user could > install tempest rpms to test different RDO versions on the same machine. > Here is a proposal for how this could work and what the user experience > would be. > > Dealing with these tempest issues suggests that the the tempest code > should be copied to /var/lib/tempest/{4.0,5.0, etc.} and the user should > configure a separate directory for each OpenStack cluster to be tested. 
> Each directory needs to contain: > > .testr.conf > an etc directory containing the tempest.conf and logging.conf files > a symlink to the tempest test modules for the appropriate version > a copy of the test run scripts that are in the tools directory of tempest > > To help the user create such directories, there should be a global > executable "configure-tempest-directory" that takes an optional version. If > multiple versions are present in /var/lib/tempest and no version is > specified then the user will be asked which version to configure. > > User experience: > > 1. Install tempest rpm: yum install tempest-4.0 > 2. Run configure-tempest-directory > 3. Make changes to tempest.conf to match the cluster being tested (and > possibly logging.conf and .testr.conf as well) > 4. Run tempest with desired test selection using > tools/pretty_tox_serial.sh or tools/ pretty_tox_serial > > Does any one have any comments/suggestions about this? > > -David > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lhh at redhat.com Mon Feb 3 19:27:26 2014 From: lhh at redhat.com (Lon Hohberger) Date: Mon, 03 Feb 2014 14:27:26 -0500 Subject: [Rdo-list] Creating rpms for Tempest In-Reply-To: <52EFBAC3.7070303@redhat.com> References: <52EFBAC3.7070303@redhat.com> Message-ID: <52EFED9E.4090707@redhat.com> On 02/03/2014 10:50 AM, David Kranz wrote: > There has been a lot of interest in running tempest against real RDO > clusters. Having an rpm would make that a lot easier. There are several > issues peculiar to tempest that need to be resolved. > > 1. The way tempest configures itself and does test discovery depends > heavily on the tests being run from a directory containing the tests. > 2. 
Unlike OpenStack python client libraries, tempest changes from > release to release in incompatible ways, so you need "havana tempest" to > test a havana cluster and an "icehouse tempest" to test icehouse. > > The tempest group has little interest in changing either of these > behaviors. Additionally, it would be desirable if a tempest user could > install tempest rpms to test different RDO versions on the same machine. > Here is a proposal for how this could work and what the user experience > would be. > > Dealing with these tempest issues suggests that the the tempest code > should be copied to /var/lib/tempest/{4.0,5.0, etc.} and the user should > configure a separate directory for each OpenStack cluster to be tested. > Each directory needs to contain: > > .testr.conf > an etc directory containing the tempest.conf and logging.conf files > a symlink to the tempest test modules for the appropriate version > a copy of the test run scripts that are in the tools directory of tempest > > To help the user create such directories, there should be a global > executable "configure-tempest-directory" that takes an optional version. > If multiple versions are present in /var/lib/tempest and no version is > specified then the user will be asked which version to configure. Would it make sense to encode the release in the package name itself? This way, you could, say, install tempest for grizzly and update tempest for havana - e.g.: yum install tempest-grizzly yum update -y tempest-havana ... and the RPMs would not obsolete each other. If we use RPM versioning, "yum update" will blow away testing environments of "older" tempest versions. We could perhaps do it using sub-rpms - which would allow us to add and remove tempest suites as we move along: * ship no files in 'tempest' itself except perhaps the license and the environment stand-up bits * use subpackages for tempest-grizzly/tempest-havana/tempest-icehouse/...
* get a special exception to *not* make tempest-* child RPMs require a fully-versioned tempest base package all the time just in case you wanted to have older tempest for a given release e.g. tempest-2.0.0 tempest-grizzly-1.0.0 tempest-havana-2.0.0 ... could all be installed, and if done right, you could 'yum update -y tempest-grizzly' to 3.0.0 without breaking the tempest-havana-2.0.0 package. Just a thought. > > User experience: > > 1. Install tempest rpm: yum install tempest-4.0 The above would mean: yum install -y tempest-havana yum install -y tempest-grizzly etc. > 2. Run configure-tempest-directory > 3. Make changes to tempest.conf to match the cluster being tested (and > possibly logging.conf and .testr.conf as well) > 4. Run tempest with desired test selection using > tools/pretty_tox_serial.sh or tools/ pretty_tox_serial > > Does any one have any comments/suggestions about this? > -- Lon From mriedem at linux.vnet.ibm.com Mon Feb 3 19:39:26 2014 From: mriedem at linux.vnet.ibm.com (Matt Riedemann) Date: Mon, 03 Feb 2014 13:39:26 -0600 Subject: [Rdo-list] python-backports is arch-specific? Message-ID: <52EFF06E.7040209@linux.vnet.ibm.com> Hey, I'm mainly asking for RHEL 6.5 but also using Fedora 19 and when updating to the latest python-backports-ssl_match_hostname it requires python-backports to avoid a file conflict. However, I noticed that while python-backports-ssl_match_hostname is noarch, python-backports is arch-specific. Looking at the source for backports, it's not really clear to me why that is - can someone explain if it's intentional or a packaging oversight? -- Thanks, Matt Riedemann From kchamart at redhat.com Mon Feb 3 20:30:29 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 4 Feb 2014 02:00:29 +0530 Subject: [Rdo-list] python-backports is arch-specific? 
In-Reply-To: <20140203202410.GA3815@tesla.redhat.com> References: <52EFF06E.7040209@linux.vnet.ibm.com> <20140203202410.GA3815@tesla.redhat.com> Message-ID: <20140203203029.GB3815@tesla.redhat.com> [Adding the list. /me inadvertantly dropped it, sorry.] On Tue, Feb 04, 2014 at 01:54:10AM +0530, Kashyap Chamarthy wrote: > CC'ing Ian Weller, from Fedora packagedb, he appears to be the owner of > it: > > $ pkgdb-cli acl python-backports > Fedora Package Database -- python-backports > Namespace for backported Python features > 0 bugs open (new, assigned, needinfo) > devel Owner: ianweller > [. . .] > > -- > /kashyap > > On Mon, Feb 03, 2014 at 01:39:26PM -0600, Matt Riedemann wrote: > > Hey, I'm mainly asking for RHEL 6.5 but also using Fedora 19 and > > when updating to the latest python-backports-ssl_match_hostname it > > requires python-backports to avoid a file conflict. > > > > However, I noticed that while python-backports-ssl_match_hostname is > > noarch, python-backports is arch-specific. Looking at the source > > for backports, it's not really clear to me why that is - can someone > > explain if it's intentional or a packaging oversight? > > > > -- > > > > Thanks, > > > > Matt Riedemann > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list From dkranz at redhat.com Mon Feb 3 20:55:00 2014 From: dkranz at redhat.com (David Kranz) Date: Mon, 03 Feb 2014 15:55:00 -0500 Subject: [Rdo-list] Creating rpms for Tempest In-Reply-To: <52EFED9E.4090707@redhat.com> References: <52EFBAC3.7070303@redhat.com> <52EFED9E.4090707@redhat.com> Message-ID: <52F00224.2080503@redhat.com> On 02/03/2014 02:27 PM, Lon Hohberger wrote: > On 02/03/2014 10:50 AM, David Kranz wrote: >> There has been a lot of interest in running tempest against real RDO >> clusters. Having an rpm would make that a lot easier. There are several >> issues peculiar to tempest that need to be resolved. 
>> >> 1. The way tempest configures itself and does test discovery depends >> heavily on the tests being run from a directory containing the tests. >> 2. Unlike OpenStack python client libraries, tempest changes from >> release to release in incompatible ways, so you need "havana tempest" to >> test a havana cluster and an "icehouse tempest" to test icehouse. >> >> The tempest group has little interest in changing either of these >> behaviors. Additionally, it would be desirable if a tempest user could >> install tempest rpms to test different RDO versions on the same machine. >> Here is a proposal for how this could work and what the user experience >> would be. >> >> Dealing with these tempest issues suggests that the the tempest code >> should be copied to /var/lib/tempest/{4.0,5.0, etc.} and the user should >> configure a separate directory for each OpenStack cluster to be tested. >> Each directory needs to contain: >> >> .testr.conf >> an etc directory containing the tempest.conf and logging.conf files >> a symlink to the tempest test modules for the appropriate version >> a copy of the test run scripts that are in the tools directory of tempest >> >> To help the user create such directories, there should be a global >> executable "configure-tempest-directory" that takes an optional version. >> If multiple versions are present in /var/lib/tempest and no version is >> specified then the user will be asked which version to configure. > Would it make sense to make the package name itself to have it? > > This way, you could say install tempest for grizzly and update tempest > for havana - e.g.: > > yum install tempest-grizzly > yum update -y tempest-havana > > ... and the RPMs would not obsolete each other. If we use RPM > versioning, "yum update" will blow away testing environments of "older" > tempest versions. 
> > We could perhaps do it using sub-rpms - which would allow us to add and > remove tempest suites as we move along: > > * ship no files except perhaps license in 'tempest' itself > and stand-up environment bits > * use subpackages for tempest-grizzly/tempest-havana/tempest-icehouse/... > * get a special exception to *not* make tempest-* child > RPMs require a fully-versioned tempest base package all > the time just in case you wanted to have older tempest > for a given release > e.g. tempest-2.0.0 > tempest-grizzly-1.0.0 > tempest-havana-2.0.0 > > ... could all be installed, and if done right, you could > 'yum update -y tempest-grizzly' to 3.0.0 without breaking > the tempest-havana-2.0.0 package. > > Just a thought. > >> User experience: >> >> 1. Install tempest rpm: yum install tempest-4.0 > The above would mean: > > yum install -y tempest-havana > yum install -y tempest-grizzly > > etc. > > >> 2. Run configure-tempest-directory >> 3. Make changes to tempest.conf to match the cluster being tested (and >> possibly logging.conf and .testr.conf as well) >> 4. Run tempest with desired test selection using >> tools/pretty_tox_serial.sh or tools/ pretty_tox_serial >> >> Does any one have any comments/suggestions about this? >> > -- Lon > Lon, I was not clear enough but I was proposing something very similar to your suggestion, just without subpackages which I don't understand well. Now that I understand the rules a little better and that the base package names should not have things that look like version numbers, I propose: package-names: tempest-havana.{version stuff}, tempest-icehouse.{version stuff} And a tempest-base package on which they all depend that creates "/var/lib/tempest" and the configure-tempest-directory script. 
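[Editorial note: a rough sketch of what the proposed configure-tempest-directory helper could do. Everything below is illustrative, not a real tool: the /var/lib/tempest/<release> layout, file names, and the single-version fallback are assumptions taken from this thread, and the script fakes an installed release tree so it can run anywhere.]

```shell
#!/bin/sh
# Sketch of the proposed configure-tempest-directory helper.
# The /var/lib/tempest/<release> layout and file names are assumptions
# from this thread, not the actual packaging.
set -e

# Fake one installed release tree so the sketch is runnable anywhere;
# a real helper would just read /var/lib/tempest.
TEMPEST_BASE="$(mktemp -d)"
mkdir -p "$TEMPEST_BASE/havana/tempest" \
         "$TEMPEST_BASE/havana/tools" \
         "$TEMPEST_BASE/havana/etc"
touch "$TEMPEST_BASE/havana/etc/tempest.conf.sample" \
      "$TEMPEST_BASE/havana/tools/pretty_tox_serial.sh"

WORKDIR="${1:-tempest-run}"   # per-cluster directory owned by the user
VERSION="${2:-}"              # optional release argument

# With several releases installed and none requested, the helper would
# prompt the user; with exactly one installed it can simply take it.
if [ -z "$VERSION" ]; then
    VERSION="$(ls "$TEMPEST_BASE" | head -n 1)"
fi

mkdir -p "$WORKDIR/etc"
cp "$TEMPEST_BASE/$VERSION/etc/"* "$WORKDIR/etc/"      # tempest.conf/logging.conf templates
cp "$TEMPEST_BASE/$VERSION/tools/"* "$WORKDIR/"        # copies of the run scripts
ln -sfn "$TEMPEST_BASE/$VERSION/tempest" "$WORKDIR/tempest"  # symlink to the test modules
# Placeholder .testr.conf; the real one would come from the release tree.
printf '[DEFAULT]\ntest_path=./tempest\n' > "$WORKDIR/.testr.conf"

echo "configured $WORKDIR for tempest $VERSION"
```

Each cluster then gets its own directory holding only configuration, a
symlink, and the run scripts, so several clusters (and several installed
releases) could coexist on one machine.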
-David From dkranz at redhat.com Mon Feb 3 22:13:50 2014 From: dkranz at redhat.com (David Kranz) Date: Mon, 03 Feb 2014 17:13:50 -0500 Subject: [Rdo-list] Creating rpms for Tempest In-Reply-To: <52F00224.2080503@redhat.com> References: <52EFBAC3.7070303@redhat.com> <52EFED9E.4090707@redhat.com> <52F00224.2080503@redhat.com> Message-ID: <52F0149E.5070808@redhat.com> On 02/03/2014 03:55 PM, David Kranz wrote: > On 02/03/2014 02:27 PM, Lon Hohberger wrote: >> On 02/03/2014 10:50 AM, David Kranz wrote: >>> There has been a lot of interest in running tempest against real RDO >>> clusters. Having an rpm would make that a lot easier. There are several >>> issues peculiar to tempest that need to be resolved. >>> >>> 1. The way tempest configures itself and does test discovery depends >>> heavily on the tests being run from a directory containing the tests. >>> 2. Unlike OpenStack python client libraries, tempest changes from >>> release to release in incompatible ways, so you need "havana >>> tempest" to >>> test a havana cluster and an "icehouse tempest" to test icehouse. >>> >>> The tempest group has little interest in changing either of these >>> behaviors. Additionally, it would be desirable if a tempest user could >>> install tempest rpms to test different RDO versions on the same >>> machine. >>> Here is a proposal for how this could work and what the user experience >>> would be. >>> >>> Dealing with these tempest issues suggests that the the tempest code >>> should be copied to /var/lib/tempest/{4.0,5.0, etc.} and the user >>> should >>> configure a separate directory for each OpenStack cluster to be tested. 
>>> Each directory needs to contain: >>> >>> .testr.conf >>> an etc directory containing the tempest.conf and logging.conf files >>> a symlink to the tempest test modules for the appropriate version >>> a copy of the test run scripts that are in the tools directory of >>> tempest >>> >>> To help the user create such directories, there should be a global >>> executable "configure-tempest-directory" that takes an optional >>> version. >>> If multiple versions are present in /var/lib/tempest and no version is >>> specified then the user will be asked which version to configure. >> Would it make sense to make the package name itself to have it? >> >> This way, you could say install tempest for grizzly and update tempest >> for havana - e.g.: >> >> yum install tempest-grizzly >> yum update -y tempest-havana >> >> ... and the RPMs would not obsolete each other. If we use RPM >> versioning, "yum update" will blow away testing environments of "older" >> tempest versions. >> >> We could perhaps do it using sub-rpms - which would allow us to add and >> remove tempest suites as we move along: >> >> * ship no files except perhaps license in 'tempest' itself >> and stand-up environment bits >> * use subpackages for >> tempest-grizzly/tempest-havana/tempest-icehouse/... >> * get a special exception to *not* make tempest-* child >> RPMs require a fully-versioned tempest base package all >> the time just in case you wanted to have older tempest >> for a given release >> e.g. tempest-2.0.0 >> tempest-grizzly-1.0.0 >> tempest-havana-2.0.0 >> >> ... could all be installed, and if done right, you could >> 'yum update -y tempest-grizzly' to 3.0.0 without breaking >> the tempest-havana-2.0.0 package. >> >> Just a thought. >> >>> User experience: >>> >>> 1. Install tempest rpm: yum install tempest-4.0 >> The above would mean: >> >> yum install -y tempest-havana >> yum install -y tempest-grizzly >> >> etc. >> >> >>> 2. Run configure-tempest-directory >>> 3. 
Make changes to tempest.conf to match the cluster being tested (and >>> possibly logging.conf and .testr.conf as well) >>> 4. Run tempest with desired test selection using >>> tools/pretty_tox_serial.sh or tools/ pretty_tox_serial >>> >>> Does any one have any comments/suggestions about this? >>> >> -- Lon >> > Lon, I was not clear enough but I was proposing something very similar > to your suggestion, just without subpackages which I don't understand > well. Now that I understand the rules a little better and that the > base package names should not have things that look like version > numbers, I propose: > > package-names: tempest-havana.{version stuff}, > tempest-icehouse.{version stuff} > > And a tempest-base package on which they all depend that creates > "/var/lib/tempest" and the configure-tempest-directory script. Perhaps this complexity is not even needed. We can just use /var/lib/tempest-{havana,icehouse} and configure-tempest-{havana,icehouse}-directory scripts. The rpms for havana and icehouse are just separate, end of story. > > -David From pbrady at redhat.com Mon Feb 3 23:22:38 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Mon, 03 Feb 2014 23:22:38 +0000 Subject: [Rdo-list] FYI: f20 icehouse install fails w/ openstack-dashboard install error In-Reply-To: <1391199790.2775.6.camel@localhost.localdomain> References: <1391199790.2775.6.camel@localhost.localdomain> Message-ID: <52F024BE.7040408@redhat.com> On 01/31/2014 08:23 PM, whayutin wrote: > https://bugzilla.redhat.com/show_bug.cgi?id=1060334 This is surprising since the correct version of troveclient should be available in the stable fedora repositories (since Jan 3rd 2014): https://admin.fedoraproject.org/updates/python-troveclient-1.0.3-1.fc20 Are you using an out of date mirror perhaps? From chris at tylers.info Tue Feb 4 01:09:38 2014 From: chris at tylers.info (Chris Tyler) Date: Mon, 3 Feb 2014 20:09:38 -0500 Subject: [Rdo-list] python-backports is arch-specific? 
In-Reply-To: <20140203203029.GB3815@tesla.redhat.com> References: <52EFF06E.7040209@linux.vnet.ibm.com> <20140203202410.GA3815@tesla.redhat.com> <20140203203029.GB3815@tesla.redhat.com> Message-ID: It's arch specific because of %{python_sitearch} which can be in the /usr/lib or /usr/lib64 trees. -Chris On Feb 3, 2014 3:32 PM, "Kashyap Chamarthy" wrote: > [Adding the list. /me inadvertantly dropped it, sorry.] > > On Tue, Feb 04, 2014 at 01:54:10AM +0530, Kashyap Chamarthy wrote: > > CC'ing Ian Weller, from Fedora packagedb, he appears to be the owner of > > it: > > > > $ pkgdb-cli acl python-backports > > Fedora Package Database -- python-backports > > Namespace for backported Python features > > 0 bugs open (new, assigned, needinfo) > > devel Owner: ianweller > > [. . .] > > > > -- > > /kashyap > > > > On Mon, Feb 03, 2014 at 01:39:26PM -0600, Matt Riedemann wrote: > > > Hey, I'm mainly asking for RHEL 6.5 but also using Fedora 19 and > > > when updating to the latest python-backports-ssl_match_hostname it > > > requires python-backports to avoid a file conflict. > > > > > > However, I noticed that while python-backports-ssl_match_hostname is > > > noarch, python-backports is arch-specific. Looking at the source > > > for backports, it's not really clear to me why that is - can someone > > > explain if it's intentional or a packaging oversight? > > > > > > -- > > > > > > Thanks, > > > > > > Matt Riedemann > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From whayutin at redhat.com Tue Feb 4 01:22:28 2014 From: whayutin at redhat.com (whayutin) Date: Mon, 03 Feb 2014 20:22:28 -0500 Subject: [Rdo-list] Creating rpms for Tempest In-Reply-To: <52F0149E.5070808@redhat.com> References: <52EFBAC3.7070303@redhat.com> <52EFED9E.4090707@redhat.com> <52F00224.2080503@redhat.com> <52F0149E.5070808@redhat.com> Message-ID: <1391476948.2578.1.camel@localhost.localdomain> On Mon, 2014-02-03 at 17:13 -0500, David Kranz wrote: > On 02/03/2014 03:55 PM, David Kranz wrote: > > On 02/03/2014 02:27 PM, Lon Hohberger wrote: > >> On 02/03/2014 10:50 AM, David Kranz wrote: > >>> There has been a lot of interest in running tempest against real RDO > >>> clusters. Having an rpm would make that a lot easier. There are several > >>> issues peculiar to tempest that need to be resolved. > >>> > >>> 1. The way tempest configures itself and does test discovery depends > >>> heavily on the tests being run from a directory containing the tests. > >>> 2. Unlike OpenStack python client libraries, tempest changes from > >>> release to release in incompatible ways, so you need "havana > >>> tempest" to > >>> test a havana cluster and an "icehouse tempest" to test icehouse. > >>> > >>> The tempest group has little interest in changing either of these > >>> behaviors. Additionally, it would be desirable if a tempest user could > >>> install tempest rpms to test different RDO versions on the same > >>> machine. > >>> Here is a proposal for how this could work and what the user experience > >>> would be. > >>> > >>> Dealing with these tempest issues suggests that the the tempest code > >>> should be copied to /var/lib/tempest/{4.0,5.0, etc.} and the user > >>> should > >>> configure a separate directory for each OpenStack cluster to be tested. 
> >>> Each directory needs to contain: > >>> > >>> .testr.conf > >>> an etc directory containing the tempest.conf and logging.conf files > >>> a symlink to the tempest test modules for the appropriate version > >>> a copy of the test run scripts that are in the tools directory of > >>> tempest > >>> > >>> To help the user create such directories, there should be a global > >>> executable "configure-tempest-directory" that takes an optional > >>> version. > >>> If multiple versions are present in /var/lib/tempest and no version is > >>> specified then the user will be asked which version to configure. > >> Would it make sense to make the package name itself to have it? > >> > >> This way, you could say install tempest for grizzly and update tempest > >> for havana - e.g.: > >> > >> yum install tempest-grizzly > >> yum update -y tempest-havana > >> > >> ... and the RPMs would not obsolete each other. If we use RPM > >> versioning, "yum update" will blow away testing environments of "older" > >> tempest versions. > >> > >> We could perhaps do it using sub-rpms - which would allow us to add and > >> remove tempest suites as we move along: > >> > >> * ship no files except perhaps license in 'tempest' itself > >> and stand-up environment bits > >> * use subpackages for > >> tempest-grizzly/tempest-havana/tempest-icehouse/... > >> * get a special exception to *not* make tempest-* child > >> RPMs require a fully-versioned tempest base package all > >> the time just in case you wanted to have older tempest > >> for a given release > >> e.g. tempest-2.0.0 > >> tempest-grizzly-1.0.0 > >> tempest-havana-2.0.0 > >> > >> ... could all be installed, and if done right, you could > >> 'yum update -y tempest-grizzly' to 3.0.0 without breaking > >> the tempest-havana-2.0.0 package. > >> > >> Just a thought. > >> > >>> User experience: > >>> > >>> 1. 
Install tempest rpm: yum install tempest-4.0 > >> The above would mean: > >> > >> yum install -y tempest-havana > >> yum install -y tempest-grizzly > >> > >> etc. > >> > >> > >>> 2. Run configure-tempest-directory > >>> 3. Make changes to tempest.conf to match the cluster being tested (and > >>> possibly logging.conf and .testr.conf as well) > >>> 4. Run tempest with desired test selection using > >>> tools/pretty_tox_serial.sh or tools/ pretty_tox_serial > >>> > >>> Does any one have any comments/suggestions about this? > >>> > >> -- Lon > >> > > Lon, I was not clear enough but I was proposing something very similar > > to your suggestion, just without subpackages which I don't understand > > well. Now that I understand the rules a little better and that the > > base package names should not have things that look like version > > numbers, I propose: > > > > package-names: tempest-havana.{version stuff}, > > tempest-icehouse.{version stuff} > > > > And a tempest-base package on which they all depend that creates > > "/var/lib/tempest" and the configure-tempest-directory script. > Perhaps this complexity is not even needed. We can just use > /var/lib/tempest-{havana,icehouse} and > configure-tempest-{havana,icehouse}-directory scripts. The rpms for > havana and icehouse are just separate, end of story. > > > > -David Agreed.. It seems it would be easier to configure and debug if each release was installed into its own directory. +1 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From kchamart at redhat.com Tue Feb 4 04:11:02 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 4 Feb 2014 09:41:02 +0530 Subject: [Rdo-list] RDO test day Feb. 4-5 - milestone II. 
In-Reply-To: <20140203181317.GA13634@redhat.com>
References: <2084291764.13359013.1391406825300.JavaMail.root@redhat.com> <52EF53F2.7010702@redhat.com> <1434322697.13394821.1391416962819.JavaMail.root@redhat.com> <1295984616.7532940.1391429205592.JavaMail.root@redhat.com> <1141347357.7715461.1391450158545.JavaMail.root@redhat.com> <20140203181317.GA13634@redhat.com>
Message-ID: <20140204041102.GA13001@tesla.redhat.com>

On Mon, Feb 03, 2014 at 01:13:17PM -0500, Lars Kellogg-Stedman wrote:
> On Mon, Feb 03, 2014 at 12:55:58PM -0500, Tzach Shefi wrote:
> > Installing on Fedora 20, semi dist' got below error.
> > Packstack with answer file started working ok than:
> > [...]
> > Cannot retrieve metalink for repository: fedora/20/x86_64. Please
> > verify its path and try again
>
> (I'm moving this to rdo-list.)
>
> This seems like an error contacting a Fedora mirror. There are
> occasional hiccups in the mirror network...maybe just try it again and
> see if it works?
>
> If you encounter the error a second time, are you able to install
> packages manually using "yum install"?

Couple more things:

- Maybe your yum metadata is stale? Try

    $ yum clean all

  and re-run your test.

- Try to explicitly add the mirrors entry

    66.135.62.201 mirrors.fedoraproject.org

  to /etc/hosts and re-run your test.

--
/kashyap

From kchamart at redhat.com  Tue Feb  4 08:28:19 2014
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Tue, 4 Feb 2014 13:58:19 +0530
Subject: [Rdo-list] Gentle reminder: test day in progress on #rdo, Freenode
Message-ID: <20140204082819.GB25654@tesla.pnq.redhat.com>

Heya,

RDO test day[1] is in progress on #rdo, Freenode. If you have some
cycles, please hop on to participate.

Gentle Reminder: Zodbot (an instance of Supybot) is running for the
test day's duration to capture the IRC channel logs, so please try to
be mindful of not posting any sensitive information (like passwords).

Thanks.
[1] http://openstack.redhat.com/RDO_test_day_Icehouse_milestone_2

--
/kashyap

From yrabl at redhat.com  Tue Feb  4 11:36:47 2014
From: yrabl at redhat.com (Yogev Rabl)
Date: Tue, 4 Feb 2014 06:36:47 -0500 (EST)
Subject: [Rdo-list] MariaDB installation fails
In-Reply-To: <1550486925.8876664.1391513554455.JavaMail.root@redhat.com>
Message-ID: <1624950116.8878001.1391513807158.JavaMail.root@redhat.com>

Hi,

I'm installing a semi-distributed topology of RDO on Fedora 20.
The topology is:
- server 1: Cloud Controller
- server 2: Cinder
- server 3: Glance
- server 4: Nova-compute

The packstack installation fails with the error:

2014-02-04 12:34:54::ERROR::run_setup::912::root:: Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py", line 907, in main
    _main(confFile)
  File "/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py", line 573, in _main
    runSequences()
  File "/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py", line 552, in runSequences
    controller.runAllSequences()
  File "/usr/lib/python2.7/site-packages/packstack/installer/setup_controller.py", line 84, in runAllSequences
    sequence.run(self.CONF)
  File "/usr/lib/python2.7/site-packages/packstack/installer/core/sequences.py", line 105, in run
    step.run(config=config)
  File "/usr/lib/python2.7/site-packages/packstack/installer/core/sequences.py", line 52, in run
    raise SequenceError(str(ex))
SequenceError: Error appeared during Puppet run: _mysql.pp
Error: Could not start Service[mysqld]: Execution of '/sbin/service mariadb start' returned 1: ESC[0m
You will find full trace in log /var/tmp/packstack/20140204-122810-RehBuC/manifests/_mysql.pp.log

The manifest log shows that some stages failed due to 'Skipping because
of failed dependencies'. Is there a known workaround?
thanks, Yogev From dron at redhat.com Tue Feb 4 11:54:16 2014 From: dron at redhat.com (Dafna Ron) Date: Tue, 04 Feb 2014 11:54:16 +0000 Subject: [Rdo-list] MariaDB installation fails In-Reply-To: <1624950116.8878001.1391513807158.JavaMail.root@redhat.com> References: <1624950116.8878001.1391513807158.JavaMail.root@redhat.com> Message-ID: <52F0D4E8.7050300@redhat.com> You had the same last night - can you please help Yogev? Thanks, Dafna On 02/04/2014 11:36 AM, Yogev Rabl wrote: > Hi, > > I'm installing a semi distributed topology of RDO on Fedora 20. > The topology is: > - server 1: Cloud Controller > - server 2: Cinder. > - server 3: Glance. > - server 4: Nova-compute. > > The packstack installation fails with the error: > 2014-02-04 12:34:54::ERROR::run_setup::912::root:: Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py", line 907, in main > _main(confFile) > File "/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py", line 573, in _main > runSequences() > File "/usr/lib/python2.7/site-packages/packstack/installer/run_setup.py", line 552, in runSequences > controller.runAllSequences() > File "/usr/lib/python2.7/site-packages/packstack/installer/setup_controller.py", line 84, in runAllSequences > sequence.run(self.CONF) > File "/usr/lib/python2.7/site-packages/packstack/installer/core/sequences.py", line 105, in run > step.run(config=config) > File "/usr/lib/python2.7/site-packages/packstack/installer/core/sequences.py", line 52, in run > raise SequenceError(str(ex)) > SequenceError: Error appeared during Puppet run: _mysql.pp > Error: Could not start Service[mysqld]: Execution of '/sbin/service mariadb start' returned 1: ESC[0m > You will find full trace in log /var/tmp/packstack/20140204-122810-RehBuC/manifests/_mysql.pp.log > > The the manifest log shows that some stages failed due to'Skipping because of failed dependencies'. > Is there a known workaround? 
> > thanks, > Yogev > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -- Dafna Ron From jeckersb at redhat.com Tue Feb 4 17:28:31 2014 From: jeckersb at redhat.com (John Eckersberg) Date: Tue, 04 Feb 2014 12:28:31 -0500 Subject: [Rdo-list] Concerning Rabbits Message-ID: <871tzihjnk.fsf@redhat.com> (In the spirit of "Concerning Hobbits") Ryan O'Hara and I have been investigating RabbitMQ as it pertains to RDO recently. There has been a lot of discussion on several disparate threads, so I wanted to try and capture it on the list for the benefit of everyone. Ryan has been working on getting RabbitMQ running in a multi-node HA configuration. I won't steal his thunder, and he can speak to it better than I can, so I'll defer to him on the details. As for me, I've been working on el7 support and bug squashing along the way. The first bug[1] causes the daemon to load incredibly slow, or outright fail by timing out. This is due to the SELinux policy disallowing name_bind on ports lower than 32768. RabbitMQ tries to name_bind to a port starting at 10000, and increments if it fails. So if you have SELinux in enforcing mode, you'll get 22768 AVC denials in the log before it finally starts. The second bug[2] causes the daemon to intermittently fail to start due to a race condition in the creation of the erlang cookie file. This happens only the first time the service starts. Really this is an Erlang bug, but there's a workaround for the RabbitMQ case. I've submitted patches for both issues. Until those get merged in, I've rebuilt[3] RabbitMQ for F20 which includes the fixes. Beyond bugs, I've also built out RabbitMQ and all the build/runtime dependencies for el7. I have a yum repo[4] on my fedorapeople page containing all the bits. This is all the stuff that is presently missing from EPEL7. 
In time, I would hope the maintainers build all this stuff, but for now
it'll work for testing. You will also need the EPEL 7 Beta
repository[5] enabled.

As a side note, I built everything using mock with a local override
repo on my workstation. I've not used copr before, but it seems
relevant to this sort of thing, so if it's any benefit I'll look to
rebuild the el7 stack there for easier consumption.

Hopefully this helps get the discussion into one place, and provides a
baseline for further investigation by everyone interested in RabbitMQ.

John.

---
[1] Is really two bugzillas, but the same bug:
[1a] https://bugzilla.redhat.com/show_bug.cgi?id=998682
[1b] https://bugzilla.redhat.com/show_bug.cgi?id=1032595
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1059913
[3] http://jeckersb.fedorapeople.org/rabbitmq-server-3.1.5-3.fc20.noarch.rpm
[4] http://jeckersb.fedorapeople.org/rabbitmq-el7/
[5] http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/

From rohara at redhat.com  Tue Feb  4 17:44:20 2014
From: rohara at redhat.com (Ryan O'Hara)
Date: Tue, 4 Feb 2014 11:44:20 -0600
Subject: [Rdo-list] Concerning Rabbits
In-Reply-To: <871tzihjnk.fsf@redhat.com>
References: <871tzihjnk.fsf@redhat.com>
Message-ID: <20140204174420.GB19248@redhat.com>

On Tue, Feb 04, 2014 at 12:28:31PM -0500, John Eckersberg wrote:
> (In the spirit of "Concerning Hobbits")

Thanks for kicking off this thread.

> Ryan O'Hara and I have been investigating RabbitMQ as it pertains to RDO
> recently. There has been a lot of discussion on several disparate
> threads, so I wanted to try and capture it on the list for the benefit
> of everyone.
>
> Ryan has been working on getting RabbitMQ running in a multi-node HA
> configuration. I won't steal his thunder, and he can speak to it better
> than I can, so I'll defer to him on the details.

Right now I have a 3-node RabbitMQ cluster with mirrored queues.
I also put haproxy in front of this cluster and pointed all relevant
OpenStack services at the virtual IP address. This seems to work well
so far. Detailed instructions coming soon.

> As for me, I've been working on el7 support and bug squashing along the
> way.
>
> The first bug[1] causes the daemon to load incredibly slow, or outright
> fail by timing out. This is due to the SELinux policy disallowing
> name_bind on ports lower than 32768. RabbitMQ tries to name_bind to a
> port starting at 10000, and increments if it fails. So if you have
> SELinux in enforcing mode, you'll get 22768 AVC denials in the log
> before it finally starts.
>
> The second bug[2] causes the daemon to intermittently fail to start due
> to a race condition in the creation of the erlang cookie file. This
> happens only the first time the service starts. Really this is an
> Erlang bug, but there's a workaround for the RabbitMQ case.
>
> I've submitted patches for both issues. Until those get merged in, I've
> rebuilt[3] RabbitMQ for F20 which includes the fixes.

Awesome.

> Beyond bugs, I've also built out RabbitMQ and all the build/runtime
> dependencies for el7. I have a yum repo[4] on my fedorapeople page
> containing all the bits. This is all the stuff that is presently
> missing from EPEL7. In time, I would hope the maintainers build all
> this stuff, but for now it'll work for testing. You will also need the
> EPEL 7 Beta repository[5] enabled.
>
> As a side note, I built everything using mock with a local override repo
> on my workstation. I've not used copr before but it seems relevant to
> this sort of thing, so if it's any benefit I'll look to rebuilt the el7
> stack there for easier consumption.
>
> Hopefully this helps get the discussion into one place, and provide a
> baseline for further investigation by everyone interested in RabbitMQ.

John and I will be putting all of this in a wiki page on the RDO
website in the very near future.
I'll send email to the list when it is ready to be reviewed.

Ryan

> John.
>
> ---
> [1] Is really two bugzillas, but the same bug:
> [1a] https://bugzilla.redhat.com/show_bug.cgi?id=998682
> [1b] https://bugzilla.redhat.com/show_bug.cgi?id=1032595
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1059913
> [3] http://jeckersb.fedorapeople.org/rabbitmq-server-3.1.5-3.fc20.noarch.rpm
> [4] http://jeckersb.fedorapeople.org/rabbitmq-el7/
> [5] http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list

From gfidente at redhat.com  Wed Feb  5 03:19:53 2014
From: gfidente at redhat.com (Giulio Fidente)
Date: Wed, 05 Feb 2014 04:19:53 +0100
Subject: [Rdo-list] MariaDB installation fails
In-Reply-To: <52F0D4E8.7050300@redhat.com>
References: <1624950116.8878001.1391513807158.JavaMail.root@redhat.com> <52F0D4E8.7050300@redhat.com>
Message-ID: <52F1ADD9.8020006@redhat.com>

On 02/04/2014 12:54 PM, Dafna Ron wrote:
> You had the same last night - can you please help Yogev?
>
> On 02/04/2014 11:36 AM, Yogev Rabl wrote:
>
>> Error: Could not start Service[mysqld]: Execution of '/sbin/service
>> mariadb start' returned 1: ESC[0m

this is a "known issue" but due to multiple bugs, #1061045 and #981116;
I think the workaround in the etherpad(1) was missing at least the
selinux step, needed if you have selinux in enforcing mode

the full sequence for me sums up to:

"""
# yum install -y mariadb-server
# chown mysql:mysql /var/log/mariadb/mariadb.log
# touch /var/log/mysqld.log
# chcon -u system_u -r object_r -t mysqld_log_t /var/log/mysqld.log
# chown mysql:mysql /var/log/mysqld.log
# rm /usr/lib/systemd/system/mysqld.service
# cp /usr/lib/systemd/system/mariadb.service /usr/lib/systemd/system/mysqld.service
"""

1.
https://etherpad.openstack.org/p/rdo_test_day_feb_2014

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: giulivo

From yrabl at redhat.com  Wed Feb  5 14:56:17 2014
From: yrabl at redhat.com (Yogev Rabl)
Date: Wed, 5 Feb 2014 09:56:17 -0500 (EST)
Subject: [Rdo-list] libvirtd default configuration
In-Reply-To: <549557633.9479813.1391612097313.JavaMail.root@redhat.com>
Message-ID: <1193935988.9480457.1391612177482.JavaMail.root@redhat.com>

Hi,

While trying to debug a problem we've noticed that libvirt doesn't
write logs. Please note this, and change the configuration in
libvirtd.conf.

thanks,
Yogev

From yrabl at redhat.com  Wed Feb  5 15:13:56 2014
From: yrabl at redhat.com (Yogev Rabl)
Date: Wed, 5 Feb 2014 10:13:56 -0500 (EST)
Subject: [Rdo-list] libvirtd default configuration
In-Reply-To: <1193935988.9480457.1391612177482.JavaMail.root@redhat.com>
References: <1193935988.9480457.1391612177482.JavaMail.root@redhat.com>
Message-ID: <935139722.9492285.1391613236519.JavaMail.root@redhat.com>

bug opened https://bugzilla.redhat.com/show_bug.cgi?id=1061753

----- Original Message -----
From: "Yogev Rabl"
To: rdo-list at redhat.com
Sent: Wednesday, February 5, 2014 4:56:17 PM
Subject: [Rdo-list] libvirtd default configuration

Hi,

While trying to debug a problem we've noticed that libvirt doesn't
write logs. Please note this, and change the configuration in
libvirtd.conf.
thanks, Yogev _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list From kchamart at redhat.com Wed Feb 5 17:45:47 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 5 Feb 2014 23:15:47 +0530 Subject: [Rdo-list] libvirtd default configuration In-Reply-To: <935139722.9492285.1391613236519.JavaMail.root@redhat.com> References: <1193935988.9480457.1391612177482.JavaMail.root@redhat.com> <935139722.9492285.1391613236519.JavaMail.root@redhat.com> Message-ID: <20140205174547.GA25704@tesla.pnq.redhat.com> On Wed, Feb 05, 2014 at 10:13:56AM -0500, Yogev Rabl wrote: > bug opened https://bugzilla.redhat.com/show_bug.cgi?id=1061753 This is not a bug. Even though the configuration looks empty, the *default* log_level is 3, i.e. warnings and errors; this is redirected to the systemd journal (on systems that run systemd, and to syslog on older systems). You can notice this when you invoke: $ systemctl status libvirtd libvirtd.service - Virtualization daemon Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled) Active: active (running) since Wed 2014-02-05 22:45:05 IST; 11s ago Main PID: 32521 (libvirtd) CGroup: /system.slice/libvirtd.service ├─ 821 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf └─32521 /usr/sbin/libvirtd Feb 05 22:45:05 foohost.com systemd[1]: Started Virtualization daemon. Feb 05 22:45:06 foohost.com dnsmasq[821]: read /etc/hosts - 2 addresses Feb 05 22:45:06 foohost.com dnsmasq[821]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses Feb 05 22:45:06 foohost.com dnsmasq-dhcp[821]: read /var/lib/libvirt/dnsmasq/default.hostsfile Furthermore, there are a couple of different ways to enable various log levels based on filters, etc. - If you want logs to be redirected to a file, that can be expressed in /etc/libvirt/libvirtd.conf.
To log _everything_ (this spews LOTS of details and fills up your disk), add these: log_level = 1 log_outputs = 1:file:/var/tmp/libvirtd.log to /etc/libvirt/libvirtd.conf, then restart libvirtd. - Alternatively, you can set environment variables (if set, these will take precedence over values specified in the configuration file): $ export LIBVIRT_DEBUG=1 $ export LIBVIRT_TRACE=1 More extensive details on logging filters are available here: http://libvirt.org/logging.html -- /kashyap From dron at redhat.com Wed Feb 5 18:23:53 2014 From: dron at redhat.com (Dafna Ron) Date: Wed, 05 Feb 2014 18:23:53 +0000 Subject: [Rdo-list] libvirtd default configuration In-Reply-To: <20140205174547.GA25704@tesla.pnq.redhat.com> References: <1193935988.9480457.1391612177482.JavaMail.root@redhat.com> <935139722.9492285.1391613236519.JavaMail.root@redhat.com> <20140205174547.GA25704@tesla.pnq.redhat.com> Message-ID: <52F281B9.8040405@redhat.com> This is a bug, since we install with packstack, which needs to configure this in the deployment, and up until now it was configured. The libvirtd log should be created and should be at debug level when installing with packstack. If this did not happen - it's a bug, and the user should not have to configure it manually. Dafna On 02/05/2014 05:45 PM, Kashyap Chamarthy wrote: > On Wed, Feb 05, 2014 at 10:13:56AM -0500, Yogev Rabl wrote: >> bug opened https://bugzilla.redhat.com/show_bug.cgi?id=1061753 > This is not a bug. > > Even though the configuration looks empty, the *default* log_level is 3, > i.e. warnings and errors, this is redirected to systemd journal (on > systems that are run systemd, and to syslog on older systems). You can > notice this when you invoke: > > $ systemctl status libvirtd > libvirtd.service - Virtualization daemon > Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled) > Active: active (running) since Wed 2014-02-05 22:45:05 IST; 11s ago > Main PID: 32521 (libvirtd) > CGroup: /system.slice/libvirtd.service > ??
821 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf > ??32521 /usr/sbin/libvirtd > > Feb 05 22:45:05 foohost.com systemd[1]: Started Virtualization daemon. > Feb 05 22:45:06 foohost.com dnsmasq[821]: read /etc/hosts - 2 addresses > Feb 05 22:45:06 foohost.com dnsmasq[821]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses > Feb 05 22:45:06 foohost.com dnsmasq-dhcp[821]: read /var/lib/libvirt/dnsmasq/default.hostsfile > > > Furthermore, there's a couple of different ways to enable various log > levels based on filters, etc. > > - If you want logs to be redirected to a file, that can be expressed in > /etc/libvirt/libvirtd.conf. > > To log _everything_ (this spews LOTS of details, fills up your disk), > add these > > log_level = 1 > log_outputs = 1:file:/var/tmp/libvirtd.log > > to /etc/libvirt/libvirtd.conf, restart libvirtd. > > - Alternatively, you can set environment variables (if set, this will > take precedence over values specified in the configuration file): > > $ export LIBVIRT_DEBUG=1 > $ export LIBVIRT_TRACE=1 > > More extensive details on logging filters are available here: > > http://libvirt.org/logging.html > > -- > /kashyap > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -- Dafna Ron From pmyers at redhat.com Wed Feb 5 18:28:07 2014 From: pmyers at redhat.com (Perry Myers) Date: Wed, 05 Feb 2014 13:28:07 -0500 Subject: [Rdo-list] libvirtd default configuration In-Reply-To: <52F281B9.8040405@redhat.com> References: <1193935988.9480457.1391612177482.JavaMail.root@redhat.com> <935139722.9492285.1391613236519.JavaMail.root@redhat.com> <20140205174547.GA25704@tesla.pnq.redhat.com> <52F281B9.8040405@redhat.com> Message-ID: <52F282B7.30302@redhat.com> On 02/05/2014 01:23 PM, Dafna Ron wrote: > This is a bug since we install with packstack which need to configure > this in the deployment and up until now was configured. 
> libvirtd log should be created and should be in debug level when > installing with packstack. > If this did not happen - it's a bug and user should not configure it > manually. libvirtd should never be configured in debug mode by default. Please see danpb's rationale here: https://bugzilla.redhat.com/show_bug.cgi?id=1061753#c3 If someone wanted to change packstack to expose a configuration option to set libvirtd to more verbose logging (but not debug mode) if that config param is set, that would be fine. But under no circumstances should it default to debug enabled. From dron at redhat.com Wed Feb 5 18:44:34 2014 From: dron at redhat.com (Dafna Ron) Date: Wed, 05 Feb 2014 18:44:34 +0000 Subject: [Rdo-list] libvirtd default configuration In-Reply-To: <52F282B7.30302@redhat.com> References: <1193935988.9480457.1391612177482.JavaMail.root@redhat.com> <935139722.9492285.1391613236519.JavaMail.root@redhat.com> <20140205174547.GA25704@tesla.pnq.redhat.com> <52F281B9.8040405@redhat.com> <52F282B7.30302@redhat.com> Message-ID: <52F28692.8030208@redhat.com> Perry, in current setups nothing is logged at all, i.e. there is no libvirt log until we configure the conf file to log something... And sorry, but I disagree with Dan... working on oVirt for a few years, I can tell you that the libvirt logs are in debug and are simply rotated and deleted if space runs out or after 30 days. These are simple debugging tools that admins need, and if someone decided that they don't need them - that's ok, but our premise should be that we want to catch events, since they cannot always be reproduced...
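(the rotation itself is plain logrotate; a rough sketch of the kind of stanza involved -- every value below is an illustrative assumption, not the exact config oVirt ships:)

```
# Hypothetical logrotate stanza for a debug-level libvirtd log;
# the sizes, counts and paths here are examples only.
/var/log/libvirt/libvirtd.log {
    size 15M            # rotate once the live log reaches ~15 MB
    rotate 100          # keep at most 100 rotated copies
    maxage 30           # delete rotated copies older than 30 days
    compress
    compresscmd /usr/bin/xz
    compressext .xz
    missingok
    notifempty
}
```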
this is from an ovirt host where log level is debug and logs are rotated: 2014-02-05 17:07:31.556+0000: 31964: debug : virConnectNumOfDomains:1855 : conn=0x7fe7140008c0 2014-02-05 17:07:35.775+0000: 31958: debug : virConnectCompareCPU:17082 : conn=0x7fe7140008c0, xmlDesc=HaswellIntel, flags=0 2014-02-05 17:07:35.786+0000: 31959: debug : virConnectCompareCPU:17082 : conn=0x7fe7140008c0, xmlDesc=athlonAMD, flags=0 2014-02-05 17:07:35.787+0000: 31966: debug : virConnectCompareCPU:17082 : conn=0x7fe7140008c0, xmlDesc=NehalemIntel, flags=0 2014-02-05 17:07:35.792+0000: 31963: debug : virConnectCompareCPU:17082 : conn=0x7fe7140008c0, xmlDesc=ConroeIntel, flags= root at XXX-XXXX ~]# ls -l /var/log/libvirt/ total 34384 -rw-------. 1 root root 14934506 Feb 5 19:07 libvirtd.log -rw-------. 1 root root 52868 Jan 1 10:40 libvirtd.log.100.xz -rw-------. 1 root root 150216 Jan 6 08:25 libvirtd.log.10.xz -rw-------. 1 root root 149048 Jan 6 06:55 libvirtd.log.11.xz -rw-------. 1 root root 148444 Jan 6 05:25 libvirtd.log.12.xz -rw-------. 1 root root 149484 Jan 6 03:55 libvirtd.log.13.xz -rw-------. 1 root root 150536 Jan 6 02:25 libvirtd.log.14.xz -rw-------. 1 root root 149948 Jan 6 00:55 libvirtd.log.15.xz -rw-------. 1 root root 148748 Jan 5 23:25 libvirtd.log.16.xz -rw-------. 1 root root 149548 Jan 5 21:55 libvirtd.log.17.xz -rw-------. 1 root root 149428 Jan 5 20:25 libvirtd.log.18.xz -rw-------. 1 root root 149284 Jan 5 18:55 libvirtd.log.19.xz -rw-------. 1 root root 601460 Jan 23 14:25 libvirtd.log.1.xz -rw-------. 1 root root 265132 Dec 15 03:48 libvirtd.log-20131215.xz -rw-------. 1 root root 148028 Jan 3 03:09 libvirtd.log-20140103.xz -rw-------. 1 root root 148916 Jan 5 17:25 libvirtd.log.20.xz -rw-------. 1 root root 149680 Jan 5 15:55 libvirtd.log.21.xz -rw-------. 1 root root 244180 Jan 5 14:25 libvirtd.log.22.xz -rw-------. 1 root root 164292 Jan 5 13:10 libvirtd.log.23.xz -rw-------. 1 root root 156920 Jan 5 11:55 libvirtd.log.24.xz -rw-------. 
1 root root 148544 Jan 5 10:40 libvirtd.log.25.xz -rw-------. 1 root root 149344 Jan 5 09:10 libvirtd.log.26.xz On 02/05/2014 06:28 PM, Perry Myers wrote: > On 02/05/2014 01:23 PM, Dafna Ron wrote: >> This is a bug since we install with packstack which need to configure >> this in the deployment and up until now was configured. >> libvirtd log should be created and should be in debug level when >> installing with packstack. >> If this did not happen - it's a bug and user should not configure it >> manually. > libvirtd should never be configured in debug mode by default. Please > see danpb's rationale here: > > https://bugzilla.redhat.com/show_bug.cgi?id=1061753#c3 > > If someone wanted to change packstack to expose a configuration option > to set libvirtd to more verbose logging (but not debug mode) if that > config param is set, that would be fine. But under no circumstances > should it be defaulted to debug enabled. From pmyers at redhat.com Wed Feb 5 18:56:50 2014 From: pmyers at redhat.com (Perry Myers) Date: Wed, 05 Feb 2014 13:56:50 -0500 Subject: [Rdo-list] libvirtd default configuration In-Reply-To: <52F28692.8030208@redhat.com> References: <1193935988.9480457.1391612177482.JavaMail.root@redhat.com> <935139722.9492285.1391613236519.JavaMail.root@redhat.com> <20140205174547.GA25704@tesla.pnq.redhat.com> <52F281B9.8040405@redhat.com> <52F282B7.30302@redhat.com> <52F28692.8030208@redhat.com> Message-ID: <52F28972.2010902@redhat.com> On 02/05/2014 01:44 PM, Dafna Ron wrote: > Perry, > > in current setups nothing is logged at all, i.e there is no libvirt log > until we configure the conf file to log something... See Kashyap's earlier email. By default libvirt logs to syslog with a default log level of 3, which logs warnings and errors. If you are seeing no warnings or errors, then that's a good thing and under normal circumstances should be sufficient. 
Increasing the verbosity to log Info (or Debug) should be something the system administrator CHOOSES to do, not something a tool like packstack forces you to do. See my comment on that bug. I fully support making it easy for packstack to increase the verbosity of libvirt logging, but agree with Dan and others that increasing it by default is making a decision for sysadmins when they should be making the decision themselves. Perry From dron at redhat.com Wed Feb 5 19:14:07 2014 From: dron at redhat.com (Dafna Ron) Date: Wed, 05 Feb 2014 19:14:07 +0000 Subject: [Rdo-list] libvirtd default configuration In-Reply-To: <52F28972.2010902@redhat.com> References: <1193935988.9480457.1391612177482.JavaMail.root@redhat.com> <935139722.9492285.1391613236519.JavaMail.root@redhat.com> <20140205174547.GA25704@tesla.pnq.redhat.com> <52F281B9.8040405@redhat.com> <52F282B7.30302@redhat.com> <52F28692.8030208@redhat.com> <52F28972.2010902@redhat.com> Message-ID: <52F28D7F.3080909@redhat.com> perhaps we have a misunderstanding... Yogev wrote in the bug: configuration doesn't write logs -> no libvirt log exists :) libvirtd.conf currently has every logging option commented out (#) - which means nothing is logged for libvirt; no debug, info, warn or error will be logged at all, because the logs for libvirt are disabled. we did have an error where a libvirt domain was killed and I wanted to look at the libvirt log to see what happened, and this is when we found no log exists... after we manually changed the libvirtd.conf file a libvirt log was created, and then we also changed the log level to debug so that we can debug something :) basically, we can debate the debug log level issue, but can we agree that a log should exist and log errors? On 02/05/2014 06:56 PM, Perry Myers wrote: > On 02/05/2014 01:44 PM, Dafna Ron wrote: >> Perry, >> >> in current setups nothing is logged at all, i.e there is no libvirt log >> until we configure the conf file to log something... > See Kashyap's earlier email.
By default libvirt logs to syslog with a > default log level of 3, which logs warnings and errors. > > If you are seeing no warnings or errors, then that's a good thing and > under normal circumstances should be sufficient. > > Increasing the verbosity to log Info (or Debug) should be something the > system administrator CHOOSES to do, not something a tool like packstack > forces you to do. > > See my comment on that bug. I fully support making it easy for > packstack to increase the verbosity of libvirt logging, but agree with > Dan and others that increasing it by default is make a decision for > sysadmins when they should be making the decision themselves. > > Perry -- Dafna Ron From kchamart at redhat.com Wed Feb 5 21:47:35 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 6 Feb 2014 03:17:35 +0530 Subject: [Rdo-list] libvirtd default configuration In-Reply-To: <52F28D7F.3080909@redhat.com> References: <1193935988.9480457.1391612177482.JavaMail.root@redhat.com> <935139722.9492285.1391613236519.JavaMail.root@redhat.com> <20140205174547.GA25704@tesla.pnq.redhat.com> <52F281B9.8040405@redhat.com> <52F282B7.30302@redhat.com> <52F28692.8030208@redhat.com> <52F28972.2010902@redhat.com> <52F28D7F.3080909@redhat.com> Message-ID: <20140205214735.GA13182@tesla.redhat.com> On Wed, Feb 05, 2014 at 07:14:07PM +0000, Dafna Ron wrote: > perhaps we have a misunderstanding... yogev wrote in the bug: > configuration doesn't write logs -> no libvirt log exists :) > libvirtd.conf currently has # for any logging - which means nothing > is logged for libvirt, no debug, info, warn or error will be logged > at all because the logs for libvirt are disabled. > we did have an error where a libvirt domain was killed and I wanted > to look at the libvirt log to see what happened and this is when we > found no log exists... 
> after we manually changed the libvirtd.conf file a libvirt log was > created and than we also changed the log level to debug so that we > can debug something :) Agreed, this can be a little frustrating during full crashes. I noted some more points in the bug; to keep the discussion in one place, please follow up there. > basically, we can debate the debug log level issue but can we agree > that a log should exist and log erros? Please note: - You can do "tail -f foo.log" style monitoring, by default, for the libvirtd service: $ journalctl -f --unit=libvirtd So you should at least see some errors and warnings there. - And some guest-specific logs go to /var/log/libvirt/qemu/<guest-name>.log > On 02/05/2014 06:56 PM, Perry Myers wrote: > >On 02/05/2014 01:44 PM, Dafna Ron wrote: > >>Perry, > >> > >>in current setups nothing is logged at all, i.e there is no libvirt log > >>until we configure the conf file to log something... > >See Kashyap's earlier email. By default libvirt logs to syslog with a > >default log level of 3, which logs warnings and errors. > > > >If you are seeing no warnings or errors, then that's a good thing and > >under normal circumstances should be sufficient. > > > >Increasing the verbosity to log Info (or Debug) should be something the > >system administrator CHOOSES to do, not something a tool like packstack > >forces you to do. > > > >See my comment on that bug. I fully support making it easy for > >packstack to increase the verbosity of libvirt logging, but agree with > >Dan and others that increasing it by default is make a decision for > >sysadmins when they should be making the decision themselves.
> > >Perry > > -- > Dafna Ron > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -- /kashyap From daniel at speichert.pl Thu Feb 6 02:04:52 2014 From: daniel at speichert.pl (Daniel Speichert) Date: Thu, 6 Feb 2014 03:04:52 +0100 (CET) Subject: [Rdo-list] Glance does not save properties of images In-Reply-To: <20140203074617.GA23799@redhat.com> References: <13085627.1386.1390837182915.JavaMail.Daniel@Daniel-PC> <20140128083727.GA13030@redhat.com> <8751501.1837.1391005828950.JavaMail.Daniel@Daniel-PC> <699972937.6658373.1391084578489.JavaMail.root@redhat.com> <24471429.2142.1391097660295.JavaMail.Daniel@Daniel-PC> <504179335.7393210.1391355829082.JavaMail.root@redhat.com> <13870245.2743.1391403509394.JavaMail.Daniel@Daniel-PC> <20140203074617.GA23799@redhat.com> Message-ID: <30959072.3532.1391652260070.JavaMail.Daniel@Daniel-PC> Hello everyone, Thanks for your help with that problem. I was able to resolve it by examining all the headers of requests and responses. It turns out that the issue is not entirely Glance's fault, and definitely not a problem with a package. The properties that I was missing were passed through headers with _ (underscore) in their names: x-image-meta-container_format: bare x-image-meta-disk_format: qcow2 x-image-meta-is_public: True These were the only ones that got lost. I also incorrectly assumed that Glance would not accept a request to create an image without the required parameters disk-format and container-format. It does accept it; this requirement is only enforced in Horizon. The CLI also doesn't care whether you pass them or not. The ultimate root of the problem was the fact that I use nginx in front of the controller as an SSL-terminating proxy. Nginx is "smart" enough to reject headers with underscores in their names as invalid HTTP headers. This way the headers mentioned above were stripped and the rest went through.
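In case it saves someone else the same hunt, the whole fix amounts to one directive on the proxy. A minimal sketch (the server name, certificate paths and upstream address below are placeholders, not my actual config):

```
# nginx SSL-terminating proxy in front of glance-api; only the
# underscores_in_headers directive is the actual fix -- with the
# default (off), nginx silently ignores any header whose name
# contains an underscore, e.g. x-image-meta-is_public.
server {
    listen 443 ssl;
    server_name glance.example.com;                  # placeholder
    ssl_certificate     /etc/nginx/ssl/glance.crt;   # placeholder
    ssl_certificate_key /etc/nginx/ssl/glance.key;   # placeholder

    underscores_in_headers on;    # accept header names containing '_'

    location / {
        proxy_pass http://127.0.0.1:9292;            # default glance-api port
        proxy_set_header Host $host;
    }
}
```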
Too bad that nginx is so secret about doing so. There is a simple fix in case anyone has the same problem: http://nginx.org/en/docs/http/ngx_http_core_module.html#underscores_in_headers I submitted a bug report for Glance on that: https://bugs.launchpad.net/glance/+bug/1276887 Sorry for all the trouble related to debugging the issue and thanks again! Best Regards, Daniel Speichert ----- Original Message ----- > From: "Flavio Percoco" > To: "Daniel Speichert" > Cc: "Tzach Shefi" , rdo-list at redhat.com > Sent: Monday, February 3, 2014 2:46:17 AM > Subject: Re: [Rdo-list] Glance does not save properties of images > > On 03/02/14 05:59 +0100, Daniel Speichert wrote: > >Yes, I tested that with an admin user. The bigger problem that > >is_public is that required disk_format and container_format are > >lost too. > > This sounds like a bigger problem, though. > > What do you mean with `disk_format` and `container_format` are lost? > for what command? Could you please share the exact command you're > running? > > Notice that some of these parameters are available at creation time > but not as part of updates. That said, I recall you said it used to > work and stopped working after you updated / reinstalled your system. > > Thanks! > flaper > > > > >Regards, > >Daniel Speichert > > > >----- Original Message ----- > >> From: "Tzach Shefi" > >> To: "Daniel Speichert" > >> Cc: "Flavio Percoco" , rdo-list at redhat.com > >> Sent: Sunday, February 2, 2014 10:43:49 AM > >> Subject: Re: [Rdo-list] Glance does not save properties of images > >> > >> Unfortunately don't have other ideas, I'll try a setup with Rabbit > >> just to make sure this isn't the cause. > >> > >> A long shoot but i'll ask any way the user that set is_public > >> param, > >> has admin permissions right? > >> http://docs.openstack.org/developer/glance/glanceapi.html > >> > >> "Use of the is_public parameter is restricted to admin users. For > >> all > >> other users it will be ignored." 
> >> > >> Regards, > >> Tzach > >> > >> > >> ----- Original Message ----- > >> From: "Daniel Speichert" > >> To: "Tzach Shefi" > >> Cc: "Flavio Percoco" , rdo-list at redhat.com > >> Sent: Thursday, January 30, 2014 6:01:26 PM > >> Subject: Re: [Rdo-list] Glance does not save properties of images > >> > >> Yes, Ceph is working well as backend (images are uploaded there > >> successfully and retrieved as well). > >> The same problem occurs also when the image is kept locally > >> (default_store = file). > >> > >> We have checked multiple images and the problem is the same. > >> Interestingly, we had the same setup before (we reinstalled last > >> week) and it was working fine. That makes me think that something > >> might be different in the package. > >> > >> We use rabbit and it seems to be working fine - ceilometer is > >> getting > >> these notifications. > >> > >> Do you have any other ideaas? I'm running the latest package > >> version > >> form RDO. > >> > >> Thanks, > >> Daniel Speichert > >> > >> ----- Original Message ----- > >> > From: "Tzach Shefi" > >> > To: "Daniel Speichert" > >> > Cc: "Flavio Percoco" , rdo-list at redhat.com > >> > Sent: Thursday, January 30, 2014 7:22:58 AM > >> > Subject: Re: [Rdo-list] Glance does not save properties of > >> > images > >> > > >> > Hello Daniel, > >> > > >> > On your posted glance-api.con noticed default_store = rbd, > >> > assuming > >> > Glance back end is CEPH (correct?). > >> > > >> > Just installed AIO RDO Havana on RHEL 6.5. > >> > Is_public updated successfully via Horizon\CLI, image shows up > >> > as > >> > public on Horizon\image-show. > >> > > >> > > >> > I'm assuming (the obvious..) that you checked this on more than > >> > just > >> > that one image, eliminates a single bad image as source of this > >> > problem. > >> > BTW what type\source of image was used\checked? > >> > > >> > Another difference in setups mine notifier_strategy =QPID your's > >> > uses > >> > rabbit. 
> >> > > >> > Regards, > >> > Tzach > >> > > >> > ----- Original Message ----- > >> > From: "Daniel Speichert" > >> > To: "Flavio Percoco" > >> > Cc: rdo-list at redhat.com > >> > Sent: Wednesday, January 29, 2014 4:30:56 PM > >> > Subject: [Rdo-list] Glance does not save properties of images > >> > > >> > ----- Original Message ----- > >> > > From: "Flavio Percoco" > >> > > To: "Daniel Speichert" > >> > > Cc: "Lars Kellogg-Stedman" , > >> > > rdo-list at redhat.com > >> > > Sent: Tuesday, January 28, 2014 3:37:27 AM > >> > > Subject: Re: [Rdo-list] Glance does not save properties of > >> > > images > >> > > > >> > > > >> > > Could you please share the commands your using? > >> > > > >> > > - How are you adding the properties? > >> > > - How are you reading the propoerties? > >> > > > >> > > You're sharing your registry.log configurations which means > >> > > you've > >> > > configured the registry. Could you share how you did that? > >> > > > >> > > Thanks, > >> > > flaper > >> > > > >> > > >> > These are required properties when adding any Glance image > >> > (disk_format and container_format). I was trying to set > >> > is_public > >> > through 'glance image-edit' and Horizon. The change seems to be > >> > accepted (no error) but on subsequent image-show it's just not > >> > there. 
> >> > > >> > Registry and API are set up almost by defaults: > >> > glance-registry.conf: http://pastebin.com/M7Ybjnf9 > >> > glance-api.conf: http://pastebin.com/5sPXr3mG > >> > > >> > Regards, > >> > Daniel Speichert > >> > > >> > _______________________________________________ > >> > Rdo-list mailing list > >> > Rdo-list at redhat.com > >> > https://www.redhat.com/mailman/listinfo/rdo-list > >> > > >> > > -- > @flaper87 > Flavio Percoco > From kchamart at redhat.com Thu Feb 6 07:01:39 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 6 Feb 2014 12:31:39 +0530 Subject: [Rdo-list] RDO test day (4,5-FEB-2014) -- IRC meeting minutes log Message-ID: <20140206070139.GA23049@tesla.pnq.redhat.com> ================================ #rdo: RDO test day; 4,5-FEB-2014 ================================ Thanks everyone for participating in the test days, here's the summary (& URL to full logs) of IRC meeting minutes over the last 2 days. Meeting started by kashyap at 07:57:42 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2014-02-04/rdo-test-day-5feb2014.2014-02-04-07.57.log.html . Meeting summary --------------- * RDO-test-day-5FEB2014 (kashyap, 07:58:11) * LINK: https://etherpad.openstack.org/p/rdo_test_day_feb_2014 (kashyap, 08:52:00) * LINK: http://adam.younglogic.com/2013/07/troubleshooting-pki-middleware/ (kashyap, 09:11:04) * To debug Neutron issues -- refer lines 7 to 13 here: https://etherpad.openstack.org/p/rdo_test_day_feb_2014 (kashyap, 09:16:43) * LINK: Workarounds page http://openstack.redhat.com/Workarounds_2014_02 (kashyap, 10:36:42) * LINK: http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-7/ ??? 
(afazekas, 10:47:04) * For all new workarounds, please add them here -- http://openstack.redhat.com/Workarounds_2014_02 (kashyap, 10:49:26) * LINK: http://pastebin.com/ydPPezCg (ohochman, 11:38:47) * LINK: http://openstack.redhat.com/TestedSetups#Advanced_Installs_.28Foreman_Based.29_--_Work_in_Progress (jayg, 14:39:59) * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1017210 (jayg, 14:40:06) * LINK: http://titanpad.com/k3fUnioHQN (lon, 14:56:58) * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1061045 (weshay, 15:05:58) * LINK: http://grokbase.com/p/gg/puppet-users/1387v5yek8/puppet-first-run-timing-out (jayg, 15:59:32) * LINK: http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-6/ I do not see new swift package for el6 (afazekas, 16:38:08) * LINK: http://docs.openstack.org/user-guide-admin/content/specify-host-to-boot-instances-on.html (anand, 12:14:39) * [Heads-up] There's a regression in libvirt F20 (_if_ you're using polkit ACLS. Shouldn't affect the test day) -- https://bugzilla.redhat.com/show_bug.cgi?id=1058839 (kashyap, 14:03:58) * LINK: http://pastebin.com/xUZJJG5a (ohochman, 15:08:17) Meeting ended at 05:33:13 UTC. Action Items ------------ Action Items, by person ----------------------- * **UNASSIGNED** * (none) People Present (lines said) --------------------------- * ohochman (110) * kashyap (94) * jayg (64) * apevec (42) * lon (40) * rook (34) * panda (33) * ayoung (29) * ukalifon (27) * afazekas (22) * yfried (21) * psedlak (16) * verdurin (15) * larsks (14) * mmagr (12) * Pursuit[LT] (10) * morazi (10) * jistr (10) * anand (7) * nlevinki (7) * beagles (7) * giulivo (6) * weshay (6) * kitp (5) * blinky_ghost (5) * nmagnezi (5) * mflobo (5) * pixelb (5) * yrabl (5) * DG_ (4) * mpavlase (4) * ranjan (4) * xqueralt (3) * zodbot (3) * defishguy (3) * tshefi (2) * oblaut (2) * ajeain (1) * eharney (1) Generated by `MeetBot`_ 0.1.4 .. 
_`MeetBot`: http://wiki.debian.org/MeetBot -- /kashyap From kchamart at redhat.com Thu Feb 6 07:33:18 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 6 Feb 2014 13:03:18 +0530 Subject: [Rdo-list] List of bugs logged during IceHouse-M2 RDO test days[4, 5 FEB 2014] Message-ID: <20140206073318.GB22264@tesla.pnq.redhat.com> Heya, Below are the bugs (18) posted during the test days (Feb 4,5) -- http://goo.gl/upTA2F. We ran this second test day less than a month after the previous one, and ran into a few blocker issues that consumed time in re-installations and dependency resolution. NOTES - Alan Pevec (upstream OpenStack stable branch maintainer) suggested skipping Milestone-2 in future releases, as there's too much flux upstream; instead he proposed that going from Milestone-1 straight to Milestone-3 (6MAR2014, per the current release schedule) is more palatable, as it'll be FeatureFreeze and StringFreeze. List of bugs in plain text, in case someone wants to comment in-line: ======================================================================= - 1061005 python-django-horizon https://bugzilla.redhat.com/show_bug.cgi?id=1061005 error page when clicking on the Orchestration -> Stacks link - 1061055 openstack-neutron https://bugzilla.redhat.com/show_bug.cgi?id=1061055 neutron-dhcp-agent dead - 1061137 openstack-keystone https://bugzilla.redhat.com/show_bug.cgi?id=1061137 Wrong exception raised when trying to create an existing tenant - 1061152 openstack-foreman-installer https://bugzilla.redhat.com/show_bug.cgi?id=1061152 [RDO][openstack-foreman-installer]: When attempting to deploy foreman-client on Fedora20 - It seems puppet fails to connect the mariadb.
- 1061329 openstack-keystone https://bugzilla.redhat.com/show_bug.cgi?id=1061329 Keystone returns HTTP 400 as SQLAlchemy raises None exceptions - 1061343 python-django-horizon https://bugzilla.redhat.com/show_bug.cgi?id=1061343 horizon errors out when adding myself to a group - 1061349 openstack-neutron https://bugzilla.redhat.com/show_bug.cgi?id=1061349 neutron-dhcp-agent won't start due to a missing import of module named stevedore - 1061356 openstack-neutron https://bugzilla.redhat.com/show_bug.cgi?id=1061356 neutron-dhcp-agent fails to start with the error: Package 'openstack-neutron' isn't signed with proper key - 1061378 openstack-neutron https://bugzilla.redhat.com/show_bug.cgi?id=1061378 Neutron ML2 DB configuration fails with an error - 1061574 openstack-foreman-installer https://bugzilla.redhat.com/show_bug.cgi?id=1061574 foreman server installer script should be in PATH and have options - 1061613 openstack-foreman-installer https://bugzilla.redhat.com/show_bug.cgi?id=1061613 [RDO][Openstack-foreman-installer]: puppet throws errors during registration of clients against foreman-server. 
(during: foreman_client.sh) - 1061689 openstack-packstack https://bugzilla.redhat.com/show_bug.cgi?id=1061689 Horizon SSL is disabled by Nagios configuration via packstack - 1061710 openstack-packstack https://bugzilla.redhat.com/show_bug.cgi?id=1061710 openstack-cinder-backup service not enabled on boot - 1061750 openstack-neutron https://bugzilla.redhat.com/show_bug.cgi?id=1061750 Neutron Load Balancer VIP Creation Fails - 1061753 openstack-packstack https://bugzilla.redhat.com/show_bug.cgi?id=1061753 Create an option in packstack to increase verbosity level of libvirt - 1061760 openstack-foreman-installer https://bugzilla.redhat.com/show_bug.cgi?id=1061760 openstack-cinder-backup service is not enabled on boot - 1061768 openstack-keystone https://bugzilla.redhat.com/show_bug.cgi?id=1061768 Keystone-all - ImportError: cannot import name deploy - 1061818 python-django-horizon https://bugzilla.redhat.com/show_bug.cgi?id=1061818 when flavor is too small the error is not clear ======================================================================= -- /kashyap From kchamart at redhat.com Thu Feb 6 08:57:15 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 6 Feb 2014 14:27:15 +0530 Subject: [Rdo-list] List of bugs logged during IceHouse-M2 RDO test days[4, 5 FEB 2014] In-Reply-To: <20140206073318.GB22264@tesla.pnq.redhat.com> References: <20140206073318.GB22264@tesla.pnq.redhat.com> Message-ID: <20140206085715.GA28584@tesla.pnq.redhat.com> Attached (in plain text) are etherpad notes from the test days. /kashyap On Thu, Feb 06, 2014 at 01:03:18PM +0530, Kashyap Chamarthy wrote: > Heya, > > Below are bugs (18) posted during the test days (Feb 4,5) -- > http://goo.gl/upTA2F. We ran this second test day from less than a month > than the previous one, and ran into few blocker issues that consumed > time in re-installatios/ future dependency resolutions. 
>
> NOTES
>
> - Alan Pevec (upstream OpenStack stable branch maintainer) suggested
>   skipping Milestone-2 during future releases, as there's too much
>   flux upstream; instead he proposed going from Milestone-1 straight
>   to Milestone-3 (6 MAR 2014, from the current release schedule),
>   which is more palatable as it'll be FeatureFreeze and StringFreeze.
>
> List of bugs in plain text, in case someone wants to comment in-line:
> =======================================================================
> - 1061005 python-django-horizon
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061005
>   error page when clicking on the Orchestration -> Stacks link
>
> - 1061055 openstack-neutron
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061055
>   neutron-dhcp-agent dead
>
> - 1061137 openstack-keystone
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061137
>   Wrong exception raised when trying to create an existing tenant
>
> - 1061152 openstack-foreman-installer
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061152
>   [RDO][openstack-foreman-installer]: When attempting to deploy
>   foreman-client on Fedora20 - It seems puppet fails to connect to
>   mariadb.
>
> - 1061329 openstack-keystone
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061329
>   Keystone returns HTTP 400 as SQLAlchemy raises None exceptions
>
> - 1061343 python-django-horizon
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061343
>   horizon errors out when adding myself to a group
>
> - 1061349 openstack-neutron
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061349
>   neutron-dhcp-agent won't start due to a missing import of module
>   named stevedore
>
> - 1061356 openstack-neutron
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061356
>   neutron-dhcp-agent fails to start with the error: Package
>   'openstack-neutron' isn't signed with proper key
>
> - 1061378 openstack-neutron
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061378
>   Neutron ML2 DB configuration fails with an error
>
> - 1061574 openstack-foreman-installer
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061574
>   foreman server installer script should be in PATH and have options
>
> - 1061613 openstack-foreman-installer
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061613
>   [RDO][Openstack-foreman-installer]: puppet throws errors during
>   registration of clients against foreman-server.
>   (during: foreman_client.sh)
>
> - 1061689 openstack-packstack
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061689
>   Horizon SSL is disabled by Nagios configuration via packstack
>
> - 1061710 openstack-packstack
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061710
>   openstack-cinder-backup service not enabled on boot
>
> - 1061750 openstack-neutron
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061750
>   Neutron Load Balancer VIP Creation Fails
>
> - 1061753 openstack-packstack
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061753
>   Create an option in packstack to increase verbosity level of
>   libvirt
>
> - 1061760 openstack-foreman-installer
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061760
>   openstack-cinder-backup service is not enabled on boot
>
> - 1061768 openstack-keystone
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061768
>   Keystone-all - ImportError: cannot import name deploy
>
> - 1061818 python-django-horizon
>   https://bugzilla.redhat.com/show_bug.cgi?id=1061818
>   when flavor is too small the error is not clear
> =======================================================================
>
> --
> /kashyap
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list

--
/kashyap
-------------- next part --------------
NOTES captured during RDO test day 4,5-FEB-2014
===============================================

== Collecting Neutron debug info ==

* Fetch the scripts to collect Neutron debug info, and run them:
  * $ wget https://raw.github.com/larsks/neutron-diag/master/gather-network-info https://raw.github.com/larsks/neutron-diag/master/gather-neutron-info
  * $ chmod +x gather-network-info gather-neutron-info
  * $ ./gather-network-info
  * $ ./gather-neutron-info
* Repeat step 1 for all the nodes involved in your OpenStack setup.
* Upload the resulting tar.gz files to a location.
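The per-node collection steps above can be driven from one place. Below is a minimal sketch that only *builds* the command sequence (the node names are placeholders, and printing stands in for an ssh transport you would supply yourself); the script URLs are the ones given in the notes:

```python
# Sketch: generate the diag-collection commands from the notes above.
# Assumptions: node names are hypothetical; replace the print() with
# your own ssh/fabric transport to actually run them on each node.

SCRIPTS = [
    "https://raw.github.com/larsks/neutron-diag/master/gather-network-info",
    "https://raw.github.com/larsks/neutron-diag/master/gather-neutron-info",
]

def diag_commands():
    """Return the shell commands to run on each OpenStack node."""
    names = [url.rsplit("/", 1)[-1] for url in SCRIPTS]
    cmds = ["wget %s" % " ".join(SCRIPTS),
            "chmod +x %s" % " ".join(names)]
    cmds += ["./%s" % n for n in names]          # run both gatherers
    return cmds

for node in ["controller", "compute1"]:          # hypothetical node list
    for cmd in diag_commands():
        print("[%s] %s" % (node, cmd))
```

After the scripts run, the resulting tar.gz files still need to be copied off each node and uploaded, as the last step in the notes says.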
== Dependency Gathering for EPEL7 ==

* Sub-etherpad: https://etherpad.openstack.org/p/nova-deps-epel7-rdo-i-td2

== Fedora Cloud Images ==

* Fedora 20
  * QCOW2: $ wget http://download.fedoraproject.org/pub/fedora/linux/releases/20/Images/x86_64/Fedora-x86_64-20-20131211.1-sda.qcow2
  * RAW: $ wget http://download.fedoraproject.org/pub/fedora/linux/releases/20/Images/x86_64/Fedora-x86_64-20-20131211.1-sda.raw.xz
* Cirros 0.3.1
  * $ wget http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
* Import the images into Glance
  * $ glance image-create --name fedora20 --is-public true --disk-format qcow2 --container-format bare < Fedora-x86_64-20-20131211.1-sda.qcow2

== Known Issues ==

* ERROR : Error appeared during Puppet run: 192.168.1.13_mysql.pp
  Error: Could not start Service[mysqld]: Execution of '/sbin/service mariadb start' returned 1:
  On Fedora 20, the latest version of mariadb-server-5.5.34-3.fc20.x86_64 does not run after installation (unless updated). It fails because /var/log/mariadb/mariadb.log cannot be accessed by mysqld_safe, as it's owned by root:root; so use the previous version (-2.fc20), or chown mysql:mysql /var/log/mariadb/mariadb.log.
  * https://bugzilla.redhat.com/show_bug.cgi?id=1061045
  * workaround before packstack: $ yum install -y mariadb-server && chown mysql:mysql /var/log/mariadb/mariadb.log
* The mysql puppet module changes the log file to /var/log/mysqld.log, but it does not create it, so the service at start-up does not have enough permission to create it.
  * workaround before packstack: $ touch /var/log/mysqld.log; chown mysql:mysql /var/log/mysqld.log && chcon -u system_u -r object_r -t mysqld_log_t /var/log/mysqld.log
* F20: multiple swift, neutron and nova-network related selinux policy violations: #1061137

== Bugs ==

* https://bugzilla.redhat.com/show_bug.cgi?id=1061055 -- [el6] neutron-dhcp-agent dead - fixed
* https://bugzilla.redhat.com/show_bug.cgi?id=1061137 -- Wrong exception raised when trying to create an existing tenant
* https://bugzilla.redhat.com/show_bug.cgi?id=1061152 -- [RDO][openstack-foreman-installer]: When attempting to deploy foreman-client on Fedora20 - It seems that the puppet modules connect to mysql instead of connecting to mariadb.
* https://bugzilla.redhat.com/show_bug.cgi?id=1061613 -- [RDO][Openstack-foreman-installer]: puppet throws errors during registration of clients against foreman-server.
  (during: foreman_client.sh)
* https://bugzilla.redhat.com/show_bug.cgi?id=1061349 -- neutron-dhcp-agent won't start due to a missing import of module named stevedore
* https://bugzilla.redhat.com/show_bug.cgi?id=1061356 -- neutron-dhcp-agent fails to start with the error: Package 'openstack-neutron' isn't signed with proper key
* https://bugzilla.redhat.com/show_bug.cgi?id=1061378 -- Neutron ML2 DB configuration fails with an error
* https://bugzilla.redhat.com/show_bug.cgi?id=1061343 -- horizon errors out when adding myself to a group
* https://bugzilla.redhat.com/show_bug.cgi?id=1061750 -- Neutron Load Balancer VIP Creation Fails
* https://bugzilla.redhat.com/show_bug.cgi?id=1061689 -- Horizon SSL is disabled by Nagios configuration via packstack

== Suggestions ==

* apevec: Next time I propose to skip the test day at m2 - it's still too much flux upstream. m3 is more reasonable, it's feature freeze. +1 +1

== Foreman ==

**Should work only with RHEL6.5!** (both: Foreman-Server and Foreman-Client)

1) 'yum install openstack-foreman-installer': Sometimes from the TLV office we're getting "no more mirrors":
   Error Downloading Packages:
   1:ruby193-rubygem-activesupport-3.2.8-6.el6.noarch: failure: ruby193-rubygem-activesupport-3.2.8-6.el6.noarch.rpm from foreman: [Errno 256] No more mirrors to try.
   foreman-1.3.2-1.el6.noarch: failure: foreman-1.3.2-1.el6.noarch.rpm from foreman: [Errno 256] No more mirrors to try.
   - try running "yum clean metadata" or "yum clean all" before running yum (beagles)

2) During run of foreman-server.sh: errors during the run --> http://pastebin.com/ipaeacDU http://pastebin.com/rp2pYL2z

3) During foreman_client.sh (harmless errors?):
   a) Error: /File[/var/lib/puppet/lib/puppet/type/cinder_api_paste_ini.rb]/ensure: change from absent to file failed: execution expired
   b) Error: Could not retrieve plugin: execution expired
   c) Error: /File[/var/lib/puppet/lib/puppet/provider/vcsrepo/cvs.rb]/ensure: change from absent to file failed: execution expired
   d) Error: /File[/var/lib/puppet/lib/puppet/type/neutron_network.rb]/ensure: change from absent to file failed: execution expired
   (Bz#1061613)

4) During a run of puppet agent -t -v (on Fedora 20): it seems that the puppet used by the foreman-server attempts to connect to mysql on F20 foreman-client machines (instead of connecting to mariadb?):
   Error: Could not prefetch database_grant provider 'mysql': Execution of '/usr/bin/mysql --defaults-file=/root/.my.cnf mysql -Be describe user' returned 1: Could not open required defaults file: /root/.my.cnf
   http://pastebin.com/t4jj9L9P (Bz#1061152)

I've managed to deploy: foreman-server + neutron-controller + neutron-networker + neutron-compute (all against RHEL6.5).
Need to investigate (during foreman_client.sh): http://pastebin.com/xUZJJG5a

From rohara at redhat.com Thu Feb 6 16:20:17 2014
From: rohara at redhat.com (Ryan O'Hara)
Date: Thu, 6 Feb 2014 10:20:17 -0600
Subject: [Rdo-list] Concerning Rabbits
In-Reply-To: <20140204174420.GB19248@redhat.com>
References: <871tzihjnk.fsf@redhat.com>
	<20140204174420.GB19248@redhat.com>
Message-ID: <20140206162016.GW19248@redhat.com>

A new wiki page is available here:

http://openstack.redhat.com/RabbitMQ

On Tue, Feb 04, 2014 at 11:44:20AM -0600, Ryan O'Hara wrote:
> On Tue, Feb 04, 2014 at 12:28:31PM -0500, John Eckersberg wrote:
> > (In the spirit of "Concerning Hobbits")
>
> Thanks for kicking off this thread.
>
> > Ryan O'Hara and I have been investigating RabbitMQ as it pertains to RDO
> > recently. There has been a lot of discussion on several disparate
> > threads, so I wanted to try and capture it on the list for the benefit
> > of everyone.
> > > > Ryan has been working on getting RabbitMQ running in a multi-node HA > > configuration. I won't steal his thunder, and he can speak to it better > > than I can, so I'll defer to him on the details. > > Right now I have a 3-node RabbitMQ cluster with mirrored queues. I > also put haproxy in front of this cluster and pointed all relevant > OpenStack services at the virtual IP address. This seems to work well > so far. Details instructions coming soon. > > > As for me, I've been working on el7 support and bug squashing along the > > way. > > > > The first bug[1] causes the daemon to load incredibly slow, or outright > > fail by timing out. This is due to the SELinux policy disallowing > > name_bind on ports lower than 32768. RabbitMQ tries to name_bind to a > > port starting at 10000, and increments if it fails. So if you have > > SELinux in enforcing mode, you'll get 22768 AVC denials in the log > > before it finally starts. > > > > The second bug[2] causes the daemon to intermittently fail to start due > > to a race condition in the creation of the erlang cookie file. This > > happens only the first time the service starts. Really this is an > > Erlang bug, but there's a workaround for the RabbitMQ case. > > > > I've submitted patches for both issues. Until those get merged in, I've > > rebuilt[3] RabbitMQ for F20 which includes the fixes. > > Awesome. > > > Beyond bugs, I've also built out RabbitMQ and all the build/runtime > > dependencies for el7. I have a yum repo[4] on my fedorapeople page > > containing all the bits. This is all the stuff that is presently > > missing from EPEL7. In time, I would hope the maintainers build all > > this stuff, but for now it'll work for testing. You will also need the > > EPEL 7 Beta repository[5] enabled. > > > > As a side note, I built everything using mock with a local override repo > > on my workstation. 
I've not used copr before but it seems relevant to > > this sort of thing, so if it's any benefit I'll look to rebuilt the el7 > > stack there for easier consumption. > > > > Hopefully this helps get the discussion into one place, and provide a > > baseline for further investigation by everyone interested in RabbitMQ. > > John and I will be putting all of this in a wiki page on the RDO > website in the very near future. I'll send email to the list when it > is ready to be reviewed. > > Ryan > > > John. > > > > --- > > [1] Is really two bugzillas, but the same bug: > > [1a] https://bugzilla.redhat.com/show_bug.cgi?id=998682 > > [1b] https://bugzilla.redhat.com/show_bug.cgi?id=1032595 > > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1059913 > > [3] http://jeckersb.fedorapeople.org/rabbitmq-server-3.1.5-3.fc20.noarch.rpm > > [4] http://jeckersb.fedorapeople.org/rabbitmq-el7/ > > [5] http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/ > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From pmyers at redhat.com Fri Feb 7 01:00:37 2014 From: pmyers at redhat.com (Perry Myers) Date: Thu, 06 Feb 2014 20:00:37 -0500 Subject: [Rdo-list] Concerning Rabbits In-Reply-To: <20140206162016.GW19248@redhat.com> References: <871tzihjnk.fsf@redhat.com> <20140204174420.GB19248@redhat.com> <20140206162016.GW19248@redhat.com> Message-ID: <52F43035.1050100@redhat.com> On 02/06/2014 11:20 AM, Ryan O'Hara wrote: > A new wiki page is available here: > > http://openstack.redhat.com/RabbitMQ Excellent stuff guys :) So... 
next step would be working with the Packstack folks to get RabbitMQ as
an option for basic Packstack configs (not HA stuff, just vanilla
RabbitMQ).

And then beyond that, I know there is some work going on around HA
deployments via Foreman. Probably we should make the HA RabbitMQ
configuration that Ryan outlined an option for Foreman deployments?

Perry

From morazi at redhat.com Fri Feb 7 01:05:30 2014
From: morazi at redhat.com (Mike Orazi)
Date: Thu, 06 Feb 2014 20:05:30 -0500
Subject: [Rdo-list] Concerning Rabbits
In-Reply-To: <52F43035.1050100@redhat.com>
References: <871tzihjnk.fsf@redhat.com>
	<20140204174420.GB19248@redhat.com>
	<20140206162016.GW19248@redhat.com>
	<52F43035.1050100@redhat.com>
Message-ID: <52F4315A.7000408@redhat.com>

On 02/06/2014 08:00 PM, Perry Myers wrote:
> On 02/06/2014 11:20 AM, Ryan O'Hara wrote:
>> A new wiki page is available here:
>>
>> http://openstack.redhat.com/RabbitMQ
>
> Excellent stuff guys :)
>
> So... next step would be working with the Packstack folks to get
> RabbitMQ as an option for basic Packstack configs (not HA stuff, just
> vanilla RabbitMQ)
>
> And then beyond that, I know there is some work going on around HA
> deployments via Foreman. Probably we should make the HA RabbitMQ
> configuration that Ryan outlined an option for Foreman deployments?
>
> Perry

+1, great work guys!

On the packstack work, I'd like to ask: if there is a significant amount
of non-puppet work needed to make the rabbitmq configuration happen,
let's have some discussion on the list about it.

We have hit a few cases where things deviated between packstack and
foreman simply because of a fair bit of work happening outside puppet.
If we have to do some things outside of puppet, both teams need to make
sure to allocate time to do the work (or find a reasonable way to share
non-puppet code).
Thanks, Mike From pmyers at redhat.com Fri Feb 7 01:08:20 2014 From: pmyers at redhat.com (Perry Myers) Date: Thu, 06 Feb 2014 20:08:20 -0500 Subject: [Rdo-list] Concerning Rabbits In-Reply-To: <52F4315A.7000408@redhat.com> References: <871tzihjnk.fsf@redhat.com> <20140204174420.GB19248@redhat.com> <20140206162016.GW19248@redhat.com> <52F43035.1050100@redhat.com> <52F4315A.7000408@redhat.com> Message-ID: <52F43204.4020400@redhat.com> On 02/06/2014 08:05 PM, Mike Orazi wrote: > On 02/06/2014 08:00 PM, Perry Myers wrote: >> On 02/06/2014 11:20 AM, Ryan O'Hara wrote: >>> A new wiki page is available here: >>> >>> http://openstack.redhat.com/RabbitMQ >> >> Excellent stuff guys :) >> >> So... next step would be working with the Packstack folks to get >> RabbitMQ as an option for basic Packstack configs (not HA stuff, just >> vanilla RabbitMQ) >> >> And then beyond that, I know there is some work going on around HA >> deployments via Foreman. Probably we should make the HA RabbitMQ >> configuration that Ryan outlined an option for Foreman deployments? >> >> Perry >> > > > +1, great work guys! > > On the packstack work, I'd like to ask if there is a significant amount > of non-puppet work needed to make the rabbitmq configuration happen > let's have some discussion around on the list about it. > > We have hit a few cases where we things deviated between packstack and > foreman simply because of a fair bit of work happening outside puppet. > If we have to do some things outside of puppet both teams need to make > sure to allocate time to do the work (or find a reasonable way to share > non-puppet code). Good point. Packstack should have very little logic in it. Nominally we should treat it as a way to apply puppet modules to machines. 
Anything we implement directly in Packstack python code would need to be re-implemented in Foreman or other places, so we should try to minimize that duplication From jeckersb at redhat.com Fri Feb 7 02:17:22 2014 From: jeckersb at redhat.com (John Eckersberg) Date: Thu, 06 Feb 2014 21:17:22 -0500 Subject: [Rdo-list] Concerning Rabbits In-Reply-To: <52F4315A.7000408@redhat.com> References: <871tzihjnk.fsf@redhat.com> <20140204174420.GB19248@redhat.com> <20140206162016.GW19248@redhat.com> <52F43035.1050100@redhat.com> <52F4315A.7000408@redhat.com> Message-ID: <877g97ll8t.fsf@redhat.com> Mike Orazi writes: > On the packstack work, I'd like to ask if there is a significant amount > of non-puppet work needed to make the rabbitmq configuration happen > let's have some discussion around on the list about it. I suspect this will be pretty standard, puppet-only updates. Install package, enable service, maybe twiddle a config or two. John. From rohara at redhat.com Fri Feb 7 05:10:41 2014 From: rohara at redhat.com (Ryan O'Hara) Date: Thu, 6 Feb 2014 23:10:41 -0600 Subject: [Rdo-list] Concerning Rabbits In-Reply-To: <52F43035.1050100@redhat.com> References: <871tzihjnk.fsf@redhat.com> <20140204174420.GB19248@redhat.com> <20140206162016.GW19248@redhat.com> <52F43035.1050100@redhat.com> Message-ID: <20140207051040.GA11378@redhat.com> On Thu, Feb 06, 2014 at 08:00:37PM -0500, Perry Myers wrote: > On 02/06/2014 11:20 AM, Ryan O'Hara wrote: > > A new wiki page is available here: > > > > http://openstack.redhat.com/RabbitMQ > > Excellent stuff guys :) > > So... next step would be working with the Packstack folks to get > RabbitMQ as an option for basic Packstack configs (not HA stuff, just > vanilla RabbitMQ) I think the basic RabbitMQ (non-HA) option in Packstack should be very straightforward. Have packstack give the option of qpid or rabbitmq (qpid default). If RabbitMQ is chosen, allow for the host/address and port to be specified. 
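To make the backend-selection idea above concrete, here is a minimal sketch (not actual Packstack code; the function and its option names are hypothetical). The impl_qpid driver string matches the configs quoted later in this thread; impl_kombu is the corresponding rabbit driver in the same oslo rpc namespace:

```python
# Hypothetical sketch of qpid-vs-rabbitmq parameter handling; the real
# configuration logic would live in the Puppet module, as noted above.

RPC_DRIVERS = {
    "qpid": "neutron.openstack.common.rpc.impl_qpid",
    "rabbitmq": "neutron.openstack.common.rpc.impl_kombu",
}

def amqp_config(backend="qpid", host="127.0.0.1", port=5672):
    """Return the broker settings a service config would receive."""
    if backend not in RPC_DRIVERS:
        raise ValueError("unsupported AMQP backend: %s" % backend)
    return {
        "rpc_backend": RPC_DRIVERS[backend],   # written to each service's conf
        "host": host,                          # broker address
        "port": port,                          # 5672 for both qpid and rabbit
    }

print(amqp_config("rabbitmq", host="192.169.142.49"))
```

The point of keeping it this thin is the one made in the thread: Packstack only picks values, and the Puppet modules do the actual configuration.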
> And then beyond that, I know there is some work going on around HA > deployments via Foreman. Probably we should make the HA RabbitMQ > configuration that Ryan outlined an option for Foreman deployments? Maybe. I was forwarded a message earlier this week about mirrored queues being a bit unstable. I'd like to see more testing with RDO and RabbitMQ w/ mirrored queues, but I also understand that having the means to deploy it makes it easier to test. Ryan From rohara at redhat.com Fri Feb 7 05:15:25 2014 From: rohara at redhat.com (Ryan O'Hara) Date: Thu, 6 Feb 2014 23:15:25 -0600 Subject: [Rdo-list] Concerning Rabbits In-Reply-To: <52F4315A.7000408@redhat.com> References: <871tzihjnk.fsf@redhat.com> <20140204174420.GB19248@redhat.com> <20140206162016.GW19248@redhat.com> <52F43035.1050100@redhat.com> <52F4315A.7000408@redhat.com> Message-ID: <20140207051525.GB11378@redhat.com> On Thu, Feb 06, 2014 at 08:05:30PM -0500, Mike Orazi wrote: > On 02/06/2014 08:00 PM, Perry Myers wrote: > > On 02/06/2014 11:20 AM, Ryan O'Hara wrote: > >> A new wiki page is available here: > >> > >> http://openstack.redhat.com/RabbitMQ > > > > Excellent stuff guys :) > > > > So... next step would be working with the Packstack folks to get > > RabbitMQ as an option for basic Packstack configs (not HA stuff, just > > vanilla RabbitMQ) > > > > And then beyond that, I know there is some work going on around HA > > deployments via Foreman. Probably we should make the HA RabbitMQ > > configuration that Ryan outlined an option for Foreman deployments? > > > > Perry > > > > > +1, great work guys! > > On the packstack work, I'd like to ask if there is a significant amount > of non-puppet work needed to make the rabbitmq configuration happen > let's have some discussion around on the list about it. The packstack guys can give an official answer, but I don't think it would require much logic. 
I think John's original write-up showed that you can drop RabbitMQ in and simply replace the rpc_driver parameter in a handful of config files (glance being a slight deviation). > We have hit a few cases where we things deviated between packstack and > foreman simply because of a fair bit of work happening outside puppet. > If we have to do some things outside of puppet both teams need to make > sure to allocate time to do the work (or find a reasonable way to share > non-puppet code). The other bit of outstanding work is to fix a few bugs with rabbitmq-server. John has a good handle on this, and I think we're just waiting to get packages built. We can start by getting the rabbitmq puppet modules into packstack and foreman. Ryan From aortega at redhat.com Fri Feb 7 10:49:03 2014 From: aortega at redhat.com (Alvaro Lopez Ortega) Date: Fri, 7 Feb 2014 11:49:03 +0100 Subject: [Rdo-list] Concerning Rabbits In-Reply-To: <20140207051525.GB11378@redhat.com> References: <871tzihjnk.fsf@redhat.com> <20140204174420.GB19248@redhat.com> <20140206162016.GW19248@redhat.com> <52F43035.1050100@redhat.com> <52F4315A.7000408@redhat.com> <20140207051525.GB11378@redhat.com> Message-ID: <631192AF-C773-43E3-BB70-1718BA95E1F2@redhat.com> On 07 Feb 2014, at 06:15, Ryan O'Hara wrote: > On Thu, Feb 06, 2014 at 08:05:30PM -0500, Mike Orazi wrote: >> On 02/06/2014 08:00 PM, Perry Myers wrote: >>> On 02/06/2014 11:20 AM, Ryan O'Hara wrote: >>>> A new wiki page is available here: >>>> >>>> http://openstack.redhat.com/RabbitMQ >>> >>> Excellent stuff guys :) >>> >>> So... next step would be working with the Packstack folks to get >>> RabbitMQ as an option for basic Packstack configs (not HA stuff, just >>> vanilla RabbitMQ) >>> >>> And then beyond that, I know there is some work going on around HA >>> deployments via Foreman. Probably we should make the HA RabbitMQ >>> configuration that Ryan outlined an option for Foreman deployments? >> >> +1, great work guys! 
>> >> On the packstack work, I'd like to ask if there is a significant amount >> of non-puppet work needed to make the rabbitmq configuration happen >> let's have some discussion around on the list about it. > > The packstack guys can give an official answer, but I don't think it > would require much logic. I think John's original write-up showed that > you can drop RabbitMQ in and simply replace the rpc_driver parameter > in a handful of config files (glance being a slight deviation). Unless we find something unexpected, it should only require some parameter handling code in Python. The configuration logic would live in the Puppet module. Best, Alvaro -------------- next part -------------- An HTML attachment was scrubbed... URL: From shake.chen at gmail.com Fri Feb 7 14:27:43 2014 From: shake.chen at gmail.com (Shake Chen) Date: Fri, 7 Feb 2014 22:27:43 +0800 Subject: [Rdo-list] Concerning Rabbits In-Reply-To: <631192AF-C773-43E3-BB70-1718BA95E1F2@redhat.com> References: <871tzihjnk.fsf@redhat.com> <20140204174420.GB19248@redhat.com> <20140206162016.GW19248@redhat.com> <52F43035.1050100@redhat.com> <52F4315A.7000408@redhat.com> <20140207051525.GB11378@redhat.com> <631192AF-C773-43E3-BB70-1718BA95E1F2@redhat.com> Message-ID: I just want to know how long the packstack can support RabbitMQ? On Fri, Feb 7, 2014 at 6:49 PM, Alvaro Lopez Ortega wrote: > On 07 Feb 2014, at 06:15, Ryan O'Hara wrote: > > On Thu, Feb 06, 2014 at 08:05:30PM -0500, Mike Orazi wrote: > > On 02/06/2014 08:00 PM, Perry Myers wrote: > > On 02/06/2014 11:20 AM, Ryan O'Hara wrote: > > A new wiki page is available here: > > http://openstack.redhat.com/RabbitMQ > > > Excellent stuff guys :) > > So... next step would be working with the Packstack folks to get > RabbitMQ as an option for basic Packstack configs (not HA stuff, just > vanilla RabbitMQ) > > And then beyond that, I know there is some work going on around HA > deployments via Foreman. 
Probably we should make the HA RabbitMQ > configuration that Ryan outlined an option for Foreman deployments? > > > +1, great work guys! > > On the packstack work, I'd like to ask if there is a significant amount > of non-puppet work needed to make the rabbitmq configuration happen > let's have some discussion around on the list about it. > > > The packstack guys can give an official answer, but I don't think it > would require much logic. I think John's original write-up showed that > you can drop RabbitMQ in and simply replace the rpc_driver parameter > in a handful of config files (glance being a slight deviation). > > > Unless we find something unexpected, it should only require some parameter > handling code in Python. The configuration logic would live in the Puppet > module. > > Best, > Alvaro > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -- Shake Chen -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From bderzhavets at hotmail.com Sun Feb 9 07:45:00 2014
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Sun, 9 Feb 2014 02:45:00 -0500
Subject: [Rdo-list] Neutron configuration files for a two node Neutron+GRE+OVS
In-Reply-To: <52E9DFF1.7020301@redhat.com>
References: <52E9DFF1.7020301@redhat.com>
Message-ID: 

In a previous successful attempt to reproduce your schema on real F20
boxes, I was able to start neutron-server with:

[root at dfw02 neutron(keystone_admin)]$ cat plugin.ini | grep -v ^# | grep -v ^$
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.1.127
[agent]
[securitygroup]
[DATABASE]
sql_connection = mysql://root:password at dfw02.localdomain/ovs_neutron
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

and finally:

[root at dfw02 ~]# ovs-vsctl show
7d78d536-3612-416e-bce6-24605088212f
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port "p37p1"
            Interface "p37p1"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.127", out_key=flow, remote_ip="192.168.1.137"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.0.0"

Compute node instances were able to obtain floating and internal IP
addresses. I am running this two-node cluster in the meantime with all
`yum update`s after 01/23/2014.

In a new attempt on a fresh F20 instance, neutron-server may be started
only with:

[DATABASE]
sql_connection = mysql://root:password at localhost/ovs_neutron

A block like:

        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.147", out_key=flow, remote_ip="192.168.1.157"}

doesn't appear in `ovs-vsctl show` output.
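The missing GRE block described above can be checked mechanically. Below is a small sketch that scans `ovs-vsctl show` text for interfaces of type gre and reports their remote_ip options; the sample string is abbreviated from the working output quoted earlier in this message, and in practice you would feed it the live command output (e.g. via subprocess):

```python
# Sketch: detect GRE tunnel ports in `ovs-vsctl show` output.
# An empty result means the expected tunnel block is missing.
import re

def gre_ports(ovs_vsctl_show_output):
    """Return the remote_ip of every interface with type gre."""
    remotes = []
    for block in ovs_vsctl_show_output.split("Interface")[1:]:
        if "type: gre" in block:
            m = re.search(r'remote_ip="([^"]+)"', block)
            remotes.append(m.group(1) if m else None)
    return remotes

sample = '''
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.127", out_key=flow, remote_ip="192.168.1.137"}
'''
print(gre_ports(sample))   # expect ['192.168.1.137']
```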
Nothing works on Compute; all configs are the same as in the first
attempt. The error I get from mysql is "Access denied for
'root'@'new_hostname'". new_hostname, as before, is in /etc/hosts:

192.168.1.147 new_hostname.localdomain new_hostname

and in /etc/hostname:

new_hostname.localdomain

To me it looks like a bug that neutron-server has to be bound to
127.0.0.1; it is, actually, connected with the MariaDB database. I made
2 attempts to reproduce it from scratch, building the Controller, and
every time the neutron-server start-up limitation came up.

Kashyap, my question to you: am I correct in my conclusions that the
neutron-server mysql credentials affect the network abilities of
Neutron, or is the libvirtd daemon the real carrier for metadata, and
would the schema work only on a non-default libvirt network for virtual
machines? Then a working real cluster is a kind of miracle. It's under
testing on a daily basis.

Thanks.
Boris.

PS. All snapshots done on the first cluster (successfully working in
the meantime with all updates accepted from yum) may be viewed here:
http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html

> Date: Thu, 30 Jan 2014 10:45:29 +0530
> From: kchamart at redhat.com
> To: rdo-list at redhat.com
> Subject: [Rdo-list] Neutron configuration files for a two node Neutron+GRE+OVS
>
> Heya,
>
> Just in case it's useful for someone, here are my working Neutron
> configuration files (and iptables rules) for a two node set-up based
> on IceHouse-M2 on Fedora-20,
>
>   - Controller node: Nova, Keystone (token-based auth), Cinder,
>     Glance, Neutron (using Open vSwitch plugin and GRE tunneling).
>
>   - Compute node: Nova (nova-compute), Neutron (openvswitch-agent)
>
>
> Controller node Neutron configurations
> ======================================
>
> 1.
neutron.conf
> ---------------
>
> $ cat /etc/neutron/neutron.conf | grep -v ^$ | grep -v ^#
> [DEFAULT]
> core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
> rpc_backend = neutron.openstack.common.rpc.impl_qpid
> control_exchange = neutron
> qpid_hostname = 192.169.142.49
> auth_strategy = keystone
> allow_overlapping_ips = True
> dhcp_lease_duration = 120
> allow_bulk = True
> qpid_port = 5672
> qpid_heartbeat = 60
> qpid_protocol = tcp
> qpid_tcp_nodelay = True
> qpid_reconnect_limit=0
> qpid_reconnect_interval_max=0
> qpid_reconnect_timeout=0
> qpid_reconnect=True
> qpid_reconnect_interval_min=0
> qpid_reconnect_interval=0
> debug = False
> verbose = False
> [quotas]
> [agent]
> [keystone_authtoken]
> admin_tenant_name = services
> admin_user = neutron
> admin_password = fedora
> auth_host = 192.169.142.49
> auth_port = 35357
> auth_protocol = http
> auth_uri=http://192.169.142.49:5000/
> [database]
> [service_providers]
> [AGENT]
> root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
>
> 2. (OVS) plugin.ini
> -------------------
>
> $ cat /etc/neutron/plugin.ini | grep -v ^$ | grep -v ^#
> [ovs]
> tenant_network_type = gre
> tunnel_id_ranges = 1:1000
> enable_tunneling = True
> integration_bridge = br-int
> tunnel_bridge = br-tun
> local_ip = 192.169.142.49
> [agent]
> [securitygroup]
> [DATABASE]
> sql_connection = mysql://neutron:fedora at node1-controller/ovs_neutron
> sql_max_retries=10
> reconnect_interval=2
> sql_idle_timeout=3600
> [SECURITYGROUP]
> firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
>
> 3. dhcp_agent.ini
> -----------------
>
> $ cat /etc/neutron/dhcp_agent.ini | grep -v ^$ | grep -v ^#
> [DEFAULT]
> interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
> handle_internal_only_routers = TRUE
> external_network_bridge = br-ex
> use_namespaces = True
> dnsmasq_config_file = /etc/neutron/dnsmasq.conf
>
> 4.
l3_agent.ini > --------------- > > $ cat /etc/neutron/dhcp_agent.ini | grep -v ^$ | grep -v ^# > [DEFAULT] > interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver > handle_internal_only_routers = TRUE > external_network_bridge = br-ex > use_namespaces = True > dnsmasq_config_file = /etc/neutron/dnsmasq.conf > > 5. dnsmasq.conf > --------------- > > This logs dnsmasq output is to a file, instead of journalctl): > > $ cat /etc/neutron/dnsmasq.conf | grep -v ^$ | grep -v ^# > log-facility = /var/log/neutron/dnsmasq.log > log-dhcp > > 6. api-paste.ini > ---------------- > > $ cat /etc/neutron/api-paste.ini | grep -v ^$ | grep -v ^# > [composite:neutron] > use = egg:Paste#urlmap > /: neutronversions > /v2.0: neutronapi_v2_0 > [composite:neutronapi_v2_0] > use = call:neutron.auth:pipeline_factory > noauth = extensions neutronapiapp_v2_0 > keystone = authtoken keystonecontext extensions neutronapiapp_v2_0 > [filter:keystonecontext] > paste.filter_factory = neutron.auth:NeutronKeystoneContext.factory > [filter:authtoken] > paste.filter_factory = > keystoneclient.middleware.auth_token:filter_factory > admin_user=neutron > auth_port=35357 > admin_password=fedora > auth_protocol=http > auth_uri=http://192.169.142.49:5000/ > admin_tenant_name=services > auth_host = 192.169.142.49 > [filter:extensions] > paste.filter_factory = > neutron.api.extensions:plugin_aware_extension_middleware_factory > [app:neutronversions] > paste.app_factory = neutron.api.versions:Versions.factory > [app:neutronapiapp_v2_0] > paste.app_factory = neutron.api.v2.router:APIRouter.factory > > 7. 
metadata_agent.ini > --------------------- > > $ cat /etc/neutron/metadata_agent.ini | grep -v ^$ | grep -v ^# > [DEFAULT] > auth_url = http://192.169.142.49:35357/v2.0/ > auth_region = regionOne > admin_tenant_name = services > admin_user = neutron > admin_password = fedora > nova_metadata_ip = 192.168.142.49 > nova_metadata_port = 8775 > metadata_proxy_shared_secret = fedora > > > Compute node Neutron configurations > =================================== > > 1. neutron.conf > --------------- > > $ cat /etc/neutron/neutron.conf | grep -v ^$ | grep -v ^# > [DEFAULT] > core_plugin > =neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 > rpc_backend = neutron.openstack.common.rpc.impl_qpid > qpid_hostname = 192.169.142.49 > auth_strategy = keystone > allow_overlapping_ips = True > qpid_port = 5672 > debug = True > verbose = True > [quotas] > [agent] > [keystone_authtoken] > admin_tenant_name = services > admin_user = neutron > admin_password = fedora > auth_host = 192.169.142.49 > [database] > [service_providers] > [AGENT] > root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf > > 2. (OVS) plugin.ini > ------------------- > > $ cat plugin.ini | grep -v ^$ | grep -v ^# > [ovs] > tenant_network_type = gre > tunnel_id_ranges = 1:1000 > enable_tunneling = True > integration_bridge = br-int > tunnel_bridge = br-tun > local_ip = 192.169.142.57 > [DATABASE] > sql_connection = mysql://neutron:fedora at node1-controller/ovs_neutron > [SECURITYGROUP] > firewall_driver = > neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver > [agent] > [securitygroup] > > 3. 
metadata_agent.ini > --------------------- > > $ cat metadata_agent.ini | grep -v ^$ | grep -v ^# > [DEFAULT] > auth_url = http://localhost:5000/v2.0 > auth_region = RegionOne > admin_tenant_name = %SERVICE_TENANT_NAME% > admin_user = %SERVICE_USER% > admin_password = %SERVICE_PASSWORD% > > > iptables rules on both Controller and Compute nodes > =================================================== > > iptables on Controller node > --------------------------- > > $ cat /etc/sysconfig/iptables > *filter > :INPUT ACCEPT [0:0] > :FORWARD ACCEPT [0:0] > :OUTPUT ACCEPT [0:0] > -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT > -A INPUT -p icmp -j ACCEPT > -A INPUT -i lo -j ACCEPT > -A INPUT -p tcp -m multiport --dports 3260 -m comment --comment "001 > cinder incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 80 -m comment --comment "001 > horizon incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 9292 -m comment --comment "001 > glance incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 5000,35357 -m comment > --comment "001 keystone incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 3306 -m comment --comment "001 > mariadb incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 6080 -m comment --comment "001 > novncproxy incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 8770:8780 -m comment --comment > "001 novaapi incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 9696 -m comment --comment "001 > neutron incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 5672 -m comment --comment "001 > qpid incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 8700 -m comment --comment "001 > metadata incoming" -j ACCEPT > -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT > -A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT > -A INPUT -j REJECT --reject-with icmp-host-prohibited > -A INPUT -p gre -j ACCEPT > -A OUTPUT -p gre -j ACCEPT > -A FORWARD -j REJECT --reject-with 
icmp-host-prohibited > COMMIT > > iptables on Compute node > ------------------------ > > $ cat /etc/sysconfig/iptables > *filter > :INPUT ACCEPT [0:0] > :FORWARD ACCEPT [0:0] > :OUTPUT ACCEPT [0:0] > -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT > -A INPUT -p icmp -j ACCEPT > -A INPUT -i lo -j ACCEPT > -A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT > -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT > -A INPUT -p gre -j ACCEPT > -A INPUT -j REJECT --reject-with icmp-host-prohibited > -A OUTPUT -p gre -j ACCEPT > -A FORWARD -j REJECT --reject-with icmp-host-prohibited > COMMIT > > > > [1] Also here -- > http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt > > > -- > /kashyap > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Sun Feb 9 12:20:58 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sun, 9 Feb 2014 07:20:58 -0500 Subject: [Rdo-list] RE(2): Neutron configuration files for a two node Neutron+GRE+OVS In-Reply-To: <52E9DFF1.7020301@redhat.com> References: <52E9DFF1.7020301@redhat.com> Message-ID: To add to the first RE: on compute node 192.168.1.137 :- [root at dfw01 neutron]# cat plugin.ini [ovs] tenant_network_type = gre tunnel_id_ranges = 1:1000 enable_tunneling = True integration_bridge = br-int tunnel_bridge = br-tun local_ip = 192.168.1.137 [agent] [securitygroup] [DATABASE] sql_connection = mysql://root:password at 192.168.1.127/ovs_neutron [SECURITYGROUP] firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver This can no longer work, and it affects the neutron-openvswitch agent service on the compute node. I am using the native F20 repos. It was never possible for me to start neutron-server with "mysql://neutron:fedora at hostname/ovs_neutron"
. Starting as root causes some trouble, but it's acceptable. Thanks. Boris. > Date: Thu, 30 Jan 2014 10:45:29 +0530 > From: kchamart at redhat.com > To: rdo-list at redhat.com > Subject: [Rdo-list] Neutron configuration files for a two node Neutron+GRE+OVS > > Heya, > > Just in case if it's useful for someone, here are my working Neutron > configuration files (and iptables rules) for a two node set-up based on > IceHouse-M2 on Fedora-20, > > - Controller node: Nova, Keystone (token-based auth), Cinder, > Glance, Neutron (using Open vSwitch plugin and GRE tunneling). > > - Compute node: Nova (nova-compute), Neutron (openvswitch-agent) > > > Controller node Neutron configurations > ====================================== > > 1. neutron.conf > --------------- > > $ cat /etc/neutron/neutron.conf | grep -v ^$ | grep -v ^# > [DEFAULT] > core_plugin > =neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 > rpc_backend = neutron.openstack.common.rpc.impl_qpid > control_exchange = neutron > qpid_hostname = 192.169.142.49 > auth_strategy = keystone > allow_overlapping_ips = True > dhcp_lease_duration = 120 > allow_bulk = True > qpid_port = 5672 > qpid_heartbeat = 60 > qpid_protocol = tcp > qpid_tcp_nodelay = True > qpid_reconnect_limit=0 > qpid_reconnect_interval_max=0 > qpid_reconnect_timeout=0 > qpid_reconnect=True > qpid_reconnect_interval_min=0 > qpid_reconnect_interval=0 > debug = False > verbose = False > [quotas] > [agent] > [keystone_authtoken] > admin_tenant_name = services > admin_user = neutron > admin_password = fedora > auth_host = 192.169.142.49 > auth_port = 35357 > auth_protocol = http > auth_uri=http://192.169.142.49:5000/ > [database] > [service_providers] > [AGENT] > root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf > > 2. 
(OVS) plugin.ini > ------------------- > > $ cat /etc/neutron/plugin.ini | grep -v ^$ | grep -v ^# > [ovs] > tenant_network_type = gre > tunnel_id_ranges = 1:1000 > enable_tunneling = True > integration_bridge = br-int > tunnel_bridge = br-tun > local_ip = 192.169.142.49 > [agent] > [securitygroup] > [DATABASE] > sql_connection = mysql://neutron:fedora at node1-controller/ovs_neutron > sql_max_retries=10 > reconnect_interval=2 > sql_idle_timeout=3600 > [SECURITYGROUP] > firewall_driver = > neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver > > 3. dhcp_agent.ini > ----------------- > > $ cat /etc/neutron/dhcp_agent.ini | grep -v ^$ | grep -v ^# > [DEFAULT] > interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver > handle_internal_only_routers = TRUE > external_network_bridge = br-ex > use_namespaces = True > dnsmasq_config_file = /etc/neutron/dnsmasq.conf > > 4. l3_agent.ini > --------------- > > $ cat /etc/neutron/dhcp_agent.ini | grep -v ^$ | grep -v ^# > [DEFAULT] > interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver > handle_internal_only_routers = TRUE > external_network_bridge = br-ex > use_namespaces = True > dnsmasq_config_file = /etc/neutron/dnsmasq.conf > > 5. dnsmasq.conf > --------------- > > This logs dnsmasq output is to a file, instead of journalctl): > > $ cat /etc/neutron/dnsmasq.conf | grep -v ^$ | grep -v ^# > log-facility = /var/log/neutron/dnsmasq.log > log-dhcp > > 6. 
api-paste.ini > ---------------- > > $ cat /etc/neutron/api-paste.ini | grep -v ^$ | grep -v ^# > [composite:neutron] > use = egg:Paste#urlmap > /: neutronversions > /v2.0: neutronapi_v2_0 > [composite:neutronapi_v2_0] > use = call:neutron.auth:pipeline_factory > noauth = extensions neutronapiapp_v2_0 > keystone = authtoken keystonecontext extensions neutronapiapp_v2_0 > [filter:keystonecontext] > paste.filter_factory = neutron.auth:NeutronKeystoneContext.factory > [filter:authtoken] > paste.filter_factory = > keystoneclient.middleware.auth_token:filter_factory > admin_user=neutron > auth_port=35357 > admin_password=fedora > auth_protocol=http > auth_uri=http://192.169.142.49:5000/ > admin_tenant_name=services > auth_host = 192.169.142.49 > [filter:extensions] > paste.filter_factory = > neutron.api.extensions:plugin_aware_extension_middleware_factory > [app:neutronversions] > paste.app_factory = neutron.api.versions:Versions.factory > [app:neutronapiapp_v2_0] > paste.app_factory = neutron.api.v2.router:APIRouter.factory > > 7. metadata_agent.ini > --------------------- > > $ cat /etc/neutron/metadata_agent.ini | grep -v ^$ | grep -v ^# > [DEFAULT] > auth_url = http://192.169.142.49:35357/v2.0/ > auth_region = regionOne > admin_tenant_name = services > admin_user = neutron > admin_password = fedora > nova_metadata_ip = 192.168.142.49 > nova_metadata_port = 8775 > metadata_proxy_shared_secret = fedora > > > Compute node Neutron configurations > =================================== > > 1. 
neutron.conf > --------------- > > $ cat /etc/neutron/neutron.conf | grep -v ^$ | grep -v ^# > [DEFAULT] > core_plugin > =neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 > rpc_backend = neutron.openstack.common.rpc.impl_qpid > qpid_hostname = 192.169.142.49 > auth_strategy = keystone > allow_overlapping_ips = True > qpid_port = 5672 > debug = True > verbose = True > [quotas] > [agent] > [keystone_authtoken] > admin_tenant_name = services > admin_user = neutron > admin_password = fedora > auth_host = 192.169.142.49 > [database] > [service_providers] > [AGENT] > root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf > > 2. (OVS) plugin.ini > ------------------- > > $ cat plugin.ini | grep -v ^$ | grep -v ^# > [ovs] > tenant_network_type = gre > tunnel_id_ranges = 1:1000 > enable_tunneling = True > integration_bridge = br-int > tunnel_bridge = br-tun > local_ip = 192.169.142.57 > [DATABASE] > sql_connection = mysql://neutron:fedora at node1-controller/ovs_neutron > [SECURITYGROUP] > firewall_driver = > neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver > [agent] > [securitygroup] > > 3. 
metadata_agent.ini > --------------------- > > $ cat metadata_agent.ini | grep -v ^$ | grep -v ^# > [DEFAULT] > auth_url = http://localhost:5000/v2.0 > auth_region = RegionOne > admin_tenant_name = %SERVICE_TENANT_NAME% > admin_user = %SERVICE_USER% > admin_password = %SERVICE_PASSWORD% > > > iptables rules on both Controller and Compute nodes > =================================================== > > iptables on Controller node > --------------------------- > > $ cat /etc/sysconfig/iptables > *filter > :INPUT ACCEPT [0:0] > :FORWARD ACCEPT [0:0] > :OUTPUT ACCEPT [0:0] > -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT > -A INPUT -p icmp -j ACCEPT > -A INPUT -i lo -j ACCEPT > -A INPUT -p tcp -m multiport --dports 3260 -m comment --comment "001 > cinder incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 80 -m comment --comment "001 > horizon incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 9292 -m comment --comment "001 > glance incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 5000,35357 -m comment > --comment "001 keystone incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 3306 -m comment --comment "001 > mariadb incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 6080 -m comment --comment "001 > novncproxy incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 8770:8780 -m comment --comment > "001 novaapi incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 9696 -m comment --comment "001 > neutron incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 5672 -m comment --comment "001 > qpid incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 8700 -m comment --comment "001 > metadata incoming" -j ACCEPT > -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT > -A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT > -A INPUT -j REJECT --reject-with icmp-host-prohibited > -A INPUT -p gre -j ACCEPT > -A OUTPUT -p gre -j ACCEPT > -A FORWARD -j REJECT --reject-with 
icmp-host-prohibited > COMMIT > > iptables on Compute node > ------------------------ > > $ cat /etc/sysconfig/iptables > *filter > :INPUT ACCEPT [0:0] > :FORWARD ACCEPT [0:0] > :OUTPUT ACCEPT [0:0] > -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT > -A INPUT -p icmp -j ACCEPT > -A INPUT -i lo -j ACCEPT > -A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT > -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT > -A INPUT -p gre -j ACCEPT > -A INPUT -j REJECT --reject-with icmp-host-prohibited > -A OUTPUT -p gre -j ACCEPT > -A FORWARD -j REJECT --reject-with icmp-host-prohibited > COMMIT > > > > [1] Also here -- > http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt > > > -- > /kashyap > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Mon Feb 10 05:20:40 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 10 Feb 2014 10:50:40 +0530 Subject: [Rdo-list] Neutron configuration files for a two node Neutron+GRE+OVS In-Reply-To: References: <52E9DFF1.7020301@redhat.com> Message-ID: <20140210052040.GC10226@tesla.redhat.com> (Please convince your mail client to wrap long lines, it's very difficult to read your emails.) On Sun, Feb 09, 2014 at 02:45:00AM -0500, Boris Derzhavets wrote: [. . .] > In new attempt on fresh F20 instance Neutron-server may be started only with > > [DATABASE] > sql_connection = mysql://root:password at localhost/ovs_neutron > > Block like :- > > Port "gre-2" > Interface "gre-2" > type: gre > options: {in_key=flow, local_ip="192.168.1.147", out_key=flow, remote_ip="192.168.1.157"} > > doesn't appear in `ovs-vsctl show` output . Nothing works on Compute > all Configs are the the same as in first attempt. 
> > The error from mysql, which I get "Access denied fror > > 'root"@'new_hostname' new_hostname as before is in /etc/hosts > > > > 192.168.1.147 new_hostname.localdomain new_hostname > > and in /etc/hostname > new_hostname.localdomain > > For me it looks like bug for neutron-server to be bind to 127.0.0.1 > ,actually, connected with MariaDB database. It could possibly be. Please write a clear bug with full details and proper reproducer steps. > > > I did 2 attempts to reproduce it from scratch building Controller and > every time Neutron-server start up limitation came up. > Kashyap, my question to you :- > > Am I correct in my conclusions regarding Neutron-Server mysql > credentials affecting network abilities of Neutron or libvirtd daemon > is a real carrier for metadata and schema would work only on > non-default libvirt's network for virtual machines ? > I don't follow your question. Please rephrase, or if you're convinced it is a bug, please file one with as much clear detail as possible https://wiki.openstack.org/wiki/BugFilingRecommendations > > Then working real cluster is a kind of miracle. It's under testing on > daily basis. Thanks for testing. > > Thanks. > Boris. > > PS. All snapshots done on first Cluster (successfully working in > meantime with all updates accepted from yum) may be viewed here :- > > http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html > -- /kashyap From bderzhavets at hotmail.com Mon Feb 10 13:18:37 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Mon, 10 Feb 2014 08:18:37 -0500 Subject: [Rdo-list] Neutron configuration files for a two node Neutron+GRE+OVS In-Reply-To: <20140210052040.GC10226@tesla.redhat.com> References: <52E9DFF1.7020301@redhat.com>, , <20140210052040.GC10226@tesla.redhat.com> Message-ID: I had to manually update the table below for the root & nova passwords at the FQDN host :- [root at dfw01 ~(keystone_admin)]$ mysql -u root -p Enter password: Welcome to the MariaDB monitor.
Commands end with ; or \g. Your MariaDB connection id is 35 Server version: 5.5.34-MariaDB MariaDB Server Copyright (c) 2000, 2013, Oracle, Monty Program Ab and others. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SELECT User, Host, Password FROM mysql.user;
+----------+-------------------+-------------------------------------------+
| User     | Host              | Password                                  |
+----------+-------------------+-------------------------------------------+
| root     | localhost         | *E0DC09146F1310B49A34199B04274A9EED6F9EC7 |
| root     | dfw01.localdomain | *E0DC09146F1310B49A34199B04274A9EED6F9EC7 |  <= it's critical
| root     | 127.0.0.1         | *E0DC09146F1310B49A34199B04274A9EED6F9EC7 |
| root     | ::1               | *E0DC09146F1310B49A34199B04274A9EED6F9EC7 |
| keystone | localhost         | *936E8F7AB2E21B47F6C9A7E5D9FE14DBA2255E5A |
| keystone | %                 | *936E8F7AB2E21B47F6C9A7E5D9FE14DBA2255E5A |
| glance   | localhost         | *CC67CAF178CB9A07D756302E0BBFA3B0165DFD49 |
| glance   | %                 | *CC67CAF178CB9A07D756302E0BBFA3B0165DFD49 |
| cinder   | localhost         | *028F8298C041368BA08A280AA8D1EF895CB68D5C |
| cinder   | %                 | *028F8298C041368BA08A280AA8D1EF895CB68D5C |
| neutron  | localhost         | *4DF421833991170108648F1103CD74FCB66BBE9E |
| neutron  | %                 | *03A31004769F9E4F94ECEEA61AA28D9649084839 |
| nova     | localhost         | *0BE3B501084D35F4C66DD3AC4569EAE5EA738212 |
| nova     | %                 | *0BE3B501084D35F4C66DD3AC4569EAE5EA738212 |
| nova     | dfw01.localdomain | *0BE3B501084D35F4C66DD3AC4569EAE5EA738212 |  <= it's critical
+----------+-------------------+-------------------------------------------+
15 rows in set (0.00 sec)

Otherwise, nothing is going to work beyond "allinone" testing. Once it's done, your schema works on an F20 two-node real cluster. I am going to file a bug regarding these updates, because I believe they should be done behind the scenes. The updated and inserted rows are what allow the nova-compute and neutron-openvswitch-agent services to connect remotely to the controller. Thanks Boris.
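For what it's worth, rows like the two marked "it's critical" can also be created with GRANT statements instead of editing mysql.user by hand; a hedged sketch only, using the dfw01.localdomain host name from the listing above and placeholder passwords (not the real ones):

```sql
-- Sketch: create the per-FQDN accounts that the remote nova-compute and
-- neutron services authenticate as. Passwords here are placeholders.
GRANT ALL PRIVILEGES ON *.* TO 'root'@'dfw01.localdomain'
    IDENTIFIED BY 'root_password' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'dfw01.localdomain'
    IDENTIFIED BY 'nova_password';
-- FLUSH PRIVILEGES is only required after editing the grant tables
-- directly with UPDATE/INSERT; GRANT takes effect immediately.
```

Unlike a direct UPDATE of mysql.user, GRANT keeps the password hash and privilege columns consistent for you.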
> Date: Mon, 10 Feb 2014 10:50:40 +0530 > From: kchamart at redhat.com > To: bderzhavets at hotmail.com > CC: rdo-list at redhat.com > Subject: Re: [Rdo-list] Neutron configuration files for a two node Neutron+GRE+OVS > > (Please convince your mail client to wrap long lines, it's very > difficult to read your emails.) > > On Sun, Feb 09, 2014 at 02:45:00AM -0500, Boris Derzhavets wrote: > > [. . .] > > > In new attempt on fresh F20 instance Neutron-server may be started only with > > > > [DATABASE] > > sql_connection = mysql://root:password at localhost/ovs_neutron > > > > Block like :- > > > > Port "gre-2" > > Interface "gre-2" > > type: gre > > options: {in_key=flow, local_ip="192.168.1.147", out_key=flow, remote_ip="192.168.1.157"} > > > > doesn't appear in `ovs-vsctl show` output . Nothing works on Compute > > all Configs are the the same as in first attempt. > > > > > The error from mysql, which I get "Access denied fror > > 'root"@'new_hostname' new_hostname as before is in /etc/hosts > > > > > > 192.168.1.147 new_hostname.localdomain new_hostname > > > > and in /etc/hostname > > new_hostname.localdomain > > > > For me it looks like bug for neutron-server to be bind to 127.0.0.1 > > ,actually, connected with MariaDB database. > > It could possibly be. Please write a clear bug with full details and > proper reproducer steps. > > > > > > > > I did 2 attempts to reproduce it from scratch building Controller and > > every time Neutron-server start up limitation came up. > > > > Kashyap, my question to you :- > > > > Am I correct in my conclusions regarding Neutron-Server mysql > > credentials affecting network abilities of Neutron or libvirtd daemon > > is a real carrier for metadata and schema would work only on > > non-default libvirt's network for virtual machines ? > > > > I don't follow your question. 
Please rephrase or if you're convinced, > please write bug with as much clear details as possible > > https://wiki.openstack.org/wiki/BugFilingRecommendations > > > > Then working real cluster is a kind of miracle. It's under testing on > > daily basis. > > Thanks for testing. > > > > Thanks. > > Boris. > > > > PS. All snapshots done on first Cluster (successfully working in > > meantime with all updates accepted from yum) may be viewed here :- > > > > http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html > > > > -- > /kashyap -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Mon Feb 10 14:57:13 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 10 Feb 2014 09:57:13 -0500 Subject: [Rdo-list] [Rdo-newsletter] February RDO Community Newsletter Message-ID: <52F8E8C9.2060806@redhat.com> My apologies. I sent this last week, and addressed it to the wrong place. So the first few paragraphs are somewhat out of date. Thanks for being part of the RDO community! Upcoming Conferences This week I'm traveling in Belgium for FOSDEM and several other nearby events. It's a busy few days. FOSDEM - https://fosdem.org/2014/ - was in Brussels, Belgium, and around 5000 attendees were expected to attend, including a good contingent of the RDO and OpenStack community. There was a lot of great OpenStack content, which you can see listed at http://fnords.wordpress.com/2014/01/09/openstack-fosdem-14/ Immediately following FOSDEM, Config Management Camp is currently ongoing in the nearby town of Ghent, February 3 and 4. That's where I am right now, as I write this. You can read more about this event at http://cfgmgmtcamp.eu/ Config Management Camp doesn't have content directly about OpenStack, but of course Packstack is built on top of Puppet, and you can also deploy RDO with The Foreman, and there are a number of talks about both of those projects here.
Following Config Management Camp, Red Hat is sponsoring Infrastructure.next, a one-day event - February 5 - around evolving tools and projects for managing large-scale IT infrastructure, from storage to configuration management to virtualization infrastructure, all the way to Infrastructure-as-a-Service (IaaS). You can see the schedule of talks at http://lanyrd.com/2014/infranext/ Later in February, RDO will have a booth at SCALE - Southern California Linux Expo. That's February 21-23, in Los Angeles. You can find out more about SCALE at https://www.socallinuxexpo.org/scale12x On the 21st - the first day of SCALE - Red Hat will be sponsoring another Infrastructure.Next, where I'll be reprising my Ceilometer talk, and there will again be a lot of great content for people thinking about moving their services to a cloud infrastructure. Other Upcoming Events With the OpenStack Icehouse milestone 2 out on January 23rd, we'll be conducting another RDO test day on February 4th and 5th to hammer out any problems with this release. Find out more, and sign up to participate, at http://openstack.redhat.com/RDO_test_day_Icehouse_milestone_2 Derek Higgins will be presenting "Deploying OpenStack with Triple-O and Tuskar" at the OpenStack France Meet-up in Paris on February 11th: http://www.meetup.com/OpenStack-France/events/161704432/ On Thursday, February 27th, Lars Kellogg-Stedman will be leading a Google Hangout in which he will be doing a walk-through of a multinode deployment with packstack. See http://openstack.redhat.com/Hangouts#Upcoming_Hangouts for details of this event, or follow us on Twitter (@rdocommunity) for a reminder closer to the event. CentOS Cloud SIG In January, Red Hat announced new involvement in the CentOS project.
You can read more about that announcement at http://www.redhat.com/about/news/press-archive/2014/1/red-hat-and-centos-join-forces On the heels of that announcement, OpenStack and other cloud infrastructure projects have started talking about what this means for us, and how we can create variants of CentOS that make it easier to deploy these infrastructures. This conversation is happening on the centos-devel mailing list ( http://lists.centos.org/mailman/listinfo/centos-devel ) so please join that list if you want to participate in the effort. You can also watch the "Office Hours" hangout at http://www.youtube.com/watch?v=VKKYY_5SOWw in which members of the various cloud infrastructure projects discuss the way forward in creating a Cloud SIG (Special Interest Group) to produce these variants and liveCD distributions. IRC Meetings We've started several regular IRC meetings, to move what we're doing more into the public view. The weekly community team meeting, which had been happening on the phone, has moved to IRC. If you'd like to see what the community team is up to, come to the #rdo channel on Freenode at 9am Eastern USA time, each Tuesday. We also post a meeting summary to the rdo-list mailing list ( http://www.redhat.com/mailman/listinfo/rdo-list ) for anyone who can't make it to the meeting. We've also started a bug triage meeting on IRC, in which we attempt to at least assign all of the open bugs that nobody's working on yet. The first of these was on Wednesday January 15th, and upcoming ones will be announced on the rdo-list mailing list until we figure out what the right cadence is for those meetings. Community events like these are posted on the RDO Community Google calendar. 
If you use Google Calendar, you can paste the following into the entry box under "Other calendars" where it says "Add a friend's calendar" - 6m0up994frfg2td6dpmubtn31s at group.calendar.google.com If you use some other calendaring software, you can subscribe using the ICS address: http://www.google.com/calendar/ical/6m0up994frfg2td6dpmubtn31s%40group.calendar.google.com/public/basic.ics OpenStack Foundation elections The OpenStack foundation recently held elections for the board of directors. We'd like to extend a special congratulations to Mark McLoughlin who was elected as an Individual Director. You can see the full board at http://lists.openstack.org/pipermail/foundation/2014-January/001616.html One of the cool things about OpenStack is that although the directors represent various companies, they first represent OpenStack, and make decisions that benefit the project and the foundation first, and their companies come after. So we're proud that someone from the RDO community is on the board, and we're thrilled that he considers the well-being of the project to be the first priority. In Closing Thanks again for being part of the RDO community. We've hardly had a moment to catch our breath this year, and it looks like the pace is going to continue. So bring your friends and colleagues along to share the work, and to help take this exciting technology to the next level. 
Once again, you can keep up with what's going on by following us on Twitter - @rdocommunity - or the rdo-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list -- Rich Bowen, for the RDO community rbowen at redhat.com http://openstack.redhat.com/ @rdocommunity _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From kchamart at redhat.com Tue Feb 11 10:07:04 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 11 Feb 2014 15:37:04 +0530 Subject: [Rdo-list] [Upcoming] RDO Bug triage day: 19FEB-2014, UTC 14:00 Message-ID: <20140211100704.GC32306@tesla.redhat.com> As we roughly agreed during our last triage meeting to have a recurring triage every third Wednesday of the month, the next one is lined up for 19-FEB-2014. Just a reminder: as we find some spare cycles, we can continue to triage ahead of time and proceed with the general bugzilla workflow, without having to wait for the triage day. (I notice from my bugzilla spam that some folks are already doing this.) I will send a reminder note as a reply to this thread on the day before the next triage day. Some convenience information below. Bugs ---- - List of un-triaged bugs (NEW state) -- http://goo.gl/NqW2LN - List of all ASSIGNED bugs (with and without Keyword 'Triaged') -- http://goo.gl/oFY9vX - List of all ON_QA bugs -- http://goo.gl/CZX92r The above info is also here: http://openstack.redhat.com/RDO-BugTriage Timezones --------- - If your local time is set correctly, running the below command will convert UTC to your timezone $ date -d '2014-02-19 14:00 UTC' - General UTC Howto: https://fedoraproject.org/wiki/Infrastructure/UTCHowto Zodbot ------ - Useful commands while co-ordinating an IRC meeting: http://fedoraproject.org/wiki/Zodbot#Meeting_Functions If I missed noting something, please add to this thread.
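As a concrete illustration of the conversion command above (the -u flag and the format string are my additions, to pin the output to UTC; drop -u to see the time in your local timezone):

```shell
# Show the triage time explicitly in UTC; without -u, GNU date
# renders the same instant in the local timezone instead.
date -u -d '2014-02-19 14:00 UTC' '+%Y-%m-%d %H:%M %Z'
# Prints: 2014-02-19 14:00 UTC
```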
-- /kashyap From mburns at redhat.com Tue Feb 11 15:01:42 2014 From: mburns at redhat.com (Mike Burns) Date: Tue, 11 Feb 2014 10:01:42 -0500 Subject: [Rdo-list] Weekly RDO meeting minutes -- 2014-02-11 Message-ID: <52FA3B56.2040508@redhat.com> Minutes: http://meetbot.fedoraproject.org/rdo/2014-02-11/rdo.2014-02-11-14.02.html Minutes (text): http://meetbot.fedoraproject.org/rdo/2014-02-11/rdo.2014-02-11-14.02.txt Log: http://meetbot.fedoraproject.org/rdo/2014-02-11/rdo.2014-02-11-14.02.log.html ======================== #rdo: RDO weekly meeting ======================== Meeting started by mburned at 14:02:15 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2014-02-11/rdo.2014-02-11-14.02.log.html . Meeting summary --------------- * LINK: https://etherpad.openstack.org/p/rdo_community_manager_sync (rbowen, 14:02:17) * agenda (mburned, 14:02:29) * LINK: https://etherpad.openstack.org/p/rdo_community_manager_sync (mburned, 14:02:36) * Test Day (mburned, 14:04:58) * test day was held Feb 4-5 (mburned, 14:05:16) * LINK: https://www.redhat.com/archives/rdo-list/2014-February/msg00034.html (kashyap, 14:06:27) * fair number of issues reported that are being triaged and handled (mburned, 14:08:05) * LINK: https://www.redhat.com/archives/rdo-list/2014-February/msg00033.html (mburned, 14:08:12) * LINK: https://www.redhat.com/archives/rdo-list/2014-February/msg00034.html (mburned, 14:08:15) * ACTION: rbowen will get the Fedora test day wiki page set up for the M3 test day. 
(rbowen, 14:08:53) * IDEA: for future releases, go to M1 and M3 for test day and skip M2 (too many test days, too quick cadence) (mburned, 14:19:39) * debate on whether to do test day for final (mburned, 14:19:51) * also should investigate fedora test day process more (mburned, 14:20:04) * Icehouse M3 is 6-March (mburned, 14:20:21) * tentatively set test day for March 18-19 (mburned, 14:20:49) * Hangout (mburned, 14:21:42) * LINK: http://openstack.redhat.com/Hangouts (mburned, 14:21:54) * next scheduled for Feb 27 with larsks (mburned, 14:22:04) * featuring a multinode setup demo (mburned, 14:22:20) * rbowen will hopefully be scheduling a few more in the coming days (mburned, 14:22:50) * if you have a proposal or would like to volunteer, please see this etherpad (mburned, 14:23:20) * LINK: https://etherpad.openstack.org/p/rdo_hangouts (mburned, 14:23:24) * or contact rbowen (mburned, 14:23:34) * Newsletter (mburned, 14:23:52) * February newsletter sent yesterday (was sent earlier to incorrect list) (mburned, 14:24:12) * topics for the march newsletter can be proposed here: https://etherpad.openstack.org/p/rdo_mar_2014_newsletter (mburned, 14:24:36) * Conference review (mburned, 14:25:19) * FOSDEM Infrastructure.Next and Config Mgmt camp were held last week (mburned, 14:25:38) * rbowen interviewed Ohad about foreman, should be published soon (mburned, 14:26:02) * LINK: https://www.redhat.com/archives/rdo-list/2014-February/msg00049.html (mburned, 14:27:39) * the newsletter ^^ (mburned, 14:27:48) * good content on many topics (mburned, 14:28:41) * iaas/cloud room was packed the entire time (mburned, 14:28:53) * rbowen spoke at Infrastrucutre.next about ceilometer (mburned, 14:30:19) * generated a lot of comments on what ceilometer is doing wrong (mburned, 14:30:38) * rbowen hoping to bring them into openstack community to help improve (mburned, 14:31:08) * ACTION: rbowen to summarize and post to rdo-list soon (mburned, 14:31:44) * printed RDO materials unfortunately 
did not show up, plans underway to fix this in the future (mburned, 14:35:55) * Upcoming events (mburned, 14:36:13) * SCALE will be held next weekend in LA, RDO will have a table (mburned, 14:36:58) * also another Infrastructure.Next event (rbowen will present a re-written ceilometer talk there) (mburned, 14:37:41) * if you will be in attendance and are willing to help with the RDO table, please contact rbowen (mburned, 14:38:01) * ACTION: rbowen to update http://openstack.redhat.com/Events (mburned, 14:39:45) * ACTION: rbowen and mburned to review list of upcoming "cloud" events, and talk next week about where we might want to have a presence. (rbowen, 14:41:10) * CentOS Cloud SIG (mburned, 14:41:58) * not much action in the last week or 2 on this (mburned, 14:42:09) * mburned meeting with kbsingh this week to start really driving this forward (mburned, 14:42:29) * mburned has been asked to take a driving role in the SIG (mburned, 14:42:49) * should have more updates next week (mburned, 14:43:05) * Cloud Instance SIG is also up and running and mburned will be participating there as well (mburned, 14:43:39) * Cloud Instance is about generating CentOS images for use in a cloud infrastructure (mburned, 14:44:15) * Bug Triage (mburned, 14:44:22) * next scheduled one is Feb 19 (mburned, 14:44:35) * Forum questions (mburned, 14:45:29) * currently 35 unanswered posts (mburned, 14:46:08) * need to start driving that down (mburned, 14:46:14) * test day structure (revisited) (mburned, 14:49:17) * we currently loosely use the Fedora test day structure (mburned, 14:49:32) * LINK: https://fedoraproject.org/wiki/Test_Day:2013-10-08_Virtualization (mburned, 14:49:39) * nothing radically new (mburned, 14:49:45) Meeting ended at 15:00:31 UTC. Action Items ------------ * rbowen will get the Fedora test day wiki page set up for the M3 test day. 
* rbowen to summarize and post to rdo-list soon * rbowen to update http://openstack.redhat.com/Events * rbowen and mburned to review list of upcoming "cloud" events, and talk next week about where we might want to have a presence. Action Items, by person ----------------------- * mburned * rbowen and mburned to review list of upcoming "cloud" events, and talk next week about where we might want to have a presence. * rbowen * rbowen will get the Fedora test day wiki page set up for the M3 test day. * rbowen to summarize and post to rdo-list soon * rbowen to update http://openstack.redhat.com/Events * rbowen and mburned to review list of upcoming "cloud" events, and talk next week about where we might want to have a presence. * **UNASSIGNED** * (none) People Present (lines said) --------------------------- * mburned (109) * rbowen (91) * kashyap (55) * morazi (8) * eggmaster (6) * larsks (3) * zodbot (3) * pmyers (2) * dneary (1) * zsun (1) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot From rbowen at redhat.com Tue Feb 11 15:02:15 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 11 Feb 2014 10:02:15 -0500 Subject: [Rdo-list] Weekly community IRC meeting Message-ID: <52FA3B77.7020600@redhat.com> The weekly community IRC meeting was this morning on #rdo on Freenode. It takes place every Tuesday at 9am Eastern US time, if you want to participate next time. Input is welcome from everyone - this isn't intended to be a closed meeting. 
Minutes: http://meetbot.fedoraproject.org/rdo/2014-02-11/rdo.2014-02-11-14.02.html Minutes (text): http://meetbot.fedoraproject.org/rdo/2014-02-11/rdo.2014-02-11-14.02.txt Log: http://meetbot.fedoraproject.org/rdo/2014-02-11/rdo.2014-02-11-14.02.log.html -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From lhh at redhat.com Tue Feb 11 18:30:56 2014 From: lhh at redhat.com (Lon Hohberger) Date: Tue, 11 Feb 2014 13:30:56 -0500 Subject: [Rdo-list] Creating rpms for Tempest In-Reply-To: <52F00224.2080503@redhat.com> References: <52EFBAC3.7070303@redhat.com> <52EFED9E.4090707@redhat.com> <52F00224.2080503@redhat.com> Message-ID: <52FA6C60.8060305@redhat.com> On 02/03/2014 03:55 PM, David Kranz wrote: > On 02/03/2014 02:27 PM, Lon Hohberger wrote: >> On 02/03/2014 10:50 AM, David Kranz wrote: >>> There has been a lot of interest in running tempest against real RDO >>> clusters. Having an rpm would make that a lot easier. There are several >>> issues peculiar to tempest that need to be resolved. >>> >>> 1. The way tempest configures itself and does test discovery depends >>> heavily on the tests being run from a directory containing the tests. >>> 2. Unlike OpenStack python client libraries, tempest changes from >>> release to release in incompatible ways, so you need "havana tempest" to >>> test a havana cluster and an "icehouse tempest" to test icehouse. >>> >>> The tempest group has little interest in changing either of these >>> behaviors. Additionally, it would be desirable if a tempest user could >>> install tempest rpms to test different RDO versions on the same machine. >>> Here is a proposal for how this could work and what the user experience >>> would be. >>> >>> Dealing with these tempest issues suggests that the tempest code >>> should be copied to /var/lib/tempest/{4.0,5.0, etc.} and the user should >>> configure a separate directory for each OpenStack cluster to be tested. 
>>> Each directory needs to contain: >>> >>> .testr.conf >>> an etc directory containing the tempest.conf and logging.conf files >>> a symlink to the tempest test modules for the appropriate version >>> a copy of the test run scripts that are in the tools directory of >>> tempest >>> >>> To help the user create such directories, there should be a global >>> executable "configure-tempest-directory" that takes an optional version. >>> If multiple versions are present in /var/lib/tempest and no version is >>> specified then the user will be asked which version to configure. >> Would it make sense to make the package name itself to have it? >> >> This way, you could say install tempest for grizzly and update tempest >> for havana - e.g.: >> >> yum install tempest-grizzly >> yum update -y tempest-havana >> >> ... and the RPMs would not obsolete each other. If we use RPM >> versioning, "yum update" will blow away testing environments of "older" >> tempest versions. >> >> We could perhaps do it using sub-rpms - which would allow us to add and >> remove tempest suites as we move along: >> >> * ship no files except perhaps license in 'tempest' itself >> and stand-up environment bits >> * use subpackages for >> tempest-grizzly/tempest-havana/tempest-icehouse/... >> * get a special exception to *not* make tempest-* child >> RPMs require a fully-versioned tempest base package all >> the time just in case you wanted to have older tempest >> for a given release >> e.g. tempest-2.0.0 >> tempest-grizzly-1.0.0 >> tempest-havana-2.0.0 >> >> ... could all be installed, and if done right, you could >> 'yum update -y tempest-grizzly' to 3.0.0 without breaking >> the tempest-havana-2.0.0 package. >> >> Just a thought. >> >>> User experience: >>> >>> 1. Install tempest rpm: yum install tempest-4.0 >> The above would mean: >> >> yum install -y tempest-havana >> yum install -y tempest-grizzly >> >> etc. >> >> >>> 2. Run configure-tempest-directory >>> 3. 
Make changes to tempest.conf to match the cluster being tested (and >>> possibly logging.conf and .testr.conf as well) >>> 4. Run tempest with desired test selection using >>> tools/pretty_tox_serial.sh or tools/ pretty_tox_serial >>> >>> Does any one have any comments/suggestions about this? >>> >> -- Lon >> > Lon, I was not clear enough but I was proposing something very similar > to your suggestion, just without subpackages which I don't understand > well. Now that I understand the rules a little better and that the base > package names should not have things that look like version numbers, I > propose: > > package-names: tempest-havana.{version stuff}, tempest-icehouse.{version > stuff} > > And a tempest-base package on which they all depend that creates > "/var/lib/tempest" and the configure-tempest-directory script. > > -David Sorry for not responding on-list (instead of just in-person) - that sounds great. -- Lon From bderzhavets at hotmail.com Wed Feb 12 08:48:32 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 12 Feb 2014 03:48:32 -0500 Subject: [Rdo-list] RE(3): Neutron configuration files for a two node Neutron+GRE+OVS In-Reply-To: <52E9DFF1.7020301@redhat.com> References: <52E9DFF1.7020301@redhat.com> Message-ID: Please, be advised :- https://bugzilla.redhat.com/show_bug.cgi?id=1064176 Thanks Boris. > Date: Thu, 30 Jan 2014 10:45:29 +0530 > From: kchamart at redhat.com > To: rdo-list at redhat.com > Subject: [Rdo-list] Neutron configuration files for a two node Neutron+GRE+OVS > > Heya, > > Just in case if it's useful for someone, here are my working Neutron > configuration files (and iptables rules) for a two node set-up based on > IceHouse-M2 on Fedora-20, > > - Controller node: Nova, Keystone (token-based auth), Cinder, > Glance, Neutron (using Open vSwitch plugin and GRE tunneling). 
> > - Compute node: Nova (nova-compute), Neutron (openvswitch-agent) > > > Controller node Neutron configurations > ====================================== > > 1. neutron.conf > --------------- > > $ cat /etc/neutron/neutron.conf | grep -v ^$ | grep -v ^# > [DEFAULT] > core_plugin > =neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 > rpc_backend = neutron.openstack.common.rpc.impl_qpid > control_exchange = neutron > qpid_hostname = 192.169.142.49 > auth_strategy = keystone > allow_overlapping_ips = True > dhcp_lease_duration = 120 > allow_bulk = True > qpid_port = 5672 > qpid_heartbeat = 60 > qpid_protocol = tcp > qpid_tcp_nodelay = True > qpid_reconnect_limit=0 > qpid_reconnect_interval_max=0 > qpid_reconnect_timeout=0 > qpid_reconnect=True > qpid_reconnect_interval_min=0 > qpid_reconnect_interval=0 > debug = False > verbose = False > [quotas] > [agent] > [keystone_authtoken] > admin_tenant_name = services > admin_user = neutron > admin_password = fedora > auth_host = 192.169.142.49 > auth_port = 35357 > auth_protocol = http > auth_uri=http://192.169.142.49:5000/ > [database] > [service_providers] > [AGENT] > root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf > > 2. (OVS) plugin.ini > ------------------- > > $ cat /etc/neutron/plugin.ini | grep -v ^$ | grep -v ^# > [ovs] > tenant_network_type = gre > tunnel_id_ranges = 1:1000 > enable_tunneling = True > integration_bridge = br-int > tunnel_bridge = br-tun > local_ip = 192.169.142.49 > [agent] > [securitygroup] > [DATABASE] > sql_connection = mysql://neutron:fedora at node1-controller/ovs_neutron > sql_max_retries=10 > reconnect_interval=2 > sql_idle_timeout=3600 > [SECURITYGROUP] > firewall_driver = > neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver > > 3. 
dhcp_agent.ini > ----------------- > > $ cat /etc/neutron/dhcp_agent.ini | grep -v ^$ | grep -v ^# > [DEFAULT] > interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver > handle_internal_only_routers = TRUE > external_network_bridge = br-ex > use_namespaces = True > dnsmasq_config_file = /etc/neutron/dnsmasq.conf > > 4. l3_agent.ini > --------------- > > $ cat /etc/neutron/l3_agent.ini | grep -v ^$ | grep -v ^# > [DEFAULT] > interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver > handle_internal_only_routers = TRUE > external_network_bridge = br-ex > use_namespaces = True > > 5. dnsmasq.conf > --------------- > > This logs dnsmasq output to a file, instead of to journalctl: > > $ cat /etc/neutron/dnsmasq.conf | grep -v ^$ | grep -v ^# > log-facility = /var/log/neutron/dnsmasq.log > log-dhcp > > 6. api-paste.ini > ---------------- > > $ cat /etc/neutron/api-paste.ini | grep -v ^$ | grep -v ^# > [composite:neutron] > use = egg:Paste#urlmap > /: neutronversions > /v2.0: neutronapi_v2_0 > [composite:neutronapi_v2_0] > use = call:neutron.auth:pipeline_factory > noauth = extensions neutronapiapp_v2_0 > keystone = authtoken keystonecontext extensions neutronapiapp_v2_0 > [filter:keystonecontext] > paste.filter_factory = neutron.auth:NeutronKeystoneContext.factory > [filter:authtoken] > paste.filter_factory = > keystoneclient.middleware.auth_token:filter_factory > admin_user=neutron > auth_port=35357 > admin_password=fedora > auth_protocol=http > auth_uri=http://192.169.142.49:5000/ > admin_tenant_name=services > auth_host = 192.169.142.49 > [filter:extensions] > paste.filter_factory = > neutron.api.extensions:plugin_aware_extension_middleware_factory > [app:neutronversions] > paste.app_factory = neutron.api.versions:Versions.factory > [app:neutronapiapp_v2_0] > paste.app_factory = neutron.api.v2.router:APIRouter.factory > > 7. 
metadata_agent.ini > --------------------- > > $ cat /etc/neutron/metadata_agent.ini | grep -v ^$ | grep -v ^# > [DEFAULT] > auth_url = http://192.169.142.49:35357/v2.0/ > auth_region = regionOne > admin_tenant_name = services > admin_user = neutron > admin_password = fedora > nova_metadata_ip = 192.169.142.49 > nova_metadata_port = 8775 > metadata_proxy_shared_secret = fedora > > > Compute node Neutron configurations > =================================== > > 1. neutron.conf > --------------- > > $ cat /etc/neutron/neutron.conf | grep -v ^$ | grep -v ^# > [DEFAULT] > core_plugin > =neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 > rpc_backend = neutron.openstack.common.rpc.impl_qpid > qpid_hostname = 192.169.142.49 > auth_strategy = keystone > allow_overlapping_ips = True > qpid_port = 5672 > debug = True > verbose = True > [quotas] > [agent] > [keystone_authtoken] > admin_tenant_name = services > admin_user = neutron > admin_password = fedora > auth_host = 192.169.142.49 > [database] > [service_providers] > [AGENT] > root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf > > 2. (OVS) plugin.ini > ------------------- > > $ cat plugin.ini | grep -v ^$ | grep -v ^# > [ovs] > tenant_network_type = gre > tunnel_id_ranges = 1:1000 > enable_tunneling = True > integration_bridge = br-int > tunnel_bridge = br-tun > local_ip = 192.169.142.57 > [DATABASE] > sql_connection = mysql://neutron:fedora at node1-controller/ovs_neutron > [SECURITYGROUP] > firewall_driver = > neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver > [agent] > [securitygroup] > > 3. 
metadata_agent.ini > --------------------- > > $ cat metadata_agent.ini | grep -v ^$ | grep -v ^# > [DEFAULT] > auth_url = http://localhost:5000/v2.0 > auth_region = RegionOne > admin_tenant_name = %SERVICE_TENANT_NAME% > admin_user = %SERVICE_USER% > admin_password = %SERVICE_PASSWORD% > > > iptables rules on both Controller and Compute nodes > =================================================== > > iptables on Controller node > --------------------------- > > $ cat /etc/sysconfig/iptables > *filter > :INPUT ACCEPT [0:0] > :FORWARD ACCEPT [0:0] > :OUTPUT ACCEPT [0:0] > -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT > -A INPUT -p icmp -j ACCEPT > -A INPUT -i lo -j ACCEPT > -A INPUT -p tcp -m multiport --dports 3260 -m comment --comment "001 > cinder incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 80 -m comment --comment "001 > horizon incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 9292 -m comment --comment "001 > glance incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 5000,35357 -m comment > --comment "001 keystone incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 3306 -m comment --comment "001 > mariadb incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 6080 -m comment --comment "001 > novncproxy incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 8770:8780 -m comment --comment > "001 novaapi incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 9696 -m comment --comment "001 > neutron incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 5672 -m comment --comment "001 > qpid incoming" -j ACCEPT > -A INPUT -p tcp -m multiport --dports 8700 -m comment --comment "001 > metadata incoming" -j ACCEPT > -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT > -A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT > -A INPUT -j REJECT --reject-with icmp-host-prohibited > -A INPUT -p gre -j ACCEPT > -A OUTPUT -p gre -j ACCEPT > -A FORWARD -j REJECT --reject-with 
icmp-host-prohibited > COMMIT > > iptables on Compute node > ------------------------ > > $ cat /etc/sysconfig/iptables > *filter > :INPUT ACCEPT [0:0] > :FORWARD ACCEPT [0:0] > :OUTPUT ACCEPT [0:0] > -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT > -A INPUT -p icmp -j ACCEPT > -A INPUT -i lo -j ACCEPT > -A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT > -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT > -A INPUT -p gre -j ACCEPT > -A INPUT -j REJECT --reject-with icmp-host-prohibited > -A OUTPUT -p gre -j ACCEPT > -A FORWARD -j REJECT --reject-with icmp-host-prohibited > COMMIT > > > > [1] Also here -- > http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt > > > -- > /kashyap > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Wed Feb 12 16:15:48 2014 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 12 Feb 2014 11:15:48 -0500 Subject: [Rdo-list] OpenStack speakers wanted for Ohio LinuxFest Message-ID: <52FB9E34.70706@redhat.com> A friend of mine is on the committee for Ohio LinuxFest - http://ohiolinux.org/ - October 24th and 25th - and although the CFP isn't open just yet, he's asked me to poke around and see if anyone around here might be interested in speaking to that crowd. It would probably be introductory kind of content, since it's a general interest event, rather than a deep OpenStack event. So, if you're in that general area (it's in Columbus, Ohio), or will be in late October, I'd be glad to make introductions. It's a fun conference, and it's a great town. 
-- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From dneary at redhat.com Thu Feb 13 09:49:04 2014 From: dneary at redhat.com (Dave Neary) Date: Thu, 13 Feb 2014 10:49:04 +0100 Subject: [Rdo-list] Logging in with Google ID on RDO broken? Message-ID: <52FC9510.3020506@redhat.com> Hi, I just tried to log into the wiki with my Google ID (as I have done in the past), and got the following error: Provider is required. UniqueID is required. The connection data has not been verified. Did something change in the configuration of the connection to Google recently? Has our API key expired or something? Thanks, Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From dneary at redhat.com Thu Feb 13 13:27:39 2014 From: dneary at redhat.com (Dave Neary) Date: Thu, 13 Feb 2014 14:27:39 +0100 Subject: [Rdo-list] Logging in with Google ID on RDO broken? In-Reply-To: <52FC9510.3020506@redhat.com> References: <52FC9510.3020506@redhat.com> Message-ID: <52FCC84B.4050300@redhat.com> Seems I'm not the only one seeing this: http://vanillaforums.org/discussion/25963/google-sign-in-question http://vanillaforums.org/discussion/25984/problems-with-google-login http://vanillaforums.org/discussion/26036/google-openid-not-working There is a patch on master pointed to in the first one. Cheers, Dave. On 02/13/2014 10:49 AM, Dave Neary wrote: > Hi, > > I just tried to log into the wiki with my Google ID (as I have done in > the past), and got the following error: > > > > Provider is required. > UniqueID is required. > The connection data has not been verified. > > Did something change in the configuration of the connection to Google > recently? Has our API key expired or something? > > Thanks, > Dave. 
> -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From lars at redhat.com Thu Feb 13 18:18:14 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 13 Feb 2014 13:18:14 -0500 Subject: [Rdo-list] A pair of bugzilla searches Message-ID: <20140213181814.GD19483@redhat.com> This is what I'm using for a list of untriaged bugs: - https://bugzilla.redhat.com/buglist.cgi?cmdtype=runnamed&list_id=2213308&namedcmd=RDO%20Untriaged This *excludes* bugs flagged NEEDINFO and both Tracking and SecurityTracking bugs. - https://bugzilla.redhat.com/buglist.cgi?bug_status=NEW&classification=Community&f1=keywords&f2=flagtypes.name&f3=assigned_to&known_name=RDO%20Untriaged&list_id=2219319&o1=nowords&o2=substring&o3=equals&product=RDO&query_based_on=RDO%20Untriaged&query_format=advanced&v1=Triaged%2CTracking%2CSecurityTracking&v2=needinfo&v3=rhos-maint%40redhat.com This is *only* bugs flagged NEEDINFO. -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From rbowen at redhat.com Thu Feb 13 19:12:06 2014 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 13 Feb 2014 14:12:06 -0500 Subject: [Rdo-list] Logging in with Google ID on RDO broken? In-Reply-To: <52FCC84B.4050300@redhat.com> References: <52FC9510.3020506@redhat.com> <52FCC84B.4050300@redhat.com> Message-ID: <52FD1906.60504@redhat.com> Thanks. I'll try this out. On 02/13/2014 08:27 AM, Dave Neary wrote: > Seems I'm not the only one seeing this: > > http://vanillaforums.org/discussion/25963/google-sign-in-question > http://vanillaforums.org/discussion/25984/problems-with-google-login > http://vanillaforums.org/discussion/26036/google-openid-not-working > > > There is a patch on master pointed to in the first one. 
> > Cheers, > Dave. > > On 02/13/2014 10:49 AM, Dave Neary wrote: >> Hi, >> >> I just tried to log into the wiki with my Google ID (as I have done in >> the past), and got the following error: >> >> >> >> Provider is required. >> UniqueID is required. >> The connection data has not been verified. >> >> Did something change in the configuration of the connection to Google >> recently? Has our API key expired or something? >> >> Thanks, >> Dave. >> -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rdo-info at redhat.com Mon Feb 17 15:50:32 2014 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 17 Feb 2014 15:50:32 +0000 Subject: [Rdo-list] [RDO] Feb 27 Hangout, multinode deployment with packstack Message-ID: <00000144408aa8cd-9f17f972-9e47-48f7-b61f-e190415622d6-000000@email.amazonses.com> rbowen started a discussion. Feb 27 Hangout, multinode deployment with packstack --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/966/feb-27-hangout-multinode-deployment-with-packstack Have a great day! From ak at cloudssky.com Mon Feb 17 22:42:21 2014 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Mon, 17 Feb 2014 23:42:21 +0100 Subject: [Rdo-list] Why do I need KVM, XEN, if I can use Docker / LXC? Message-ID: Hello together, Why someone shall use KVM, Xen or other hypervisors instead Docker / LXC Containers on OpenStack? Thanks! Arash -------------- next part -------------- An HTML attachment was scrubbed... URL: From mattdm at mattdm.org Mon Feb 17 22:55:40 2014 From: mattdm at mattdm.org (Matthew Miller) Date: Mon, 17 Feb 2014 17:55:40 -0500 Subject: [Rdo-list] Why do I need KVM, XEN, if I can use Docker / LXC? In-Reply-To: References: Message-ID: <20140217225540.GA21361@disco.bu.edu> On Mon, Feb 17, 2014 at 11:42:21PM +0100, Arash Kaffamanesh wrote: > Why someone shall use KVM, Xen or other hypervisors instead Docker / LXC > Containers on OpenStack? 
Right now, Docker / LXC provide almost no security. When we have SELinux support, it'll be better, but you're still depending on a shared kernel. Virtualization provides a much higher level of isolation. The shared kernel is also limiting in other ways; you are dependent on the host kernel to have all of the features you need. And of course if you want a non-Linux system, that's not possible. Also, I don't think there's currently a good approach for live migration with containers. -- Matthew Miller mattdm at mattdm.org From lars at redhat.com Mon Feb 17 22:56:20 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Mon, 17 Feb 2014 17:56:20 -0500 Subject: [Rdo-list] Why do I need KVM, XEN, if I can use Docker / LXC? In-Reply-To: References: Message-ID: <20140217225620.GA12964@redhat.com> On Mon, Feb 17, 2014 at 11:42:21PM +0100, Arash Kaffamanesh wrote: > Why someone shall use KVM, Xen or other hypervisors instead Docker / LXC > Containers on OpenStack? There are also a number of situations in which a container based solution may not offer sufficient flexibility. For example: - A container based solution cannot run a kernel other than the one in use on the physical host. This may not be compatible with the operating system your clients want to run. - A container based solution cannot run a completely different operating system. People often want to run Windows instances in the cloud, and occasionally other non-Linux options like FreeBSD. Additionally, the container driver for OpenStack are not yet as mature as the hypervisor drivers, and may lack the features or stability to make them an attractive alternative. -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From ak at cloudssky.com Mon Feb 17 23:21:13 2014 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Tue, 18 Feb 2014 00:21:13 +0100 Subject: [Rdo-list] Why do I need KVM, XEN, if I can use Docker / LXC? In-Reply-To: <20140217225620.GA12964@redhat.com> References: <20140217225620.GA12964@redhat.com> Message-ID: Many thanks for your great answers! So we have to wait for security, stability and live migration support to become an attractive alternative :-) Thanks again! On Mon, Feb 17, 2014 at 11:56 PM, Lars Kellogg-Stedman wrote: > On Mon, Feb 17, 2014 at 11:42:21PM +0100, Arash Kaffamanesh wrote: > > Why someone shall use KVM, Xen or other hypervisors instead Docker / LXC > > Containers on OpenStack? > > There are also a number of situations in which a container based > solution may not offer sufficient flexibility. For example: > > - A container based solution cannot run a kernel other than the one in > use on the physical host. This may not be compatible with the > operating system your clients want to run. > > - A container based solution cannot run a completely different > operating system. People often want to run Windows instances in the > cloud, and occasionally other non-Linux options like FreeBSD. > > Additionally, the container driver for OpenStack are not yet as mature > as the hypervisor drivers, and may lack the features or stability to > make them an attractive alternative. > > -- > Lars Kellogg-Stedman | larsks @ irc > Cloud Engineering / OpenStack | " " @ twitter > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shake.chen at gmail.com Tue Feb 18 05:04:29 2014 From: shake.chen at gmail.com (Shake Chen) Date: Tue, 18 Feb 2014 13:04:29 +0800 Subject: [Rdo-list] keystone problem in multi-node deploy Message-ID: Hi Now I am trying to deploy OpenStack roles across multiple nodes on CentOS 6.5, Havana, with GRE. 
I try mysql and keystone in separate node? I meet the error [ ERROR ] ERROR : Error appeared during Puppet run: 172.18.1.15_keystone.pp Error: /Stage[main]/Keystone/Exec[keystone-manage db_sync]: Failed to call refresh: keystone-manage db_sync returned 1 instead of one of [0] I check the log, show ^[[mNotice: /Stage[main]/Keystone/Keystone_config[DEFAULT/verbose]/ensure: created^[[0m ^[[mNotice: /Stage[main]/Keystone/Exec[keystone-manage db_sync]/returns: 2014-02-18 13:17:34.335 2941 CRITICAL keystone [-] ( OperationalError) (2003, "Can't connect to MySQL server on '172.18.1.13' (113)") None None^[[0m ^[[1;31mError: /Stage[main]/Keystone/Exec[keystone-manage db_sync]: Failed to call refresh: keystone-manage db_sync returned 1 instead of one of [0]^[[0m ^[[1;31mError: /Stage[main]/Keystone/Exec[keystone-manage db_sync]: keystone-manage db_sync returned 1 instead of one of [0]^ [[0m ^[[mNotice: /Stage[main]/Keystone/Exec[keystone-manage pki_setup]: Triggered 'refresh' from 14 events^[[0m ^[[mNotice: /Stage[main]/Keystone/Service[keystone]/ensure: ensure changed 'stopped' to 'running'^[[0m ^[[1;31mError: /Stage[main]/Keystone::Roles::Admin/Keystone_role[_member_]: Could not evaluate: Execution of '/usr/bin/keysto ne --endpoint http://127.0.0.1:35357/v2.0/ role-list' returned 1: An unexpected error prevented the server from fulfilling yo ur request. (OperationalError) (2003, "Can't connect to MySQL server on '172.18.1.13' (113)") None None (HTTP 500) ^[[0m -- Shake Chen -------------- next part -------------- An HTML attachment was scrubbed... 
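The "(2003 ... (113)" at the end of that traceback is OS errno 113, EHOSTUNREACH ("No route to host"), which on an EL6 packstack host usually means the database node's iptables REJECT rule is blocking the connection, not that MySQL or Keystone itself is misconfigured. A sketch of the kind of rule that would need to be added to /etc/sysconfig/iptables on the MySQL node (the 172.18.1.15 Keystone host address is assumed here from the puppet manifest name above), mirroring the neutron/glance/cinder workaround earlier in this digest:

```text
-A INPUT -s 172.18.1.15/32 -p tcp -m multiport --dports 3306 -m comment --comment "001 mariadb incoming 172.18.1.15" -j ACCEPT
```

followed by a "service iptables restart" on that node before re-running packstack.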
URL: From pbrady at redhat.com Tue Feb 18 12:13:07 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Tue, 18 Feb 2014 12:13:07 +0000 Subject: [Rdo-list] [package announce] Updated foreman installer Message-ID: <53034E53.3000803@redhat.com> RDO Havana and Icehouse provide the foreman openstack installer for EL6 derivatives, and this has been updated to version 1.0.4 which includes: - BZ #1054181 - Set OS description consistently, install LSB. - BZ #1052408 - HA Mysql manifest: allow creation of neutron db user. - BZ #1055852 - HTTP 500 Error using Neutron metadata agent. - BZ #1056892 - Handle interface names containing ".". - BZ #1049633 - Foreman should support VXLAN. - BZ #1017281 - Add support for ML2 Core Plugin. - BZ #1056055 - Create cinder-volumes VG backed by a loopback file. - BZ #1062664 - Configure qpid_hostname for Controller host groups. - BZ #1062670 - Add tuned configuration for compute nodes. - BZ #1056383 - Foreman Controller's swift proxy no longer runs. - BZ #1055207 - Add localhost and ip access for Horizon UI. - BZ #1063514 - cinder_gluster_servers default value. - BZ #998599 - Add options for SSL support using files or FreeIPA. - BZ #1054498 - Fix double port 80 directives in apache. From kchamart at redhat.com Tue Feb 18 12:32:43 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 18 Feb 2014 18:02:43 +0530 Subject: [Rdo-list] Why do I need KVM, XEN, if I can use Docker / LXC? In-Reply-To: References: Message-ID: <20140218123243.GD29999@tesla.redhat.com> On Mon, Feb 17, 2014 at 11:42:21PM +0100, Arash Kaffamanesh wrote: > Hello together, > > Why someone shall use KVM, Xen or other hypervisors instead Docker / LXC > Containers on OpenStack? There's a lot of uninformed, substandard blog-posts talk making the rounds on the inter-webs. 
Rich Jones (a long-time open source programmer working on virtualization) explains it clearly here: http://rwmj.wordpress.com/2013/06/19/the-boring-truth-full-virtualization-and-containerization-both-have-their-place/ -- /kashyap From ak at cloudssky.com Tue Feb 18 14:28:20 2014 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Tue, 18 Feb 2014 15:28:20 +0100 Subject: [Rdo-list] Why do I need KVM, XEN, if I can use Docker / LXC? In-Reply-To: <20140218123243.GD29999@tesla.redhat.com> References: <20140218123243.GD29999@tesla.redhat.com> Message-ID: >From Krishnan Subramanian on his blog: "As we move from a world of virtual machines to a world of containers, Red Hat and Docker will emerge as two key players shaping the landscape" http://allthingsplatforms.com/platforms/the-importance-of-red-hat-docker-partnership/ On Tue, Feb 18, 2014 at 1:32 PM, Kashyap Chamarthy wrote: > On Mon, Feb 17, 2014 at 11:42:21PM +0100, Arash Kaffamanesh wrote: > > Hello together, > > > > Why someone shall use KVM, Xen or other hypervisors instead Docker / LXC > > Containers on OpenStack? > > There's a lot of uninformed, substandard blog-posts talk making the > rounds on the inter-webs. > > Rich Jones'(long time open source programmer working on Virtualization) > explains it clearly here: > > > http://rwmj.wordpress.com/2013/06/19/the-boring-truth-full-virtualization-and-containerization-both-have-their-place/ > > > -- > /kashyap > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Tue Feb 18 15:57:22 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 18 Feb 2014 21:27:22 +0530 Subject: [Rdo-list] Why do I need KVM, XEN, if I can use Docker / LXC?
In-Reply-To: References: <20140218123243.GD29999@tesla.redhat.com> Message-ID: <20140218155722.GE29999@tesla.redhat.com> On Tue, Feb 18, 2014 at 03:28:20PM +0100, Arash Kaffamanesh wrote: > >From Krishnan Subramanian on his blog: > "As we move from a world of virtual machines to a world of containers, Red > Hat and Docker will emerge as two key players shaping the landscape" Honestly, don't read too much into any single blog post. > http://allthingsplatforms.com/platforms/the-importance-of-red-hat-docker-partnership/ > -- /kashyap From kchamart at redhat.com Wed Feb 19 03:50:48 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 19 Feb 2014 09:20:48 +0530 Subject: [Rdo-list] [Upcoming] RDO Bug triage day: 19FEB-2014, UTC 14:00 In-Reply-To: <20140211100704.GC32306@tesla.redhat.com> References: <20140211100704.GC32306@tesla.redhat.com> Message-ID: <20140219035048.GA27446@tesla.redhat.com> On Tue, Feb 11, 2014 at 03:37:04PM +0530, Kashyap Chamarthy wrote: [. . .] Heya, this is today. For any ad hoc triage notes, we could use this etherpad[1]. Just to note, there are about 17 bugs in NEW state at this moment. Lars (K. Stedman) brought up on IRC yesterday that it'd be nice to also go through the list of ASSIGNED bugs (32 as of writing this) to see if any NEEDINFO, etc. is waiting on them. [1] https://etherpad.openstack.org/p/rdo-bug-triage -- /kashyap > > Some convenience information below.
> > > Bugs > ---- > > - List of un-triaged bugs (NEW state) -- http://goo.gl/NqW2LN > > - List of all ASSIGNED bugs (with and without Keyword 'Triaged') -- > http://goo.gl/oFY9vX > > - List of all ON_QA bugs -- http://goo.gl/CZX92r > > The above info is also here: http://openstack.redhat.com/RDO-BugTriage > > > Timezones > --------- > > - If your local time is set correctly, running the below command will > convert UTC timezone to your timezone > > $ date -d '2014-02-19 14:00 UTC' > > - General UTC Howto: > https://fedoraproject.org/wiki/Infrastructure/UTCHowto > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From lars at redhat.com Wed Feb 19 04:09:49 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Tue, 18 Feb 2014 23:09:49 -0500 Subject: [Rdo-list] [Upcoming] RDO Bug triage day: 19FEB-2014, UTC 14:00 In-Reply-To: <20140219035048.GA27446@tesla.redhat.com> References: <20140211100704.GC32306@tesla.redhat.com> <20140219035048.GA27446@tesla.redhat.com> Message-ID: <20140219040949.GB14300@redhat.com> On Wed, Feb 19, 2014 at 09:20:48AM +0530, Kashyap Chamarthy wrote: > Lars (K. Stedman) brought up on IRC yesterday that it'd be nice to also > go through the list of ASSIGNED bugs (32 as of writing this) to see if > there's any NEEDINFO, etc is waiting on it. Technically, I was suggesting that we go through the NEEDINFO bugs and see if there are any that need some action (closing, poking, etc). -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From kchamart at redhat.com Wed Feb 19 04:35:52 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 19 Feb 2014 10:05:52 +0530 Subject: [Rdo-list] [Upcoming] RDO Bug triage day: 19FEB-2014, UTC 14:00 In-Reply-To: <20140219040949.GB14300@redhat.com> References: <20140211100704.GC32306@tesla.redhat.com> <20140219035048.GA27446@tesla.redhat.com> <20140219040949.GB14300@redhat.com> Message-ID: <20140219043552.GA1239@tesla.redhat.com> On Tue, Feb 18, 2014 at 11:09:49PM -0500, Lars Kellogg-Stedman wrote: > On Wed, Feb 19, 2014 at 09:20:48AM +0530, Kashyap Chamarthy wrote: > > Lars (K. Stedman) brought up on IRC yesterday that it'd be nice to also > > go through the list of ASSIGNED bugs (32 as of writing this) to see if > > there's any NEEDINFO, etc is waiting on it. > > Technically, I was suggesting that we go through the NEEDINFO bugs and > see if there are any that need some action (closing, poking, etc). Yep, thanks for the correction. PS: To those who aren't aware, I learnt this little trick (from Eric Harney): To get details about NEEDINFO bugs to which you are a requester or a requestee, click on "My Requests" on your Bugzilla home page. -- /kashyap From matthias.pfuetzner at redhat.com Wed Feb 19 08:11:47 2014 From: matthias.pfuetzner at redhat.com (=?ISO-8859-1?Q?Matthias_Pf=FCtzner?=) Date: Wed, 19 Feb 2014 09:11:47 +0100 Subject: [Rdo-list] Why do I need KVM, XEN, if I can use Docker / LXC?
In-Reply-To: <20140218155722.GE29999@tesla.redhat.com> References: <20140218123243.GD29999@tesla.redhat.com> <20140218155722.GE29999@tesla.redhat.com> Message-ID: <53046743.6020408@redhat.com> On 02/18/2014 04:57 PM, Kashyap Chamarthy wrote: > On Tue, Feb 18, 2014 at 03:28:20PM +0100, Arash Kaffamanesh wrote: >> >From Krishnan Subramanian on his blog: >> "As we move from a world of virtual machines to a world of containers, Red >> Hat and Docker will emerge as two key players shaping the landscape" > Honestly, don't read too much into any single blog post. Totally agree... Experience with Solaris Containers PROVES that container technology has very specific use cases, and that it typically fails due to OPERATIONAL shortcomings of the organisations... So Docker, and containers in general, will not be the solution to all things "virtualization" or "management"... Still, Docker is an important part, but it will definitely NOT replace hypervisors (Type 1!). With Type 1 hypervisors, standard operations can still be applied (on top of RHEV or VMware!), and that's the simple reason why they've been so successful! Matthias > >> http://allthingsplatforms.com/platforms/the-importance-of-red-hat-docker-partnership/ >> > -- Red Hat GmbH Matthias Pfützner Solution Architect, Cloud MesseTurm 60308 Frankfurt/Main phone: +49 69 365051 031 mobile: +49 172 7724032 fax: +49 69 365051 001 email: matthias.pfuetzner at redhat.com ___________________________________________________________________________ Reg.
Adresse: Red Hat GmbH, http://www.de.redhat.com/ Sitz: Grasbrunn, Handelsregister: Amtsgericht München, HRB 153243 Geschäftsführer: Charles Cachera, Michael Cunningham, Paul Hickey, Charles Peters From pbrady at redhat.com Wed Feb 19 22:35:32 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Wed, 19 Feb 2014 22:35:32 +0000 Subject: [Rdo-list] [package announce] Stable Havana 2013.2.2 update Message-ID: <530531B4.6090807@redhat.com> The RDO Havana repositories were updated with the latest stable 2013.2.2 update. Details of the changes can be drilled down to from: https://launchpad.net/nova/havana/2013.2.2 https://launchpad.net/glance/havana/2013.2.2 https://launchpad.net/horizon/havana/2013.2.2 https://launchpad.net/keystone/havana/2013.2.2 https://launchpad.net/cinder/havana/2013.2.2 https://launchpad.net/quantum/havana/2013.2.2 https://launchpad.net/ceilometer/havana/2013.2.2 https://launchpad.net/heat/havana/2013.2.2 thanks, Pádraig. From david at zeromail.us Thu Feb 20 07:43:29 2014 From: david at zeromail.us (David S.) Date: Thu, 20 Feb 2014 14:43:29 +0700 Subject: [Rdo-list] [package announce] Stable Havana 2013.2.2 update In-Reply-To: <530531B4.6090807@redhat.com> References: <530531B4.6090807@redhat.com> Message-ID: Hi Pádraig, Thank you! Best regards, David S. ------------------------------------------------ p. 087881216110 e. david at zeromail.us w.
http://blog.pnyet.web.id On Thu, Feb 20, 2014 at 5:35 AM, Pádraig Brady wrote: > The RDO Havana repositories were updated with the latest stable 2013.2.2 > update > Details of the changes can be drilled down to from: > > https://launchpad.net/nova/havana/2013.2.2 > https://launchpad.net/glance/havana/2013.2.2 > https://launchpad.net/horizon/havana/2013.2.2 > https://launchpad.net/keystone/havana/2013.2.2 > https://launchpad.net/cinder/havana/2013.2.2 > https://launchpad.net/quantum/havana/2013.2.2 > https://launchpad.net/ceilometer/havana/2013.2.2 > https://launchpad.net/heat/havana/2013.2.2 > > thanks, > Pádraig. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Thu Feb 20 11:26:13 2014 From: Tim.Bell at cern.ch (Tim Bell) Date: Thu, 20 Feb 2014 11:26:13 +0000 Subject: [Rdo-list] [package announce] Stable Havana 2013.2.2 update In-Reply-To: <530531B4.6090807@redhat.com> References: <530531B4.6090807@redhat.com> Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E5D9A1B6A5@CERNXCHG42.cern.ch> Padraig, Thanks... will there be a new rdo-release RPM ? Following through the quick start documentation, 'sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm' installed Havana-7 from October 2013.
Tim > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Pádraig Brady > Sent: 19 February 2014 23:36 > To: rdo-list at redhat.com > Subject: [Rdo-list] [package announce] Stable Havana 2013.2.2 update > > The RDO Havana repositories were updated with the latest stable 2013.2.2 update Details of the changes can be drilled down to > from: > > https://launchpad.net/nova/havana/2013.2.2 > https://launchpad.net/glance/havana/2013.2.2 > https://launchpad.net/horizon/havana/2013.2.2 > https://launchpad.net/keystone/havana/2013.2.2 > https://launchpad.net/cinder/havana/2013.2.2 > https://launchpad.net/quantum/havana/2013.2.2 > https://launchpad.net/ceilometer/havana/2013.2.2 > https://launchpad.net/heat/havana/2013.2.2 > > thanks, > Pádraig. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From pbrady at redhat.com Thu Feb 20 15:05:53 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Thu, 20 Feb 2014 15:05:53 +0000 Subject: [Rdo-list] [package announce] Stable Havana 2013.2.2 update In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E5D9A1B6A5@CERNXCHG42.cern.ch> References: <530531B4.6090807@redhat.com> <5D7F9996EA547448BC6C54C8C5AAF4E5D9A1B6A5@CERNXCHG42.cern.ch> Message-ID: <530619D1.6050809@redhat.com> On 02/20/2014 11:26 AM, Tim Bell wrote: > > Padraig, > > Thanks... will there be a new rdo-release RPM ? Following through the quick start documentation, 'sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm' installed Havana-7 from October 2013. That RPM only contains repository-level data, i.e. it only changes when repository locations, keys, etc. change, which is not the case for this update. thanks, Pádraig.
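So picking up a stable update like 2013.2.2 is just a package update against the repository that rdo-release.rpm already configured; no new release RPM is needed. A sketch of what that looks like on an installed node (the repo file name and the package glob are assumptions based on a default RDO install, not taken from this thread):

```shell
# The repo definition from rdo-release.rpm is unchanged; confirm it still
# points at the havana repository:
grep baseurl /etc/yum.repos.d/rdo-release.repo

# Refresh the repo metadata and pull the updated OpenStack packages:
yum clean metadata
yum update 'openstack-*'
```

After updating, the affected services have to be restarted to pick up the new code.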
From dennisml at conversis.de Thu Feb 20 17:24:40 2014 From: dennisml at conversis.de (Dennis Jacobfeuerborn) Date: Thu, 20 Feb 2014 18:24:40 +0100 Subject: [Rdo-list] Why do I need KVM, XEN, if I can use Docker / LXC? In-Reply-To: <20140218155722.GE29999@tesla.redhat.com> References: <20140218123243.GD29999@tesla.redhat.com> <20140218155722.GE29999@tesla.redhat.com> Message-ID: <53063A58.2020901@conversis.de> On 18.02.2014 16:57, Kashyap Chamarthy wrote: > On Tue, Feb 18, 2014 at 03:28:20PM +0100, Arash Kaffamanesh wrote: >> >From Krishnan Subramanian on his blog: >> "As we move from a world of virtual machines to a world of containers, Red >> Hat and Docker will emerge as two key players shaping the landscape" > > Honestly, don't read too much into any single blog post. > >> http://allthingsplatforms.com/platforms/the-importance-of-red-hat-docker-partnership/ To be honest when I finally started playing with Docker I was a bit underwhelmed. It seems to be not much more than basic tooling around the linux namespacing capabilities. The libvirt-sandbox/systemd-nspawn stuff seems to be a much better fit for this kind of thing as it is geared more at isolating processes in a usable way whereas Docker appears to aim at running a full System as a container which is really what the kvm/xen/whatever approach is better for. What the CoreOS guys are doing with systemd is also interesting: https://coreos.com/blog/cluster-level-container-orchestration/ Regards, Dennis From rohara at redhat.com Thu Feb 20 22:09:13 2014 From: rohara at redhat.com (Ryan O'Hara) Date: Thu, 20 Feb 2014 16:09:13 -0600 Subject: [Rdo-list] RDO and MariaDB+Galera wiki page Message-ID: <20140220220913.GA18557@redhat.com> I've created a wiki page that shows how to use MariaDB+Galera with RDO Havana. You can find it here: http://openstack.redhat.com/MariaDB_Galera Comments, questions, suggestions welcome as always. 
I'd like to add more information about SST methods, including Percona's xtrabackup, in the near future. Ryan From ak at cloudssky.com Thu Feb 20 23:05:39 2014 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Fri, 21 Feb 2014 00:05:39 +0100 Subject: [Rdo-list] RDO and MariaDB+Galera wiki page In-Reply-To: <20140220220913.GA18557@redhat.com> References: <20140220220913.GA18557@redhat.com> Message-ID: Awesome, big thx Ryan! I guess this should also work easily on a multi-node installation, right? Thanks, Arash On Thu, Feb 20, 2014 at 11:09 PM, Ryan O'Hara wrote: > > I've created a wiki page that shows how to use MariaDB+Galera with RDO > Havana. You can find it here: > > http://openstack.redhat.com/MariaDB_Galera > > Comments, questions, suggestions welcome as always. I'd like to add > more information about SST methods, including Percona's xtrabackup, in > the near future. > > Ryan > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ak at cloudssky.com Thu Feb 20 23:33:11 2014 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Fri, 21 Feb 2014 00:33:11 +0100 Subject: [Rdo-list] Question about changing overcommitment settings Message-ID: I'd disabled CPU overcommitment on an existing RDO multi-node environment by setting the allocation ratio in the answer file to 1: CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=1.0 Now I'd like to change it to a higher value, e.g. 8: CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=8.0 Can I change this setting directly in the DB or in some config files? Or shall I run packstack against the changed answer file again? Thanks for your help in advance! Arash -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lars at redhat.com Fri Feb 21 00:27:20 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 20 Feb 2014 19:27:20 -0500 Subject: [Rdo-list] Question about changing overcommitment settings In-Reply-To: References: Message-ID: <20140221002720.GA7366@redhat.com> On Fri, Feb 21, 2014 at 12:33:11AM +0100, Arash Kaffamanesh wrote: > Can I change this setting directly in the DB or in some config files? Or > shall I run packstack against the changed answer file again? I believe you are looking for the "cpu_allocation_ratio" in /etc/nova/nova.conf. You will need to restart nova services (specifically the scheduler, I believe) after changing this file. -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From ak at cloudssky.com Fri Feb 21 01:14:15 2014 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Fri, 21 Feb 2014 02:14:15 +0100 Subject: [Rdo-list] Question about changing overcommitment settings In-Reply-To: <20140221002720.GA7366@redhat.com> References: <20140221002720.GA7366@redhat.com> Message-ID: You believe, and that is true. I searched in nova.conf, but I was somehow blind. Works like a charm! Thanks Lars! On Fri, Feb 21, 2014 at 1:27 AM, Lars Kellogg-Stedman wrote: > On Fri, Feb 21, 2014 at 12:33:11AM +0100, Arash Kaffamanesh wrote: > > Can I change this setting directly in the DB or in some config files? Or > > shall I run packstack against the changed answer file again? > > I believe you are looking for the "cpu_allocation_ratio" in > /etc/nova/nova.conf. > > You will need to restart nova services (specifically the scheduler, I > believe) after changing this file. > > -- > Lars Kellogg-Stedman | larsks @ irc > Cloud Engineering / OpenStack | " " @ twitter > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From raghavendra.lad at accenture.com Sun Feb 23 15:01:26 2014 From: raghavendra.lad at accenture.com (raghavendra.lad at accenture.com) Date: Sun, 23 Feb 2014 15:01:26 +0000 Subject: [Rdo-list] Openstack Havana: Metadata error Message-ID: Hi Team, The VMs we spin up are coming up with the metadata error shown in the screenshot attached to this email. The login screen also comes up; however, we are unable to log in due to this metadata issue. Please assist. Regards, Raghavendra Lad ________________________________ This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy. ______________________________________________________________________________________ www.accenture.com -------------- next part -------------- A non-text attachment was scrubbed... Name: Metadata error.jpg Type: image/jpeg Size: 143402 bytes Desc: Metadata error.jpg URL: From ak at cloudssky.com Sun Feb 23 16:40:46 2014 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Sun, 23 Feb 2014 17:40:46 +0100 Subject: [Rdo-list] Openstack Havana: Metadata error In-Reply-To: References: Message-ID: Hi Raghavendra, Which image are you using? How are you trying to log into your instance? Does your cirros image work? To fix the metadata error, I had to adapt the routing table on my CentOS image like this: ip route add 169.254.169.254/32 via 10.0.0.1 (10.0.0.1 is my gateway).
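With the route in place, the metadata service can be sanity-checked from inside the instance before making the route persistent (a quick check; the gateway 10.0.0.1 is the one from Arash's example, and the `latest/meta-data/` path is just one endpoint of the metadata service):

```shell
# Confirm the host route to the metadata address exists:
ip route show | grep 169.254.169.254

# A working setup returns the metadata index rather than timing out:
curl http://169.254.169.254/latest/meta-data/
```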
To make the route persistent, you can use: [root at temp ~]# vi /etc/sysconfig/network-scripts/route-eth0 GATEWAY0=10.0.0.1 NETMASK0=255.255.255.255 ADDRESS0=169.254.169.254 HTH, Kind Regards, Arash On Sun, Feb 23, 2014 at 4:01 PM, wrote: > > Hi Team, > > The images that are spinning up for the VM are coming with Metadata error > that is attached with this email. The login screen also comes up however > unable to login due to this Metadata issue. > Please assist. > > > Regards, > Raghavendra Lad > > > ________________________________ > > This message is for the designated recipient only and may contain > privileged, proprietary, or otherwise confidential information. If you have > received it in error, please notify the sender immediately and delete the > original. Any other use of the e-mail by you is prohibited. Where allowed > by local law, electronic communications with Accenture and its affiliates, > including e-mail and instant messaging (including content), may be scanned > by our systems for the purposes of information security and assessment of > internal compliance with Accenture policy. . > > ______________________________________________________________________________________ > > www.accenture.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pchalupa at redhat.com Mon Feb 24 13:54:26 2014 From: pchalupa at redhat.com (Petr Chalupa) Date: Mon, 24 Feb 2014 14:54:26 +0100 Subject: [Rdo-list] [OFI] Dynflow orchestration POC draft Message-ID: <530B4F12.2050008@redhat.com> Hello, I'm sending out a Dynflow orchestration draft for proof-of-concept to start a discussion. With regards to having it ASAP, I would use: 1. [Dynflow][1] - workflow engine written originally for Katello. 2. [ForemanTasks][2] - Rails engine that adds Dynflow integration with Foreman. 3.
[Astapor manifests][3] - top-level classes configuring OpenStack HA hosts. There is a [top-level puppet class][4] for each role. 4. [Puppetrun][5] - to manually trigger a puppet run on a Foreman host. ## Minimal POC A minimal POC would reuse the Astapor manifests, with Dynflow added to avoid the manual steps between configuring OpenStack hosts (Controllers then Computes, etc.). It would start by triggering a Dynflow action which would: 1. provision the needed number of hosts in parallel. 2. configure an all-in-one controller using the Astapor class. - adding the class to the host. - triggering a puppet run. 3. configure nova compute hosts using the Astapor class. Same sub-steps. 4. configure additional hosts in the right order with neutron, swift, cinder. (I'll probably start with ?-POC skipping 1. and 4.) ## Open questions - Is there a simpler way to trigger a puppet run on a given host? - From a quick look the Astapor modules should work for us; this needs to be verified. - What would you improve? - Do you see any complications? [1]: https://github.com/Dynflow/dynflow/ [2]: https://github.com/inecas/foreman-tasks [3]: https://github.com/redhat-openstack/astapor/tree/master/puppet/modules/quickstack [4]: https://github.com/redhat-openstack/astapor/blob/master/bin/seeds.rb#L323-L342 [5]: http://projects.theforeman.org/projects/foreman/wiki/Puppetrun I'll also send another email describing a better solution to support multiple layouts later this week. Petr From hbrock at redhat.com Mon Feb 24 14:06:41 2014 From: hbrock at redhat.com (Hugh O. Brock) Date: Mon, 24 Feb 2014 09:06:41 -0500 Subject: [Rdo-list] [OFI] Dynflow orchestration POC draft In-Reply-To: <530B4F12.2050008@redhat.com> References: <530B4F12.2050008@redhat.com> Message-ID: <20140224140641.GT8753@redhat.com> On Mon, Feb 24, 2014 at 02:54:26PM +0100, Petr Chalupa wrote: > Hello, > > I'm sending out a Dynflow orchestration draft for proof-of-concept > to start a discussion. > > With regards to having it asap I would use: > > 1.
[Dynflow][1] - workflow engine written originally for Katello. > 2. [ForemanTasks][2] - Rails engine that adds Dynflow integration > with Foreman. > 3. [Astapor manifests][3] - top level classes configuring OpenStack > HA hosts. There is [top-level puppet class][4] for each role. > 4. [Puppetrun][5] - to manually trigger puppet run on Foreman host. > > ## Minimal POC > > Minimal POC would be reusing Astapor manifests. There would be > dynflow added to avoid the manual steps between configuring > OpenStack hosts (Controllers then Computes, etc.). > > It would start by triggering Dynflow action which would: > > 1. provision needed number of hosts in parallel. > 2. configure all-in-on controller using Astapor class. > - adding the class to the host. > - triggering puppet run. > 3. configure nova compute hosts using Astapor class. Same sub-steps. > 4. configure additional hosts in right order with neutron, swift, cinder. > > (I'll probably start with ?-POC skipping 1. and 4.) > > ## Open questions > > - Is there a simpler way how to trigger puppet run on a given host? > - From a quick look Astapor modules should work for us, needs to > be verified. > - What would you improve? > - Do you see any compilations? > > [1]: https://github.com/Dynflow/dynflow/ > [2]: https://github.com/inecas/foreman-tasks > [3]: https://github.com/redhat-openstack/astapor/tree/master/puppet/modules/quickstack > [4]: https://github.com/redhat-openstack/astapor/blob/master/bin/seeds.rb#L323-L342 > [5]: http://projects.theforeman.org/projects/foreman/wiki/Puppetrun > > I'll also send another email describing better solution to support > multiple layouts later this week. This looks like a great start Petr, I will let others comment on details. 
--Hugh -- == Hugh Brock, hbrock at redhat.com == == Senior Engineering Manager, Cloud Engineering == == Tuskar: Elastic Scaling for OpenStack == == http://github.com/tuskar == "I know that you believe you understand what you think I said, but I'm not sure you realize that what you heard is not what I meant." --Robert McCloskey From mhulan at redhat.com Tue Feb 25 08:45:50 2014 From: mhulan at redhat.com (Marek Hulan) Date: Tue, 25 Feb 2014 09:45:50 +0100 Subject: [Rdo-list] [foreman-dev] [OFI] Foreman installer draft In-Reply-To: <530B4F12.2050008@redhat.com> References: <530B4F12.2050008@redhat.com> Message-ID: <1461403.KVO7DvyNzY@edna> I suggest installing Foreman in a similar way to astapor, but we could migrate it to kafo [1]. This way users don't have to edit a script to configure networking, and they get a nice UI instead. Also, some parts of kafo could be extracted and used to parse astapor modules to display parameters in the wizard. Kafo already parses parameter documentation, parameter types (if provided in the doc) and some other useful stuff. It's also easy to add other attributes to the documentation. We'd have to use this in foreman-proxy and make it upload all such information to Foreman. This can be added after the first version, however; meanwhile we can seed parameters and their attributes during the Foreman install and concentrate on the wizard UI (depending on time and resources; not sure who will work on the UI part). [1] https://github.com/theforeman/kafo/ On Monday 24 of February 2014 14:54:26 Petr Chalupa wrote: > Hello, > > I'm sending out a Dynflow orchestration draft for proof-of-concept to > start a discussion. > > With regards to having it asap I would use: > > 1. [Dynflow][1] - workflow engine written originally for Katello. > 2. [ForemanTasks][2] - Rails engine that adds Dynflow integration with > Foreman. > 3. [Astapor manifests][3] - top level classes configuring OpenStack HA > hosts. There is [top-level puppet class][4] for each role. > 4.
[Puppetrun][5] - to manually trigger puppet run on Foreman host. > > ## Minimal POC > > Minimal POC would be reusing Astapor manifests. There would be dynflow > added to avoid the manual steps between configuring OpenStack hosts > (Controllers then Computes, etc.). > > It would start by triggering Dynflow action which would: > > 1. provision needed number of hosts in parallel. > 2. configure all-in-on controller using Astapor class. > - adding the class to the host. > - triggering puppet run. > 3. configure nova compute hosts using Astapor class. Same sub-steps. > 4. configure additional hosts in right order with neutron, swift, cinder. > > (I'll probably start with ?-POC skipping 1. and 4.) > > ## Open questions > > - Is there a simpler way how to trigger puppet run on a given host? > - From a quick look Astapor modules should work for us, needs to be > verified. > - What would you improve? > - Do you see any compilations? > > [1]: https://github.com/Dynflow/dynflow/ > [2]: https://github.com/inecas/foreman-tasks > [3]: > https://github.com/redhat-openstack/astapor/tree/master/puppet/modules/quick > stack [4]: > https://github.com/redhat-openstack/astapor/blob/master/bin/seeds.rb#L323-L3 > 42 [5]: http://projects.theforeman.org/projects/foreman/wiki/Puppetrun > > I'll also send another email describing better solution to support > multiple layouts later this week. > > Petr -- Marek From pchalupa at redhat.com Tue Feb 25 08:54:04 2014 From: pchalupa at redhat.com (Petr Chalupa) Date: Tue, 25 Feb 2014 09:54:04 +0100 Subject: [Rdo-list] [foreman-dev] [OFI] Dynflow orchestration POC draft In-Reply-To: <20140224211230.GB3827@lzapx.brq.redhat.com> References: <530B4F12.2050008@redhat.com> <20140224211230.GB3827@lzapx.brq.redhat.com> Message-ID: <530C5A2C.60702@redhat.com> On 24.02.14 22:12, Lukas Zapletal wrote: >> - Is there a simpler way how to trigger puppet run on a given host? Thanks Lukas! 
> > We have the following implementations for puppet run: > > # puppetrun (for puppetrun/kick, deprecated in Puppet 3) Since it's deprecated we should probably avoid it if there is another simple way. > # mcollective (uses mco puppet) This one is probably not as straightforward as the other options, see https://github.com/witlessbird/foreman_mco/blob/master/SETUP.md > # puppetssh (run puppet over ssh) This seems like the best option. Simple setup, ssh is everywhere. One disadvantage is that it only triggers the puppet run; there is no way to wait for the puppet run to finish and see results directly. That would be a bad thing anyway, as the smart-proxy process would be blocked. A Dynflow action can solve this by triggering the puppet run via puppetssh and then polling the Foreman API for a new report. (Dynflow already supports action suspending to be able to poll effectively without blocking threads. There is also a helper module for polling itself.) see http://projects.theforeman.org/issues/3047 https://github.com/theforeman/foreman/blob/develop/lib/proxy_api/puppet.rb > # salt (uses salt puppet.run) Also complicated, needs Saltstack, see https://github.com/theforeman/smart-proxy/pull/113 > # customrun (calls a custom command with args) AFAIK customrun is used internally by puppetssh, so there is probably no reason to use it directly. > > Make your choice, ssh might be a way. TL;DR I agree puppetssh seems the best. Petr From pchalupa at redhat.com Tue Feb 25 08:56:56 2014 From: pchalupa at redhat.com (Petr Chalupa) Date: Tue, 25 Feb 2014 09:56:56 +0100 Subject: [Rdo-list] [foreman-dev] [OFI] Foreman installer draft In-Reply-To: <1461403.KVO7DvyNzY@edna> References: <530B4F12.2050008@redhat.com> <1461403.KVO7DvyNzY@edna> Message-ID: <530C5AD8.7060303@redhat.com> On 25.02.14 9:45, Marek Hulan wrote: > I suggest to use similar way to install foreman as astapor does but we could > migrate it to kafo [1].
This way users don't have to edit script to configure > networking and get nice UI instead. > > Also some parts of kafo could be extracted and used to parse astapor modules > to display parameters in wizard. Kafo already parse parameter documentation, > parameter type (if provided in doc) and some other useful stuff. Also it's easy > to add other attributes to documentation. We'd have to use this in foreman- > proxy and make it upload all such information to foreman. This can be added > after first version however, meanwhile we can seed parameters and their > attributes during foreman install and concentrate on wizard UI (depending on > time and resources, not sure who will work on UI part). > > [1] https://github.com/theforeman/kafo/ +1 for using kafo to install OFI itself > > On Monday 24 of February 2014 14:54:26 Petr Chalupa wrote: >> Hello, >> >> I'm sending out a Dynflow orchestration draft for proof-of-concept to >> start a discussion. >> >> With regards to having it asap I would use: >> >> 1. [Dynflow][1] - workflow engine written originally for Katello. >> 2. [ForemanTasks][2] - Rails engine that adds Dynflow integration with >> Foreman. >> 3. [Astapor manifests][3] - top level classes configuring OpenStack HA >> hosts. There is [top-level puppet class][4] for each role. >> 4. [Puppetrun][5] - to manually trigger puppet run on Foreman host. >> >> ## Minimal POC >> >> Minimal POC would be reusing Astapor manifests. There would be dynflow >> added to avoid the manual steps between configuring OpenStack hosts >> (Controllers then Computes, etc.). >> >> It would start by triggering Dynflow action which would: >> >> 1. provision needed number of hosts in parallel. >> 2. configure all-in-on controller using Astapor class. >> - adding the class to the host. >> - triggering puppet run. >> 3. configure nova compute hosts using Astapor class. Same sub-steps. >> 4. configure additional hosts in right order with neutron, swift, cinder. 
>>
>> (I'll probably start with ?-POC skipping 1. and 4.)
>>
>> ## Open questions
>>
>> - Is there a simpler way to trigger a puppet run on a given host?
>> - From a quick look, the Astapor modules should work for us; this needs to be
>>   verified.
>> - What would you improve?
>> - Do you see any complications?
>>
>> [1]: https://github.com/Dynflow/dynflow/
>> [2]: https://github.com/inecas/foreman-tasks
>> [3]: https://github.com/redhat-openstack/astapor/tree/master/puppet/modules/quickstack
>> [4]: https://github.com/redhat-openstack/astapor/blob/master/bin/seeds.rb#L323-L342
>> [5]: http://projects.theforeman.org/projects/foreman/wiki/Puppetrun
>>
>> I'll also send another email describing a better solution to support
>> multiple layouts later this week.
>>
>> Petr
>

From greg.sutcliffe at gmail.com Tue Feb 25 09:25:30 2014 From: greg.sutcliffe at gmail.com (Greg Sutcliffe) Date: Tue, 25 Feb 2014 09:25:30 +0000 Subject: [Rdo-list] [foreman-dev] [OFI] Foreman installer draft In-Reply-To: <1461403.KVO7DvyNzY@edna> References: <530B4F12.2050008@redhat.com> <1461403.KVO7DvyNzY@edna> Message-ID:

On 25 February 2014 08:45, Marek Hulan wrote:
> We'd have to use this in foreman-proxy
> and make it upload all such information to foreman. This can be added
> after the first version however

+1 I've wanted to use kafo to import data into Foreman for ages - but agreed, it's not the current focus of OFI :)

From pbrady at redhat.com Tue Feb 25 11:36:00 2014 From: pbrady at redhat.com (=?UTF-8?B?UMOhZHJhaWcgQnJhZHk=?=) Date: Tue, 25 Feb 2014 11:36:00 +0000 Subject: [Rdo-list] CERN notes on upgrading their 50K core cloud to RDO Havana Message-ID: <530C8020.6010002@redhat.com>

This is a very informative and useful summary of CERN's recent update to RDO 2013.2.2

http://openstack-in-production.blogspot.ch/2014/02/our-cloud-in-havana.html

thanks! Pádraig.
From rbowen at redhat.com Tue Feb 25 14:40:16 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 25 Feb 2014 09:40:16 -0500 Subject: [Rdo-list] community IRC meeting Message-ID: <530CAB50.8000503@redhat.com> We had a rather short meeting this morning on IRC: Minutes: http://meetbot.fedoraproject.org/rdo/2014-02-25/rdo.2014-02-25-14.02.html Minutes (text): http://meetbot.fedoraproject.org/rdo/2014-02-25/rdo.2014-02-25-14.02.txt Log: http://meetbot.fedoraproject.org/rdo/2014-02-25/rdo.2014-02-25-14.02.log.html A reminder: we do this every week, and we'd love to have broader participation. If you want to help out with community things around RDO - events, meetups, the forum, and so on - come to #rdo on Freenode and jump right in. --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rbowen at redhat.com Tue Feb 25 15:10:12 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 25 Feb 2014 10:10:12 -0500 Subject: [Rdo-list] Fwd: Re: Fwd: Re: Canonical OpenStack Messaging In-Reply-To: <20140225043901.GK2444@redhat.com> References: <20140225043901.GK2444@redhat.com> Message-ID: <530CB254.3000503@redhat.com> I received the following feedback from a user. Not sure if Daniel is on this list to answer followup questions, but I'll encourage him to join. 
--Rich

-------- Original Message --------
Date: Tue, 25 Feb 2014 12:39:01 +0800
From: Daniel Veillard

BTW I installed RDO from scratch on a CentOS 6.5 box 2 weeks ago,

The Good:

1/ the instructions worked, no tweaking, packstack did work even with the horrible connection I'm having at the moment

2/ the setup instructions are clear and simple http://openstack.redhat.com/Quickstart however it should be made clear that the host where the installation is done should have a fully qualified DNS name, as that's one of the small issues I hit, and people doing testing are likely to do it on machines without that set up

3/ I got the console working just fine

The Less Good:

Creating a CentOS test instance should be no more than an additional 2 steps; it wasn't http://openstack.redhat.com/Running_an_instance

Step 1: if CONTROL_NODE is not the IP (a previous document referenced an IP) then one needs to pass a FQDN, in my case http://test/dashboard fails with "Openstack dashboard Something went wrong an unexpected error has occurred ..." http://test.veillard.com/dashboard works http://192.168.0.12/dashboard works So either make sure the FQDN is part of the requirements in step 0, or, like the quickstart, use $YOURIP, which would also be coherent => I wonder how many get stuck at that step thinking their installation is broken

Steps 2 and 3: just fine

Step 4: that's another issue, the default is a Fedora image, and that worked fine, but for CentOS I had to - click the image resource link - then follow CentOS 6.5 images one ends up at http://repos.fedorapeople.org/repos/openstack/guest-images/ a CentOS image, one month old, hosted on Fedora, with no checksum, no README, etc ...
I do think that http://openstack.redhat.com/Image_resources needs to be updated to point to a real web page on the CentOS project, with a current image and follow-up instructions dedicated to people who were following the RDO instructions. In comparison, the http://cloud-images.ubuntu.com/ link for "Ubuntu cloud images" at least has some pointers; we can do way better and streamline for an RDO setup

Step 5: the launch-the-instance instruction failed for me, just providing a name was not sufficient, the error was "At least one network must be specified." It got me to the Networking tab and I had to pick the private network from the available networks selection, then launch worked

Step 6: it looks like it worked and got a 172.24.4.127 IP, but the instance still says "IP address 10.0.0.3", and trying in step 7 to ssh to 172.24.4.127 failed, both from the remote workstation and when logged in as root

Then I messed up creating a local pool of IPs as suggested at http://openstack.redhat.com/Floating_IP_range but tuned to the local IP addresses; registering the network was fine, but it never seems to be used when trying to associate a floating IP with the instance.

=> IMHO we need to fix quite a few things so that the instructions work out of the box up to running and ssh'ing to a CentOS instance.

Rich, are you able to sort and fix those points at least up to the networking setup?

KB, do we have a page with a QCow2-ready CentOS image on the CentOS site that we could link to, and provide RDO-tuned instructions?

Daniel

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From veillard at redhat.com Tue Feb 25 15:43:06 2014 From: veillard at redhat.com (Daniel Veillard) Date: Tue, 25 Feb 2014 23:43:06 +0800 Subject: [Rdo-list] Fwd: Re: Fwd: Re: Canonical OpenStack Messaging Message-ID: <20140225154306.GA310@redhat.com>

Rich wrote:
> I received the following feedback from a user.
Not sure if Daniel is
> on this list to answer follow-up questions, but I'll encourage him to
> join.

I wasn't, but I'm there now (sorry for breaking the threading)

Daniel

-- Daniel Veillard | Open Source and Standards, Red Hat veillard at redhat.com | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ http://veillard.com/ | virtualization library http://libvirt.org/

From dneary at redhat.com Tue Feb 25 16:48:15 2014 From: dneary at redhat.com (Dave Neary) Date: Tue, 25 Feb 2014 17:48:15 +0100 Subject: [Rdo-list] Feedback on RDO install In-Reply-To: <530CB254.3000503@redhat.com> References: <20140225043901.GK2444@redhat.com> <530CB254.3000503@redhat.com> Message-ID: <530CC94F.3080304@redhat.com>

... and my reply to Daniel (reposted here):

Daniel Veillard wrote:
> The Less Good:
> Creating a CentOS test instance should be no more than an
> additional 2 steps; it wasn't
> http://openstack.redhat.com/Running_an_instance
> Step 5:
> the launch-the-instance instruction failed for me, just providing a
> name was not sufficient, the error was
> "At least one network must be specified."
> It got me to the Networking tab and I had to pick the private
> network from the available networks selection, then launch
> worked

With the move to Neutron, a certain number of things which should be done automatically are not done:

1) Set up an internal network & subnet for guests (I believe this is automatically done by Packstack now)

2) Create a default router and attach the internal subnet to it

3) (not sure if this can be done automatically, or whether it needs to be a post-install config step) Attach the external network as the gateway for the router, and create some floating IP addresses outside any DHCP range on the public network to associate with newly created instances

> Step 6:
> it looks like it worked and got a 172.24.4.127 IP, but the instance
> still says "IP address 10.0.0.3", and trying in step 7 to ssh
> to 172.24.4.127 failed, both from the remote workstation and
> when logged in as root
>
> Then I messed up creating a local pool of IPs as suggested at
> http://openstack.redhat.com/Floating_IP_range
> but tuned to the local IP addresses; registering the network was fine
> but it never seems to be used when trying to associate a floating IP with
> the instance.

I think this is network-namespace related: routing from an internal to an external network isn't done automatically; you need to create an L3 router and connect the networks to it, and/or ensure that your local IP address routes directly to the instance via the router.

Cheers, Dave.
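Step 3's "floating IP addresses outside any DHCP range" is mechanical enough to compute. A small sketch using only the standard library — the helper name is made up and this is not an OpenStack API, it only does the address arithmetic:

```python
import ipaddress

def floating_ip_candidates(public_cidr, dhcp_start, dhcp_end, count):
    """Pick `count` usable addresses in `public_cidr` that fall outside
    the DHCP allocation range [dhcp_start, dhcp_end]."""
    lo = ipaddress.ip_address(dhcp_start)
    hi = ipaddress.ip_address(dhcp_end)
    net = ipaddress.ip_network(public_cidr)
    # .hosts() already excludes the network and broadcast addresses.
    return [str(ip) for ip in net.hosts() if not (lo <= ip <= hi)][:count]

print(floating_ip_candidates("192.168.0.0/24",
                             "192.168.0.100", "192.168.0.200", 3))
# -> ['192.168.0.1', '192.168.0.2', '192.168.0.3']
```

Whatever addresses come out of a check like this would then be registered as the floating range, so the external DHCP server and Neutron never hand out the same address.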
-- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From lars at redhat.com Tue Feb 25 17:04:44 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Tue, 25 Feb 2014 12:04:44 -0500 Subject: [Rdo-list] Feedback on RDO install In-Reply-To: <530CC94F.3080304@redhat.com> References: <20140225043901.GK2444@redhat.com> <530CB254.3000503@redhat.com> <530CC94F.3080304@redhat.com> Message-ID: <20140225170444.GA2943@redhat.com> On Tue, Feb 25, 2014 at 05:48:15PM +0100, Dave Neary wrote: > 1) Set up an internal network & subnet for guests (I believe this is > automatically done by Packstack now) This is done for --allinone installs, but not otherwise. > 2) Create a default router and attach the internal subnet to it And also this. > 3) (not sure if this can be done automatically, or whether it needs to > be a post-install config step) Attach the external network as the > gateway for the router, and create some floating IP addresses outside > any DHCP range on the public network to associate with newly created > instances And this isn't done either. Generally, the Quickstart instructions end before there is functional external connectivity, and the existing documentation for setting that up is a little sketchy. I was hoping to tackle some of this in the next week. -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL:

From veillard at redhat.com Wed Feb 26 03:25:34 2014 From: veillard at redhat.com (Daniel Veillard) Date: Wed, 26 Feb 2014 11:25:34 +0800 Subject: [Rdo-list] Feedback on RDO install In-Reply-To: <20140225170444.GA2943@redhat.com> References: <20140225043901.GK2444@redhat.com> <530CB254.3000503@redhat.com> <530CC94F.3080304@redhat.com> <20140225170444.GA2943@redhat.com> Message-ID: <20140226032534.GH310@redhat.com>

On Tue, Feb 25, 2014 at 12:04:44PM -0500, Lars Kellogg-Stedman wrote:
> On Tue, Feb 25, 2014 at 05:48:15PM +0100, Dave Neary wrote:
> > 1) Set up an internal network & subnet for guests (I believe this is
> > automatically done by Packstack now)
>
> This is done for --allinone installs, but not otherwise.
>
> > 2) Create a default router and attach the internal subnet to it
>
> And also this.

Yes, I got the private 10.0.0.0 network set up, a public one, and a bridge between the two.

> > 3) (not sure if this can be done automatically, or whether it needs to
> > be a post-install config step) Attach the external network as the
> > gateway for the router, and create some floating IP addresses outside
> > any DHCP range on the public network to associate with newly created
> > instances
>
> And this isn't done either. Generally, the Quickstart instructions
> end before there is functional external connectivity, and the existing
> documentation for setting that up is a little sketchy.
>
> I was hoping to tackle some of this in the next week.
At that point I tried various things to have the 'public' network allocate IPs in the range of my local LAN, I tried for example

[root at test ~(keystone_admin)]# nova floating-ip-bulk-delete 10.3.4.0/22
[root at test ~(keystone_admin)]# nova floating-ip-bulk-create 192.168.0.56/29
[root at test ~(keystone_admin)]# nova-manage floating list
2014-02-25 12:22:55.983 13516 DEBUG nova.openstack.common.lockutils [req-854a5ff1-baf3-43b3-b381-d23966d7ea04 None None] Got semaphore "dbapi_backend" lock /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:166
2014-02-25 12:22:56.006 13516 DEBUG nova.openstack.common.lockutils [req-854a5ff1-baf3-43b3-b381-d23966d7ea04 None None] Got semaphore / lock "__get_backend" inner /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:245
None 192.168.0.57 None nova eth0
None 192.168.0.58 None nova eth0
None 192.168.0.59 None nova eth0
None 192.168.0.60 None nova eth0
None 192.168.0.61 None nova eth0
None 192.168.0.62 None nova eth0
[root at test ~(keystone_admin)]#

then I disassociated the floating IP of my guest, which was allocated as 172.24.4.127, and tried to reallocate a floating IP, but that didn't work; it still picked 172.24.4.x

Then I tried to create another public network matching my range and create a router from the private network; the operations worked at the interface level but I still could not associate my instance interface with it.

So right now I can create guests, I can't ssh to them, and I have no root password to log in from the console (which works); it's a bit frustrating.

Someone who understands the basic operations needed to have the default-created public network allocate IPs in the range of the user network should document those operations, and it should be added to the RDO setup page, because, as is, even if the setup worked it's not really functional :)

And I'm just fine playing the guinea pig !
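For what it's worth, the .57-.62 pool listed above is exactly what a /29 yields; the standard library can confirm the arithmetic (this only reproduces the address math, it doesn't touch nova):

```python
import ipaddress

# `nova floating-ip-bulk-create 192.168.0.56/29` registered the usable
# hosts of that /29 -- the network (.56) and broadcast (.63) addresses
# are skipped, leaving six floating IPs.
pool = [str(ip) for ip in ipaddress.ip_network("192.168.0.56/29").hosts()]
print(pool)
# -> ['192.168.0.57', '192.168.0.58', '192.168.0.59',
#     '192.168.0.60', '192.168.0.61', '192.168.0.62']
```

So the bulk-create itself behaved as expected; the remaining problem is purely that the instance keeps being associated with the old 172.24.4.x range.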
thanks, Daniel -- Daniel Veillard | Open Source and Standards, Red Hat veillard at redhat.com | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ http://veillard.com/ | virtualization library http://libvirt.org/ From Tim.Bell at cern.ch Wed Feb 26 07:43:00 2014 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 26 Feb 2014 07:43:00 +0000 Subject: [Rdo-list] python-novaclient RPM version Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E5D9A4E12A@CERNXCHG41.cern.ch> In the RDO repositories, the python-novaclient version is python-novaclient-2.15.0-1.el6.noarch (from September 2013 in http://repos.fedorapeople.org/repos/openstack/openstack-havana/epel-6/). This seems to be missing some of the more recent functions such as shelve. Would it be possible to generate a later one ? Thanks Tim -- [cid:image003.png at 01CF32CE.C0ABF3A0] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 34628 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Tim Bell.vcf Type: text/x-vcard Size: 4902 bytes Desc: Tim Bell.vcf URL: From kchamart at redhat.com Wed Feb 26 08:11:30 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 26 Feb 2014 13:41:30 +0530 Subject: [Rdo-list] Feedback on RDO install In-Reply-To: <20140226032534.GH310@redhat.com> References: <20140225043901.GK2444@redhat.com> <530CB254.3000503@redhat.com> <530CC94F.3080304@redhat.com> <20140225170444.GA2943@redhat.com> <20140226032534.GH310@redhat.com> Message-ID: <20140226081130.GC3859@tesla.redhat.com> On Wed, Feb 26, 2014 at 11:25:34AM +0800, Daniel Veillard wrote: > On Tue, Feb 25, 2014 at 12:04:44PM -0500, Lars Kellogg-Stedman wrote: > > On Tue, Feb 25, 2014 at 05:48:15PM +0100, Dave Neary wrote: [. . .] 
> At that point I tried various things to have the 'public' network
> allocate IPs in the range of my local LAN, I tried for example
>
> [root at test ~(keystone_admin)]# nova floating-ip-bulk-delete 10.3.4.0/22
> [root at test ~(keystone_admin)]# nova floating-ip-bulk-create 192.168.0.56/29
> [root at test ~(keystone_admin)]# nova-manage floating list
> 2014-02-25 12:22:55.983 13516 DEBUG nova.openstack.common.lockutils
> [req-854a5ff1-baf3-43b3-b381-d23966d7ea04 None None] Got semaphore
> "dbapi_backend" lock
> /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:166
> 2014-02-25 12:22:56.006 13516 DEBUG nova.openstack.common.lockutils
> [req-854a5ff1-baf3-43b3-b381-d23966d7ea04 None None] Got semaphore /
> lock "__get_backend" inner
> /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:245
> None 192.168.0.57 None nova eth0
> None 192.168.0.58 None nova eth0
> None 192.168.0.59 None nova eth0
> None 192.168.0.60 None nova eth0
> None 192.168.0.61 None nova eth0
> None 192.168.0.62 None nova eth0
> [root at test ~(keystone_admin)]#
>
> then I disassociated the floating IP of my guest, which was allocated
> as 172.24.4.127, and tried to reallocate a floating IP, but that didn't
> work; it still picked 172.24.4.x
>
> Then I tried to create another public network matching my range
> and create a router from the private network; the operations worked
> at the interface level but I still could not associate my instance
> interface with it.
> So right now I can create guests, I can't ssh to them, and I have
> no root password to log in from the console (which works); it's a bit
> frustrating.

You may want to check whether your SSH (and ICMP) security group rules are correctly added[1].
> Someone who understands the basic operations needed to have the
> default-created public network allocate IPs in the range of the user
> network should document those operations, and it should be added to
> the RDO setup page, because, as is, even if the setup worked it's not
> really functional :)

Thanks for taking the time to try this.

I see that you're using Nova networking. FWIW, in my testing I use Neutron networking (in a manually configured 2-node setup -- this helps for my kind of work, which involves a lot of debugging). I'd like to outline the flow here, in case you or someone else would like to try Neutron at some point:

Concept of a tenant (now renamed to 'project') network: User-A can have his/her own IP namespace and private subnets, while User-B can _also_ have his/her own isolated set of IP namespace, subnets, and iptables rules.

Networks, subnets, routers creation
-----------------------------------

Once Neutron is configured and its services are running, the steps below apply:

1. Create a Keystone tenant called 'demoten1'.
2. Create a Keystone user called 'tuser1' and associate it with 'demoten1'.
3. Create a Keystone RC file for the user (tuser1) and source it.
4. Create a new private network called 'priv-net1'.
5. Create a new private subnet called 'priv-subnet1' on 'priv-net1'.
6. Create a router called 'trouter1'.
7. Associate the router (trouter1 in this case) with an existing external network (the script assumes it's called 'ext') by setting it as its gateway.
8. Associate the private network interface (priv-net1) with the router (trouter1).
9. Add Neutron security group rules for this test tenant (demoten1) for ICMP and SSH.

There's a blog post here[2] with all the commands noted for the above steps. Also, my iptables rules are here[3], at the end of the page.
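The nine steps above can be condensed into a parameterized runbook. The sketch below only renders command strings and never talks to OpenStack; the names follow the post, the `<...>` values are placeholders, the SSH rule is analogous to the ICMP one, and the flag spellings approximate the Havana-era clients — check each command's `--help` before running:

```python
def tenant_network_runbook(tenant, user, net, subnet, cidr, router, ext="ext"):
    """Render steps 1-9 as CLI strings (illustration only)."""
    return [
        f"keystone tenant-create --name {tenant}",
        f"keystone user-create --name {user} --tenant-id <{tenant}-id> --pass <pw>",
        f"source ~/keystonerc_{user}",
        f"neutron net-create {net}",
        f"neutron subnet-create {net} {cidr} --name {subnet}",
        f"neutron router-create {router}",
        f"neutron router-gateway-set {router} {ext}",
        f"neutron router-interface-add {router} {subnet}",
        # step 9: ICMP shown; the SSH (tcp/22) rule has the same shape
        "neutron security-group-rule-create --protocol icmp "
        "--direction ingress default",
    ]

steps = tenant_network_runbook("demoten1", "tuser1", "priv-net1",
                               "priv-subnet1", "10.0.1.0/24", "trouter1")
print(len(steps))  # 9
```

Rendering the commands per tenant like this makes the isolation point concrete: each tenant gets its own network, subnet, and router, and only the gateway step touches the shared external network.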
[1] http://docs.openstack.org/user-guide/content/configure_security_groups_rules.html
[2] http://kashyapc.com/2013/12/13/script-to-create-neutron-tenant-networks/
[3] http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt

-- /kashyap

From pchalupa at redhat.com Wed Feb 26 09:30:35 2014 From: pchalupa at redhat.com (Petr Chalupa) Date: Wed, 26 Feb 2014 10:30:35 +0100 Subject: [Rdo-list] [OFI] Members list Message-ID: <530DB43B.1010306@redhat.com>

Hi,

I've added a table of all the members to our wiki at http://projects.theforeman.org/projects/ofi/wiki/Members. I think it'll be quite useful to have basic information (such as name, IRC, timezone, component) in one place. Please fill in your details.

Petr

From pchalupa at redhat.com Wed Feb 26 09:51:55 2014 From: pchalupa at redhat.com (Petr Chalupa) Date: Wed, 26 Feb 2014 10:51:55 +0100 Subject: [Rdo-list] [OFI] Wiki - Implementation_Notes Message-ID: <530DB93B.2060600@redhat.com>

I've also started the http://projects.theforeman.org/projects/ofi/wiki/Implementation_Notes page at our wiki.

BTW Redmine can be set to send notifications about all wiki modifications. It can be done on the http://projects.theforeman.org/my/account page. Choose "For any event on the selected projects only..." and then check the OFI box. If we decide to keep all the wiki pages with architecture descriptions there up to date, it can be quite useful, since everybody will be notified about any design/architecture changes.
Petr

From pbrady at redhat.com Wed Feb 26 11:55:13 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Wed, 26 Feb 2014 11:55:13 +0000 Subject: [Rdo-list] python-novaclient RPM version In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E5D9A4E12A@CERNXCHG41.cern.ch> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D9A4E12A@CERNXCHG41.cern.ch> Message-ID: <530DD621.2010105@redhat.com>

On 02/26/2014 07:43 AM, Tim Bell wrote:
>
> In the RDO repositories, the python-novaclient version is python-novaclient-2.15.0-1.el6.noarch (from September 2013 in http://repos.fedorapeople.org/repos/openstack/openstack-havana/epel-6/).
>
> This seems to be missing some of the more recent functions such as shelve.
>
> Would it be possible to generate a later one ?

Well 2.15 is the latest release from upstream:
https://review.openstack.org/gitweb?p=openstack/python-novaclient.git

Russell you did the last release:

$ git show 2.15.0
tag 2.15.0
Tagger: Russell Bryant
Date: Wed Sep 18 08:38:05 2013 -0400

Would it be a good time to tag another?

thanks, Pádraig.

From rbryant at redhat.com Wed Feb 26 12:06:07 2014 From: rbryant at redhat.com (Russell Bryant) Date: Wed, 26 Feb 2014 07:06:07 -0500 Subject: [Rdo-list] python-novaclient RPM version In-Reply-To: <530DD621.2010105@redhat.com> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D9A4E12A@CERNXCHG41.cern.ch> <530DD621.2010105@redhat.com> Message-ID: <530DD8AF.4060409@redhat.com>

On 02/26/2014 06:55 AM, Pádraig Brady wrote:
> On 02/26/2014 07:43 AM, Tim Bell wrote:
>>
>> In the RDO repositories, the python-novaclient version is python-novaclient-2.15.0-1.el6.noarch (from September 2013 in http://repos.fedorapeople.org/repos/openstack/openstack-havana/epel-6/).
>>
>> This seems to be missing some of the more recent functions such as shelve.
>>
>> Would it be possible to generate a later one ?
> > Well 2.15 is the latest release from upstream:
> > https://review.openstack.org/gitweb?p=openstack/python-novaclient.git
> >
> > Russell you did the last release:
> >
> > $ git show 2.15.0
> > tag 2.15.0
> > Tagger: Russell Bryant
> > Date: Wed Sep 18 08:38:05 2013 -0400
> >
> > Would it be a good time to tag another?

Yes. :-)

I keep going to do it, and then I look at the list of code reviews and decide I want to try to get some more patches in first, and then I never go back to do the release.

I'm just going to go ahead and cut a release now. We may do another one fairly soon though to pick up whatever else lands between now and the Icehouse release.

-- Russell Bryant

From hbrock at redhat.com Wed Feb 26 12:20:32 2014 From: hbrock at redhat.com (Hugh O. Brock) Date: Wed, 26 Feb 2014 07:20:32 -0500 Subject: [Rdo-list] [OFI] Members list In-Reply-To: <530DB43B.1010306@redhat.com> References: <530DB43B.1010306@redhat.com> Message-ID: <20140226122031.GB2952@redhat.com>

On Wed, Feb 26, 2014 at 10:30:35AM +0100, Petr Chalupa wrote:
> Hi,
>
> I've added a table of all the members to our wiki at
> http://projects.theforeman.org/projects/ofi/wiki/Members. I think
> it'll be quite useful to have basic information (such as name, IRC,
> timezone, component) in one place. Please fill in your details.
>
> Petr
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list

Thanks Petr, great idea.

--Hugh

-- == Hugh Brock, hbrock at redhat.com == == Senior Engineering Manager, Cloud Engineering == == Tuskar: Elastic Scaling for OpenStack == == http://github.com/tuskar ==

"I know that you believe you understand what you think I said, but I'm not sure you realize that what you heard is not what I meant."
--Robert McCloskey

From rbowen at redhat.com Wed Feb 26 15:44:10 2014 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 26 Feb 2014 10:44:10 -0500 Subject: [Rdo-list] Multi-node deployment with packstack - Tomorrow, 10am (Hangout presentation) Message-ID: <530E0BCA.6000002@redhat.com>

Tomorrow, at 10am, Lars Kellogg-Stedman will be presenting a walk-through of a multi-node deployment using packstack. Details are at http://openstack.redhat.com/Hangouts#Upcoming_Hangouts

The presentation will be streamed live on YouTube, and available recorded after the fact if you can't attend live. We'll also be on IRC at #rdo-hangout on the Freenode network to discuss and answer questions during the event.

-- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From pchalupa at redhat.com Wed Feb 26 15:56:34 2014 From: pchalupa at redhat.com (Petr Chalupa) Date: Wed, 26 Feb 2014 16:56:34 +0100 Subject: [Rdo-list] [OFI] Git repo Message-ID: <530E0EB2.4020506@redhat.com>

Hi,

do we have a repository already? If not, what about forking https://github.com/theforeman/foreman_plugin_template into github.com/theforeman/OFI? Ohad, could you set it up if agreed?

Petr

From shake.chen at gmail.com Wed Feb 26 16:49:38 2014 From: shake.chen at gmail.com (Shake Chen) Date: Thu, 27 Feb 2014 00:49:38 +0800 Subject: [Rdo-list] Bug for multinode deploy with packstack Message-ID:

Hi

Now I am trying to test a multi-node deployment of OpenStack with packstack.

I am using CentOS 6.5, a GRE network, and two NICs. (172.28.1.132 is my control node.)

The bug is: if you install roles on separate nodes, for example Neutron on a single node, and the control node does not run the compute service, then you cannot log in to the dashboard, because the separate role node lacks an iptables rule allowing the control node to access it.
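The missing rule has a regular shape — the same multiport ACCEPT rule quoted earlier in the thread, with the control node as the source. A hypothetical helper that only builds the string (it is not part of packstack and does not touch iptables):

```python
def control_node_accept_rule(src_ip, ports, label):
    """Render the iptables rule the role nodes are missing: allow the
    control node `src_ip` to reach the listed TCP ports."""
    dports = ",".join(str(p) for p in ports)
    return (f'-A INPUT -s {src_ip}/32 -p tcp -m multiport --dports {dports} '
            f'-m comment --comment "{label}" -j ACCEPT')

# Neutron API (9696) plus dnsmasq's DHCP ports (67/68), as in the thread.
rule = control_node_accept_rule("172.28.1.132", [9696, 67, 68],
                                "001 neutron incoming 172.28.1.132")
print(rule)
```

Packstack would need to emit one such rule per role node, sourced from the control node, for each service port that node exposes.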
This is my answer file: http://paste2.org/GDCVUVd9

I have also reported it at https://ask.openstack.org/en/question/12544/bug-for-multinode-deployment-with-packstack/

-- Shake Chen

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From shake.chen at gmail.com Wed Feb 26 16:54:00 2014 From: shake.chen at gmail.com (Shake Chen) Date: Thu, 27 Feb 2014 00:54:00 +0800 Subject: [Rdo-list] Multi-node deployment with packstack - Tomorrow, 10am (Hangout presentation) In-Reply-To: <530E0BCA.6000002@redhat.com> References: <530E0BCA.6000002@redhat.com> Message-ID:

I think this is a bug for multi-node deployment. The bug is: if you install roles on separate nodes, and the control node does not run the compute service, then you cannot log in to the dashboard, because the separate role node lacks an iptables rule allowing the control node to access it.

https://ask.openstack.org/en/question/12544/bug-for-multi-node-deployment-with-packstack/

and my answer file http://paste2.org/GDCVUVd9

On Wed, Feb 26, 2014 at 11:44 PM, Rich Bowen wrote:
> Tomorrow, at 10am, Lars Kellogg-Stedman will be presenting a
> walk-through of a multi-node deployment using packstack. Details are at
> http://openstack.redhat.com/Hangouts#Upcoming_Hangouts
>
> The presentation will be streamed live on YouTube, and available recorded
> after the fact if you can't attend live. We'll also be on IRC at
> #rdo-hangout on the Freenode network to discuss and answer questions during
> the event.
>
> --
> Rich Bowen - rbowen at redhat.com
> OpenStack Community Liaison http://openstack.redhat.com/
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>

-- Shake Chen

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From rdo-info at redhat.com Wed Feb 26 18:31:19 2014 From: rdo-info at redhat.com (RDO Forum) Date: Wed, 26 Feb 2014 18:31:19 +0000 Subject: [Rdo-list] [RDO] OpenStack Summit sessions - please vote Message-ID: <000001446f7716d5-075d6c0a-7aae-42b3-995a-b5ddb8b750cf-000000@email.amazonses.com> rbowen started a discussion. OpenStack Summit sessions - please vote --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/967/openstack-summit-sessions-please-vote Have a great day! From kchamart at redhat.com Thu Feb 27 05:33:43 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 27 Feb 2014 11:03:43 +0530 Subject: [Rdo-list] Bug for multinode deploy with packstack In-Reply-To: References: Message-ID: <20140227053343.GB25995@tesla.redhat.com> On Thu, Feb 27, 2014 at 12:49:38AM +0800, Shake Chen wrote: > Hi > > Now I try to test mutinode deploy Opentack with packstack > > I use centos 6.5, GRE netwrok, two nic. (172.28.1.132 is my control node.) > > he bug is if you separate roles install like neutron in single node, and > the control node not install compute service, the dashboard can not login, > becasue the separate roles node, lack a iptables rule ,allow the control > node access the roles node. Thank you for trying this. But it is hard to follow what you're saying. That said, here's[1] my Neutron configs and iptables rules with GRE that worked for me on Fedora. Also, for future reference, please see this[2]. In short: providing clear information gets you better responses. 
[1] http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt
[2] https://wiki.openstack.org/wiki/BugFilingRecommendations

-- /kashyap

From shake.chen at gmail.com Thu Feb 27 09:06:49 2014 From: shake.chen at gmail.com (Shake Chen) Date: Thu, 27 Feb 2014 17:06:49 +0800 Subject: [Rdo-list] Bug for multinode deploy with packstack In-Reply-To: <20140227053343.GB25995@tesla.redhat.com> References: <20140227053343.GB25995@tesla.redhat.com> Message-ID:

Thanks. I have found many bugs in the multi-node setup and will try to report them.

I can reproduce the bug: http://paste2.org/dzGnB2Zt

I can provide any info if needed.

Right now I cannot log in to the dashboard; debugging shows it cannot connect to Neutron, so I need to ssh to the Neutron node and add an iptables rule:

-A INPUT -s 172.18.1.12/32 -p tcp -m multiport --dports 9696,67,68 -m comment --comment "001 neutron incoming 172.18.1.13" -j ACCEPT

Then it works; I can log in to the Dashboard.

On Thu, Feb 27, 2014 at 1:33 PM, Kashyap Chamarthy wrote:
> On Thu, Feb 27, 2014 at 12:49:38AM +0800, Shake Chen wrote:
> > Hi
> >
> > Now I am trying to test a multi-node deployment of OpenStack with packstack.
> >
> > I am using CentOS 6.5, a GRE network, and two NICs. (172.28.1.132 is my control
> > node.)
> >
> > The bug is: if you install roles on separate nodes, for example Neutron on a
> > single node, and the control node does not run the compute service, then you
> > cannot log in to the dashboard, because the separate role node lacks an
> > iptables rule allowing the control node to access it.
>
> Thank you for trying this.
>
> But it is hard to follow what you're saying.
>
> That said, here's[1] my Neutron configs and iptables rules with GRE that
> worked for me on Fedora.
>
> Also, for future reference, please see this[2]. In short: providing
> clear information gets you better responses.
> > > [1] > http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt > [2] https://wiki.openstack.org/wiki/BugFilingRecommendations > > > -- > /kashyap > -- Shake Chen -------------- next part -------------- An HTML attachment was scrubbed... URL: From pbrady at redhat.com Thu Feb 27 13:07:55 2014 From: pbrady at redhat.com (Pádraig Brady) Date: Thu, 27 Feb 2014 13:07:55 +0000 Subject: [Rdo-list] [package announce] openstack clients update Message-ID: <530F38AB.2090907@redhat.com> Havana RDO client packages have been updated as follows:

python-neutronclient-2.3.1-2 -> 2.3.1-3
  Fix incompatibility with neutron-2013.2.2 http://pad.lv/1277120

python-swiftclient-1.8.0 -> python-swiftclient-2.0.2
  Remove multipart/form-data file upload
  Fix --insecure option on auth
  Port to python-requests
  Fix swiftclient help
  Install manpage in share/man/man1 instead of man/man1
  Add capabilities option
  Install swiftclient manpage
  Add --object-name
  retry on ratelimit
  Fix help of some optional arguments
  Enable usage of proxies defined in environment (http(s)_proxy).
  Don't crash when header is value of None
  Fix download bandwidth for swift command.
  Allow custom headers when using swift download (CLI)
  Add close to swiftclient.client.Connection
  enhance swiftclient logging
  Clarify main help for post subcommand
  Fixes python-swiftclient debugging message
  Add verbose output to all stat commands
  Skip sniffing and reseting if retry is disabled
  user defined headers added to swift post queries
  Extend usage message for `swift download`

python-novaclient-2.15.0 -> 2.16.0
  Invalid client version message unclear
  Fix i18n messages in novaclient, part II
  Update broken command line reference link
  Remove invalid parameter of quota-update
  Adds support for the get_rdp_console API
  Fixed polling after boot in shell
  Fix Serivce class AttributeError
  [UT] Fixed floating_ip_pools fake return to expected one
  [UT] Removed duplicate key from dict in fake baremetal_node
  Flavor ExtraSpecs containing '/' cannot be deleted
  Fix i18n messages in novaclient, part I
  Adds ability to boot a server via the Nova V3 API
  Removes unsupported volume commands from V3 API support
  Fix logic for "nova flavor-show 0#"
  Don't call CS if a token + URL are provided
  Adds volume support for the V3 API
  Fixes ambiguous cli output between "None" and NoneType
  Support list deleted servers for admin
  Using floating-ip-{associate|disassociate}
  Adds quota usage support for the V3 API
  Fix tab-completion of --flags under OS X
  Remove class_name parameter from quota_class
  Ensure that the diagnostics are user friendly
  Added v3 interfaces in reference doc
  Generate interfaces reference doc
  Ensure that nova client prints dictionaries and arrays correctly
  Allow empty response in service-list
  Nova aggregate-details should be more human friendly
  Adding additional tests for novaclient ssh
  Fix "device" as the optional para on volume-attach
  Adds simple tenant usage support for the Nova V3 API
  Adds keypairs support for the Nova V3 API
  Adds certificates support for Nova V3 API
  Adds aggregates support for Nova V3 API
  Adds hypervisor support for Nova V3 API
  Adds services support for Nova V3 API
  Adds second part of quotas support for Nova V3 API
  Adds first part of quotas support for Nova V3 API
  Adds availability zone support for Nova V3 API
  Adds basic servers support for the Nova V3 API
  add support for nova ssh user at host
  Allow multiple volume delete from cli like Cinder
  Expose the rebuild preserve-ephemeral extension
  Stop using deprecated keyring backends
  Adds images support for Nova V3 API
  Remove commands not supported by Nova V3 API
  Adds agent support for Nova V3 API
  Adds flavor access support for Nova V3 API
  Adds flavor support for Nova V3 API
  Allow graceful shutdown on Ctrl+C
  add support for server set metadata item
  Fix incorrect help message on flavor_access action
  Sets default service type for Nova V3 API
  Adds a --show option to the image-create subcommand
  Allows users to retrieve ciphered VM passwords
  Removes unnecessary pass
  nova security-group-* should support uuid as input
  Flatten hypervisor-show dictionary for printing
  Print security groups as a human readable list
  Adds locking to completion caches
  Make 'nova ssh' automatically fall back to private address
  Quote URL in curl output to handle query params
  Add --insecure to curl output if required
  Remove deprecated NOVA_RAX_AUTH
  Print dicts in alphabetical order
  Make os-cache retry on an invalid token
  Document and make OS_CACHE work
  Add shelve/unshelve/shelve-offload command
  if we have a valid auth token, use it instead of generating a new one
  Fix AttributeError in Keypair._add_details()
  Make nova CLI use term "server" where possible
  Novaclient shell list command should support a minimal server list
  Add v3 HostManager

From pbrady at redhat.com Thu Feb 27 13:15:37 2014 From: pbrady at redhat.com (Pádraig Brady) Date: Thu, 27 Feb 2014 13:15:37 +0000 Subject: [Rdo-list] python-novaclient RPM version In-Reply-To: <530DD8AF.4060409@redhat.com> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D9A4E12A@CERNXCHG41.cern.ch> <530DD621.2010105@redhat.com> <530DD8AF.4060409@redhat.com>
Message-ID: <530F3A79.60607@redhat.com> On 02/26/2014 12:06 PM, Russell Bryant wrote: > On 02/26/2014 06:55 AM, Pádraig Brady wrote: >> On 02/26/2014 07:43 AM, Tim Bell wrote: >>> >>> >>> In the RDO repositories, the python-novaclient version is python-novaclient-2.15.0-1.el6.noarch (from September 2013 in http://repos.fedorapeople.org/repos/openstack/openstack-havana/epel-6/). >>> >>> This seems to be missing some of the more recent functions such as shelve. >>> >>> Would it be possible to generate a later one? >> >> Well 2.15 is the latest release from upstream: >> https://review.openstack.org/gitweb?p=openstack/python-novaclient.git >> >> Russell, you did the last release: >> >> $ git show 2.15.0 >> tag 2.15.0 >> Tagger: Russell Bryant >> Date: Wed Sep 18 08:38:05 2013 -0400 >> >> Would it be a good time to tag another? > > Yes. :-) > > I keep going to do it, and then I look at the list of code reviews and > decide I want to try to get some more patches in first, and then I never > go back to do the release. I'm just going to go ahead and cut a release > now. We may do another one fairly soon though to pick up whatever else > lands between now and the Icehouse release. Now released to RDO Havana. Details at: https://www.redhat.com/archives/rdo-list/2014-February/msg00113.html From lars at redhat.com Thu Feb 27 13:47:55 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 27 Feb 2014 08:47:55 -0500 Subject: [Rdo-list] Bug for multinode deploy with packstack In-Reply-To: References: <20140227053343.GB25995@tesla.redhat.com> Message-ID: <20140227134755.GB7658@redhat.com> On Thu, Feb 27, 2014 at 05:06:49PM +0800, Shake Chen wrote: > I have found many bugs in the multinode setup and have tried to report them. Note that at 10AM Eastern *today* (2/27), I'll be giving an RDO hangout on multinode installs with packstack. 
Details are here: http://openstack.redhat.com/Hangouts I'll be running through a complete install, including a number of post-install configuration steps. Cheers, -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From rbowen at redhat.com Thu Feb 27 14:41:05 2014 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 27 Feb 2014 09:41:05 -0500 Subject: [Rdo-list] Multi-node deployment with packstack - Tomorrow, 10am (Hangout presentation) In-Reply-To: <530E0BCA.6000002@redhat.com> References: <530E0BCA.6000002@redhat.com> Message-ID: <530F4E81.7030007@redhat.com> On 02/26/2014 10:44 AM, Rich Bowen wrote: > Tomorrow, at 10am, Lars Kellogg-Stedman will be presenting a > walk-through of a multi-node deployment using packstack. Details are > at http://openstack.redhat.com/Hangouts#Upcoming_Hangouts > > The presentation will be streamed live on YouTube, and available > recorded after the fact if you can't attend live. We'll also be on IRC > at #rdo-hangout on the Freenode network to discuss and answer > questions during the event. > The event will be streamed live at https://plus.google.com/events/cm9ff549vmsim737lj7hopk4gao and we'll be on #rdo-hangout for discussion and questions. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mtaylor at redhat.com Thu Feb 27 16:27:14 2014 From: mtaylor at redhat.com (Martyn Taylor) Date: Thu, 27 Feb 2014 16:27:14 +0000 Subject: [Rdo-list] [foreman-dev] [OFI] RFC: Dynflow Orchestration. 
Initial Ideas / User Stories In-Reply-To: <530CB31E.2090106@redhat.com> References: <530CB31E.2090106@redhat.com> Message-ID: <530F6762.50802@redhat.com> Including RDO List On 25/02/14 15:13, Martyn Taylor wrote: > All, > > We've started pulling together a plan for adding orchestration to the > OpenStack Foreman Installer. > > We have a bunch of User Stories and some implementation notes here: > http://pad-katello.rhcloud.com/p/ofi-orchestration that we would like > feedback on. > > Please have a read over and let us know if there are any obvious > issues / gaps. We'll be working from the etherpad, so please add > comments and questions there. Please add : so we can > keep track of discussion. > > Thanks a lot > Martyn > > -- > You received this message because you are subscribed to the Google > Groups "foreman-dev" group. > To unsubscribe from this group and stop receiving emails from it, send > an email to foreman-dev+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. -------------- next part -------------- An HTML attachment was scrubbed... URL: From xzhao at bnl.gov Thu Feb 27 18:27:09 2014 From: xzhao at bnl.gov (Xin Zhao) Date: Thu, 27 Feb 2014 13:27:09 -0500 Subject: [Rdo-list] multiple nova-conductor settings Message-ID: <530F837D.2080608@bnl.gov> Hello, There is a patch for setting up multiple nova-conductors (https://review.openstack.org/#/c/42342/). I wonder if this is included in the grizzly packages in the RDO repo? And are there any documents on how to configure it? I am now running openstack-nova-conductor-2013.1.4-6.el6.noarch on my controller. 
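For reference, the patch in question is driven by a single option. A minimal nova.conf sketch follows; the `[conductor] workers` option name is taken from the nova.conf.sample linked later in the thread, so verify it against the file shipped in your installed package before relying on it:

```ini
# /etc/nova/nova.conf fragment (sketch): fork multiple nova-conductor
# worker processes; option name assumed from the upstream
# nova.conf.sample -- confirm against your packaged nova.conf
[conductor]
workers = 4
```

After changing it, restart the openstack-nova-conductor service for the new worker count to take effect.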
Thanks, Xin From pbrady at redhat.com Thu Feb 27 20:16:47 2014 From: pbrady at redhat.com (Pádraig Brady) Date: Thu, 27 Feb 2014 20:16:47 +0000 Subject: [Rdo-list] multiple nova-conductor settings In-Reply-To: <530F837D.2080608@bnl.gov> References: <530F837D.2080608@bnl.gov> Message-ID: <530F9D2F.2020008@redhat.com> On 02/27/2014 06:27 PM, Xin Zhao wrote: > Hello, > > There is a patch for setting up multiple nova-conductors (https://review.openstack.org/#/c/42342/). I wonder if this is included in the grizzly packages in the RDO repo? And are there any documents on how to configure it? > > I am now running openstack-nova-conductor-2013.1.4-6.el6.noarch on my controller. You're in luck. That patch was backported to RDO grizzly, and is available in the version you have installed: https://bugzilla.redhat.com/1012148 If you drill down through your referenced commit above, you can see the single config variable used to configure it: https://review.openstack.org/#/c/42342/5/etc/nova/nova.conf.sample thanks, Pádraig. From rbowen at redhat.com Thu Feb 27 21:57:47 2014 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 27 Feb 2014 16:57:47 -0500 Subject: [Rdo-list] [Rdo-newsletter] Vote for the OpenStack Summit schedule Message-ID: <530FB4DB.1030002@redhat.com> Hello, RDO fans, This isn't the regularly scheduled monthly newsletter - I wanted to tell you about the opportunity to influence the schedule of the most important event in the OpenStack ecosystem. The OpenStack Summit is just around the corner - May 12-16 in Atlanta, Georgia. Right now, voting is ongoing to decide what presentations will appear at that event. 
While we, of course, would like to see folks from the RDO community have their talks selected (we've listed some of them at http://openstack.redhat.com/forum/discussion/967/openstack-summit-sessions-please-vote), we encourage you to vote for the talks that you're most interested in, at https://www.openstack.org/vote-atlanta The vote closes on March 2nd - this weekend - so don't wait too long! -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From goncalo at lip.pt Fri Feb 28 10:10:15 2014 From: goncalo at lip.pt (Gonçalo Borges) Date: Fri, 28 Feb 2014 10:10:15 +0000 Subject: [Rdo-list] VXLAN support Message-ID: <53106087.70409@lip.pt> Hi Guys... I would like to ask a couple of questions regarding VXLAN support in RDO. Let me summarize the context: 1) The documentation regarding VXLAN is available here: http://openstack.redhat.com/Using_VXLAN_Tenant_Networks 2) The previous link says that VXLAN will be supported in RDO after solving the following bug: https://bugzilla.redhat.com/show_bug.cgi?id=1021778 3) The previous bug seems to be already solved in openstack-packstack-2013.2.1-0.23.dev979.el6ost. Now the question: I have installed openstack-packstack-2013.2.1-0.29.dev956.el6.noarch (higher than the fixed version), so VXLAN should be natively supported. However, I cannot find any documentation on how to change the answers file. There are some recipes where you start by deploying a GRE configuration and then manually change some items to support VXLAN. However, if VXLAN is already supported in RDO, then it should be configurable directly from the answers file. Is there some documentation on that topic? TIA Cheers Goncalo -------------- next part -------------- A non-text attachment was scrubbed... 
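On the answers-file question above: in packstack builds where the VXLAN fix has landed, the tenant network type should be selectable directly in the generated answers file. A hedged sketch of the relevant parameters follows; the names are assumed from Havana-era answer files, so confirm them against a freshly generated file (packstack --gen-answer-file) before use:

```ini
# packstack answers file fragment (sketch; parameter names assumed from
# Havana-era packstack -- confirm against your generated answers file)
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
```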
Name: smime.p7s Type: application/pkcs7-signature Size: 1668 bytes Desc: S/MIME Cryptographic Signature URL: From pchalupa at redhat.com Fri Feb 28 16:24:08 2014 From: pchalupa at redhat.com (Petr Chalupa) Date: Fri, 28 Feb 2014 17:24:08 +0100 Subject: [Rdo-list] [OFI] Dev setup Message-ID: <5310B828.2080101@redhat.com> Hi, I've quickly put together some notes on how I installed my development setup. It may be helpful, so here it is: http://pad-katello.rhcloud.com/p/foreman-install-ofi There may be some gaps; I wrote it retrospectively, so I may have forgotten to put something there. Petr From katy.thomas at edatalist.com Fri Feb 28 20:44:59 2014 From: katy.thomas at edatalist.com (Katy Thomas) Date: Fri, 28 Feb 2014 15:44:59 -0500 Subject: [Rdo-list] Open-Source Software Products Users Contacts Release Q1 Message-ID: Hi, Would you be interested in acquiring our open-source software products users list with the updated count and verified contacts? The list was recently verified in the fourth week of January, is ready to use, and can be helpful for your email marketing, tele-marketing, and other marketing campaign initiatives, for unlimited usage. 21st February 2014 Televerified contacts. - Ubuntu users - CentOS users - Debian users - Solaris users - openSUSE users We have a global reach and can customize your list to offer you a more targeted approach. Our lists are tele-verified and guarantee 100% data accuracy. We also provide data cleansing and appending services. If interested, let me know your target criteria so that I can get back to you with relevant information. Would such detail give you an edge over your competition? Could we talk more? 
Regards, Katy Thomas, Demand Generation Manager eDatalist LLC List acquisition I Tracked Email campaign I Email/Data Appending I Search Engine Optimization I Custom Built List I Tele Marketing I Multi Channel Marketing I Web-site Designing I If you do not wish to receive future emails from us, please reply as 'leave out' -------------- next part -------------- An HTML attachment was scrubbed... URL: