From vaibhav.k.agarwal at in.com Tue Apr 2 06:01:17 2013
From: vaibhav.k.agarwal at in.com (Kumar Vaibhav)
Date: Tue, 02 Apr 2013 11:31:17 +0530
Subject: [rhos-list] Add user field in the instance panel of dashboard
Message-ID: <1364882477.cc0991344c3d760ae42259064406bae1@mail.in.com>

Hi, Thanks for the help. It worked for me. Regards, Vaibhav

Original message From: "Julie Pichon" < jpichon at redhat.com > Date: 28 Mar 13 16:15:33 Subject: Re: [rhos-list] Add user field in the instance panel of dashboard To: Kumar Vaibhav Cc: rhos-list

Hi,

"Kumar Vaibhav" wrote:
> Hi, As an admin I find it difficult to know the instance-user mapping. I want
> the user_id information to be added to the Dashboard on the instance page.
> Is it possible to do this? How can I augment the dashboard to add this
> feature? Regards, Vaibhav

It should be possible to do this. There is some documentation on how to modify an existing panel: http://docs.openstack.org/developer/horizon/topics/customizing.html#modifying-existing-dashboards-and-panels
I haven't tried this but I think your best bet would be to create a new customised panel that extends the existing one, in which you modify the Table to include the additional field. Then you would unregister the existing Instances panel using the previous docs, and register yours to replace it. You can see the existing Table at https://github.com/openstack/horizon/blob/stable/folsom/horizon/dashboards/syspanel/instances/tables.py#L44 . Looking at that code, it looks like there already used to be a user_id there (line 64) but it's commented out due to scalability concerns when expanding it into a name. If you only care about the user_id, this is the line you should add to your customised Table. Hope this helps, Julie

Get Yourself a cool, short @in.com Email ID now!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vaibhav.k.agarwal at in.com Tue Apr 2 06:03:15 2013
From: vaibhav.k.agarwal at in.com (Kumar Vaibhav)
Date: Tue, 02 Apr 2013 11:33:15 +0530
Subject: [rhos-list] Control Access to instance termination
Message-ID: <1364882595.c850371fda6892fbfd1c5a5b457e5777@mail.in.com>

Hi, This doesn't work, as I am not able to restrict the content to be user_id based. I need to do some code changes in the get_all function. Regards, Vaibhav

Original message From: "Eoghan Glynn" < eglynn at redhat.com > Date: 26 Mar 13 20:29:46 Subject: Re: [rhos-list] Control Access to instance termination To: Kumar Vaibhav Cc: rhos-list

> > Hi,
> >
> > Thanks for the help.
> > This seems to solve one part of my problem of changing the state of
> > the instance.
> > A user cannot delete the other users' instance.
>
> Great.
>
> > However the listing problem still continues to exist. I checked the
> > logs and found that get_all access control is possible by using the
> > policy.json. But the get_all function itself uses the filter of
> > 'project_id' from the context. So the other part seems to be difficult.
>
> I'm not sure I see the problem here, as nova.compute.api.API.get_all
> bases its policy enforcement check on a target that includes both
> the project_id *and* user_id:
>
> https://github.com/openstack/nova/blob/stable/folsom/nova/compute/api.py#L1116
>
> So it seems to me that a rule based on user_id would be applicable
> in this case also.
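A minimal sketch of how a rule along these lines might look in nova's policy.json, using the older list syntax quoted further down this thread. The check names compute:get_all and compute:delete are assumptions for illustration, not something confirmed in this thread; the %(project_id)s and %(user_id)s substitutions come from the target attributes Eoghan describes.

  "compute:get_all": [["role:admin"], ["project_id:%(project_id)s", "user_id:%(user_id)s"]],
  "compute:delete": [["role:admin"], ["user_id:%(user_id)s"]]

With the old syntax the outer list is an OR of alternatives and each inner list is an AND of conditions, so the second entry in each rule only matches the owning user.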
> Again I've just done a quick test against master,
> please let me know if the behavior you're seeing with your version
> of RHOS is different.
>
> Cheers,
> Eoghan
>
> > Regards,
> > Vaibhav
> >
> > Original message From: "Eoghan Glynn" < eglynn at redhat.com > Date: 25 Mar 13 22:08:26 Subject: Re: [rhos-list] Control Access to instance termination To: Kumar Vaibhav Cc: rhos-list
> >
> > > or using the older syntax:
> > >
> > > [["role:admin"], ["role:projectadmin",
> > > "project_id:%(project_id)s"]], ["user_id:%(user_id)s"]]
> >
> > Typo:
> >
> > [["role:admin"], ["role:projectadmin",
> > "project_id:%(project_id)s"], ["user_id:%(user_id)s"]]
> >
> > Get Yourself a cool, short @in.com Email ID now!
>
> rhos-list mailing list
> rhos-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhos-list

Get Yourself a cool, short @in.com Email ID now!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vaibhav.k.agarwal at in.com Tue Apr 2 06:09:05 2013
From: vaibhav.k.agarwal at in.com (Kumar Vaibhav)
Date: Tue, 02 Apr 2013 11:39:05 +0530
Subject: [rhos-list] Actions command not working
Message-ID: <1364882945.ec26fc2eb2b75aece19c70392dc744c2@mail.in.com>

Hi, I am trying to use 'nova actions ' but it gives

ERROR: n/a (HTTP 404)

In nova/api.log I get this:

2013-04-02 11:35:26 INFO nova.osapi_compute.wsgi.server [req-05718f39-7d7e-4653-afe7-7b223aa5f502 nirbhayc compd] 90.3.26.52 - - [02/Apr/2013 11:35:26] "GET /v2/compd/servers/f301d675-20ae-42c3-8ee3-9ae6f5467a90/actions HTTP/1.1" 404 176 0.078454

And as mentioned on this page http://api.openstack.org/api-ref.html the function os-instance-actions is also giving the same error. Regards, Vaibhav

Dear rhos-list! Get Yourself a cool, short @in.com Email ID now!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From eglynn at redhat.com Tue Apr 2 08:18:42 2013
From: eglynn at redhat.com (Eoghan Glynn)
Date: Tue, 2 Apr 2013 04:18:42 -0400 (EDT)
Subject: [rhos-list] Control Access to instance termination
In-Reply-To: <1364882595.c850371fda6892fbfd1c5a5b457e5777@mail.in.com>
References: <1364882595.c850371fda6892fbfd1c5a5b457e5777@mail.in.com>
Message-ID: <2062536290.658917.1364890722294.JavaMail.root@redhat.com>

> Hi, This doesn't work, as I am not able to restrict the content to be user_id
> based. I need to do some code changes in the get_all function. Regards, Vaibhav

Hi,

Can you file a bug describing your expectations and the exact behavior that you're seeing? We can then proceed to getting a fix in place if necessary.

Thanks!
Eoghan

From prmarino1 at gmail.com Tue Apr 2 14:20:28 2013
From: prmarino1 at gmail.com (Paul Robert Marino)
Date: Tue, 2 Apr 2013 10:20:28 -0400
Subject: [rhos-list] Quantum RPC errors
Message-ID:

I have a strange one. I keep seeing this in quantum's logs:
"
2013-04-02 09:50:59 ERROR [quantum.openstack.common.rpc.common] AMQP server on localhost:5672 is unreachable: Socket closed. Trying again in 1 seconds.
2013-04-02 09:51:00 INFO [quantum.openstack.common.rpc.common] Reconnecting to AMQP server on localhost:5672
2013-04-02 09:51:00 ERROR [quantum.openstack.common.rpc.common] AMQP server on localhost:5672 is unreachable: Socket closed. Trying again in 3 seconds.
2013-04-02 09:51:03 INFO [quantum.openstack.common.rpc.common] Reconnecting to AMQP server on localhost:5672
2013-04-02 09:51:03 ERROR [quantum.openstack.common.rpc.common] AMQP server on localhost:5672 is unreachable: Socket closed. Trying again in 5 seconds.
2013-04-02 09:51:08 INFO [quantum.openstack.common.rpc.common] Reconnecting to AMQP server on localhost:5672 2013-04-02 09:51:08 ERROR [quantum.openstack.common.rpc.common] AMQP server on localhost:5672 is unreachable: Socket closed. Trying again in 7 seconds. 2013-04-02 09:51:15 INFO [quantum.openstack.common.rpc.common] Reconnecting to AMQP server on localhost:5672 2013-04-02 09:51:15 ERROR [quantum.openstack.common.rpc.common] AMQP server on localhost:5672 is unreachable: Socket closed. Trying again in 9 seconds. 2013-04-02 09:51:24 INFO [quantum.openstack.common.rpc.common] Reconnecting to AMQP server on localhost:5672 2013-04-02 09:51:24 ERROR [quantum.openstack.common.rpc.common] AMQP server on localhost:5672 is unreachable: Socket closed. Trying again in 11 seconds. " now there are two strange things about this 1) qpid is running on the same host as quantum server " netstat -tlnp |grep qpid tcp 0 0 0.0.0.0:5672 0.0.0.0:* LISTEN 21954/qpidd tcp 0 0 :::5672 :::* LISTEN 21954/qpidd " 2) its trying to connect to localhost when qpid_hostname in the quantum.conf is set to the fully quallified domain name of the server. also rpc_back_end is set to quantum.openstack.common.rpc.impl_qpid and it gets better if i set rabbit_host it still doesn't connect but i at least see the right host name in the logs, I think there may be a bug here but I'm not sure yet. has any one else seen this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From prmarino1 at gmail.com Tue Apr 2 14:50:31 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Tue, 2 Apr 2013 10:50:31 -0400 Subject: [rhos-list] Quantum RPC errors In-Reply-To: References: Message-ID: this is definitely starting to look like a bug it happens with openstack-quantum-2012.2.3 and not with openstack-quantum-2012.2.1 On Tue, Apr 2, 2013 at 10:20 AM, Paul Robert Marino wrote: > I have a strange one > > I keep seeing this in quantums logs > " > 2013-04-02 09:50:59 ERROR [quantum.openstack.common.rpc.common] AMQP > server on localhost:5672 is unreachable: Socket closed. Trying again in 1 > seconds. > 2013-04-02 09:51:00 INFO [quantum.openstack.common.rpc.common] > Reconnecting to AMQP server on localhost:5672 > 2013-04-02 09:51:00 ERROR [quantum.openstack.common.rpc.common] AMQP > server on localhost:5672 is unreachable: Socket closed. Trying again in 3 > seconds. > 2013-04-02 09:51:03 INFO [quantum.openstack.common.rpc.common] > Reconnecting to AMQP server on localhost:5672 > 2013-04-02 09:51:03 ERROR [quantum.openstack.common.rpc.common] AMQP > server on localhost:5672 is unreachable: Socket closed. Trying again in 5 > seconds. > 2013-04-02 09:51:08 INFO [quantum.openstack.common.rpc.common] > Reconnecting to AMQP server on localhost:5672 > 2013-04-02 09:51:08 ERROR [quantum.openstack.common.rpc.common] AMQP > server on localhost:5672 is unreachable: Socket closed. Trying again in 7 > seconds. > 2013-04-02 09:51:15 INFO [quantum.openstack.common.rpc.common] > Reconnecting to AMQP server on localhost:5672 > 2013-04-02 09:51:15 ERROR [quantum.openstack.common.rpc.common] AMQP > server on localhost:5672 is unreachable: Socket closed. Trying again in 9 > seconds. > 2013-04-02 09:51:24 INFO [quantum.openstack.common.rpc.common] > Reconnecting to AMQP server on localhost:5672 > 2013-04-02 09:51:24 ERROR [quantum.openstack.common.rpc.common] AMQP > server on localhost:5672 is unreachable: Socket closed. Trying again in 11 > seconds. 
> " > > now there are two strange things about this > > 1) qpid is running on the same host as quantum server > " > netstat -tlnp |grep qpid > tcp 0 0 0.0.0.0:5672 0.0.0.0:* > LISTEN 21954/qpidd > tcp 0 0 :::5672 > :::* LISTEN 21954/qpidd > " > > 2) its trying to connect to localhost when qpid_hostname in the > quantum.conf is set to the fully quallified domain name of the server. > > also rpc_back_end is set to quantum.openstack.common.rpc.impl_qpid > > > and it gets better if i set rabbit_host it still doesn't connect but i at > least see the right host name in the logs, > > > I think there may be a bug here but I'm not sure yet. > > has any one else seen this? > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prmarino1 at gmail.com Tue Apr 2 15:10:37 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Tue, 2 Apr 2013 11:10:37 -0400 Subject: [rhos-list] Quantum RPC errors In-Reply-To: References: Message-ID: Ive created a bug ticket https://bugzilla.redhat.com/show_bug.cgi?id=947498 On Tue, Apr 2, 2013 at 10:50 AM, Paul Robert Marino wrote: > this is definitely starting to look like a bug > > it happens with openstack-quantum-2012.2.3 and not with > openstack-quantum-2012.2.1 > > > > On Tue, Apr 2, 2013 at 10:20 AM, Paul Robert Marino wrote: > >> I have a strange one >> >> I keep seeing this in quantums logs >> " >> 2013-04-02 09:50:59 ERROR [quantum.openstack.common.rpc.common] AMQP >> server on localhost:5672 is unreachable: Socket closed. Trying again in 1 >> seconds. >> 2013-04-02 09:51:00 INFO [quantum.openstack.common.rpc.common] >> Reconnecting to AMQP server on localhost:5672 >> 2013-04-02 09:51:00 ERROR [quantum.openstack.common.rpc.common] AMQP >> server on localhost:5672 is unreachable: Socket closed. Trying again in 3 >> seconds. >> 2013-04-02 09:51:03 INFO [quantum.openstack.common.rpc.common] >> Reconnecting to AMQP server on localhost:5672 >> 2013-04-02 09:51:03 ERROR [quantum.openstack.common.rpc.common] AMQP >> server on localhost:5672 is unreachable: Socket closed. Trying again in 5 >> seconds. >> 2013-04-02 09:51:08 INFO [quantum.openstack.common.rpc.common] >> Reconnecting to AMQP server on localhost:5672 >> 2013-04-02 09:51:08 ERROR [quantum.openstack.common.rpc.common] AMQP >> server on localhost:5672 is unreachable: Socket closed. Trying again in 7 >> seconds. >> 2013-04-02 09:51:15 INFO [quantum.openstack.common.rpc.common] >> Reconnecting to AMQP server on localhost:5672 >> 2013-04-02 09:51:15 ERROR [quantum.openstack.common.rpc.common] AMQP >> server on localhost:5672 is unreachable: Socket closed. Trying again in 9 >> seconds. >> 2013-04-02 09:51:24 INFO [quantum.openstack.common.rpc.common] >> Reconnecting to AMQP server on localhost:5672 >> 2013-04-02 09:51:24 ERROR [quantum.openstack.common.rpc.common] AMQP >> server on localhost:5672 is unreachable: Socket closed. Trying again in 11 >> seconds. >> " >> >> now there are two strange things about this >> >> 1) qpid is running on the same host as quantum server >> " >> netstat -tlnp |grep qpid >> tcp 0 0 0.0.0.0:5672 0.0.0.0:* >> LISTEN 21954/qpidd >> tcp 0 0 :::5672 >> :::* LISTEN 21954/qpidd >> " >> >> 2) its trying to connect to localhost when qpid_hostname in the >> quantum.conf is set to the fully quallified domain name of the server. 
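For readers following along, the two settings under discussion live in the [DEFAULT] section of quantum.conf. A minimal sketch of what Paul describes, with a placeholder host name; note that the option is spelled rpc_backend in the shipped sample configuration:

  [DEFAULT]
  rpc_backend = quantum.openstack.common.rpc.impl_qpid
  qpid_hostname = quantum-controller.example.com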
>> >> also rpc_back_end is set to quantum.openstack.common.rpc.impl_qpid >> >> >> and it gets better if i set rabbit_host it still doesn't connect but i at >> least see the right host name in the logs, >> >> >> I think there may be a bug here but I'm not sure yet. >> >> has any one else seen this? >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.oulevey at cern.ch Thu Apr 4 08:34:35 2013 From: thomas.oulevey at cern.ch (Thomas Oulevey) Date: Thu, 4 Apr 2013 10:34:35 +0200 Subject: [rhos-list] Grizzly, Packstack and 6.4 Message-ID: <515D3B1B.8060801@cern.ch> Hi, * git clone --recursive git://github.com/stackforge/packstack.git * packstack --gen-answer-file=grizzly.txt * Edit grizzly: CONFIG_REPO=http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/ * packstack --answer-file=grizzly.txt -d 1/ openstack-keystone fails to start because the log file is own by root user. => changing ownership to "keystone" works. 2/ nova packages are not installed. Nothing reported in logs => Puppet fails : Could not autoload nova_config: Could not auto load /var/lib/puppet/lib/puppet/provider/nova_config/parsed.rb: undefined method `default_target' for Puppet::Type::Nova_config:Class at /var/tmp/packstack/9428f6387b344b1897dc3057ae82437c/manifests/10.32.22.19_nova.pp:31 Github issue is not enabled, where should we report bugs ? bugzilla ? Anybody succeed installing Grizzly with packstack on 6.4 ? cheers, Thomas From jistr at redhat.com Thu Apr 4 09:35:27 2013 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Thu, 04 Apr 2013 11:35:27 +0200 Subject: [rhos-list] Grizzly, Packstack and 6.4 In-Reply-To: <515D3B1B.8060801@cern.ch> References: <515D3B1B.8060801@cern.ch> Message-ID: <515D495F.9020609@redhat.com> Hi Thomas, > 1/ openstack-keystone fails to start because the log file is own by root user. > => changing ownership to "keystone" works. I hit the same issue and resolved it the same way. I think it's already been reported: https://bugzilla.redhat.com/show_bug.cgi?id=946915 > 2/ nova packages are not installed. Nothing reported in logs > => Puppet fails : Could not autoload nova_config: Could not auto load /var/lib/puppet/lib/puppet/provider/nova_config/parsed.rb: undefined method `default_target' for Puppet::Type::Nova_config:Class at /var/tmp/packstack/9428f6387b344b1897dc3057ae82437c/manifests/10.32.22.19_nova.pp:31 I didn't hit this one. After fixing the keystone issue, OpenStack installed fine. I added the grizzly epel repo manually rather than let Packstack do it (I didn't know it could do it for me :) ), but I think this doesn't make any difference wrt your issue. I used commit 34300029ff, so maybe you could try `git checkout 34300029ff && git submodule init && git submodule update`. > Github issue is not enabled, where should we report bugs ? bugzilla ? Yes I think so. https://bugzilla.redhat.com/buglist.cgi?component=openstack-packstack&product=Fedora Regards, Jiri From derekh at redhat.com Thu Apr 4 10:15:08 2013 From: derekh at redhat.com (Derek Higgins) Date: Thu, 04 Apr 2013 11:15:08 +0100 Subject: [rhos-list] Grizzly, Packstack and 6.4 In-Reply-To: <515D495F.9020609@redhat.com> References: <515D3B1B.8060801@cern.ch> <515D495F.9020609@redhat.com> Message-ID: <515D52AC.3070600@redhat.com> On 04/04/2013 10:35 AM, Ji?? Str?nsk? wrote: > Hi Thomas, > >> 1/ openstack-keystone fails to start because the log file is own by >> root user. >> => changing ownership to "keystone" works. 
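The ownership fix described in item 1/ amounts to a one-line change; a rough sketch, assuming the default log location used by the EL6 keystone packages:

  chown keystone:keystone /var/log/keystone/keystone.log
  service openstack-keystone restart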
> I hit the same issue and resolved it the same way. I think it's already
> been reported:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=946915

by the looks of it this is being caused by a change in keystone-manage, specifically this commit
https://github.com/openstack/keystone/commit/a198f59df7063424dcf682430d24ba67dc562b79
keystone-manage didn't use to create the keystone.log file but now it does.
If you want to try it, I submitted an update to packstack https://review.openstack.org/#/c/26075/

>> 2/ nova packages are not installed. Nothing reported in logs
>> => Puppet fails : Could not autoload nova_config: Could not auto load
>> /var/lib/puppet/lib/puppet/provider/nova_config/parsed.rb: undefined
>> method `default_target' for Puppet::Type::Nova_config:Class at
>> /var/tmp/packstack/9428f6387b344b1897dc3057ae82437c/manifests/10.32.22.19_nova.pp:31

I haven't seen this problem before, what version of puppet are you using?

> I didn't hit this one. After fixing the keystone issue, OpenStack
> installed fine. I added the grizzly epel repo manually rather than let
> Packstack do it (I didn't know it could do it for me :) ), but I think
> this doesn't make any difference wrt your issue.
>
> I used commit 34300029ff, so maybe you could try `git checkout
> 34300029ff && git submodule init && git submodule update`.
>
>> Github issue is not enabled, where should we report bugs ? bugzilla ?
>
> Yes I think so.

yup, we're using bugzilla

> https://bugzilla.redhat.com/buglist.cgi?component=openstack-packstack&product=Fedora
>
> Regards,
>
> Jiri
>
> _______________________________________________
> rhos-list mailing list
> rhos-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhos-list

From nicolas.vogel at heig-vd.ch Tue Apr 9 12:26:08 2013
From: nicolas.vogel at heig-vd.ch (Vogel Nicolas)
Date: Tue, 9 Apr 2013 12:26:08 +0000
Subject: [rhos-list] login after packstack installation
Message-ID:

Hi, I have successfully installed OpenStack on two nodes (running CentOS 6.3) using the packstack "2012.2.2-0.2.dev211.el6.noarch.rpm" package. I got no error, but no keystonerc_admin file was created on my controller node, so I don't know how I could log in to the dashboard. Where can I find the credentials that were used by packstack?

I have also successfully installed a controller node (also running CentOS 6.3) manually using the Red Hat documentation Rev. 1.0-28 from 03/27/2013. Now I want to install a compute node but I don't know how to start. What should be installed on the compute node: only Nova and Cinder? I'm using Nova-network and not Quantum for the moment. Should I complete the keystone service catalog on the controller with the new services on the compute node? If someone has a link or a doc for compute node installation it will be very helpful for me. Thanks a lot, Cheers, Nicolas.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nicolas.vogel at heig-vd.ch Tue Apr 9 12:44:33 2013
From: nicolas.vogel at heig-vd.ch (Vogel Nicolas)
Date: Tue, 9 Apr 2013 12:44:33 +0000
Subject: [rhos-list] login after packstack installation
In-Reply-To: <51640914.1020404@redhat.com>
References: <51640914.1020404@redhat.com>
Message-ID:

Oups, sorry, I found it. I was looking at the wrong server... Sorry for this. And for the compute node installation, how should I start?
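For reference, the credentials file under discussion is just a set of environment variables that packstack writes for the CLI clients; a rough sketch of how it is typically used, assuming the packstack-generated file in root's home directory (the exact name, keystonerc_admin or .keystonerc, varies as noted in this thread):

  source /root/keystonerc_admin
  keystone user-list
  nova list

If the clients return the user and instance lists without prompting for credentials, the file is being picked up correctly.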
From: Frederik Bijlsma [mailto:fbijlsma at redhat.com] Sent: mardi 9 avril 2013 14:27 To: Vogel Nicolas Subject: Re: [rhos-list] login after packstack installation You should have a file .keystonerc in the /root directory. On 04/09/2013 02:26 PM, Vogel Nicolas wrote: Hi, I have successfully installed openstack on two nodes (running CentOS 6.3) using packstack ? 2012.2.2-0.2.dev211.el6 .noarch.rpm ? package. I got no error, but no keystonerc_admin file was created on my controller node. So I don?t know how I could login to the dashboard. Where can I find the credentials that where used by packstack? I have also successfully installed a controller node (also running CentOS 6.3) manually using the RedHat documentation Rev. 1.0-28 from 03/27/2013. Now I wan?t to install a compute node but I don?t know how to start? What should be installed on the compute node ? only Nova and Cinder? I?m using Nova-network and not Quantum for the moment. Should I complete the keystone service-catalog on the controller with the new services on the compute node? If someone has a link or a doc for compute node installation it will be very helpful for me. Thanks a lot, Cheers, Nicolas. _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.vogel at heig-vd.ch Tue Apr 9 13:37:15 2013 From: nicolas.vogel at heig-vd.ch (Vogel Nicolas) Date: Tue, 9 Apr 2013 13:37:15 +0000 Subject: [rhos-list] login after packstack installation In-Reply-To: References: <51640914.1020404@redhat.com> Message-ID: Yes I have done that with packstack and it works. For the moment I see that keystone, glance and nova are installed on the compute node. But why not Cinder? I already created a VG ?cinder-volumes? on the compute node but Cinder was not installed on it, only on the controller. My question for the compute node was for a manual installation. With Packstack I don?t know what was installed on the compute node and what was not. And for debug and my own understanding from Openstack I would be able to install compute nodes manually so I can make my cloud scale when I get more experience with it. From: cloud-bounces at lists.fedoraproject.org [mailto:cloud-bounces at lists.fedoraproject.org] On Behalf Of Sandro "red" Mathys Sent: mardi 9 avril 2013 15:13 To: Fedora Cloud SIG Subject: Re: [rhos-list] login after packstack installation On Tue, Apr 9, 2013 at 2:44 PM, Vogel Nicolas > wrote: Oups sorry I found it I was looking at the wrong server? Sorry for this And for the compute node installation how shoud I start ? Specify the compute nodes at CONFIG_NOVA_COMPUTE_HOSTS which takes a comma separated list. From: Frederik Bijlsma [mailto:fbijlsma at redhat.com] Sent: mardi 9 avril 2013 14:27 To: Vogel Nicolas Subject: Re: [rhos-list] login after packstack installation You should have a file .keystonerc in the /root directory. On 04/09/2013 02:26 PM, Vogel Nicolas wrote: Hi, I have successfully installed openstack on two nodes (running CentOS 6.3) using packstack ? 2012.2.2-0.2.dev211.el6 .noarch.rpm ? package. I got no error, but no keystonerc_admin file was created on my controller node. So I don?t know how I could login to the dashboard. Where can I find the credentials that where used by packstack? I have also successfully installed a controller node (also running CentOS 6.3) manually using the RedHat documentation Rev. 1.0-28 from 03/27/2013. 
Now I wan?t to install a compute node but I don?t know how to start? What should be installed on the compute node ? only Nova and Cinder? I?m using Nova-network and not Quantum for the moment. Should I complete the keystone service-catalog on the controller with the new services on the compute node? If someone has a link or a doc for compute node installation it will be very helpful for me. Thanks a lot, Cheers, Nicolas. _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list _______________________________________________ cloud mailing list cloud at lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/cloud -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.oulevey at cern.ch Tue Apr 9 14:09:21 2013 From: thomas.oulevey at cern.ch (Thomas Oulevey) Date: Tue, 9 Apr 2013 16:09:21 +0200 Subject: [rhos-list] Grizzly, Packstack and 6.4 In-Reply-To: <515D52AC.3070600@redhat.com> References: <515D3B1B.8060801@cern.ch> <515D495F.9020609@redhat.com> <515D52AC.3070600@redhat.com> Message-ID: <51642111.6030605@cern.ch> Hi Derek, > I havn't seen this problem before, what version of puppet are you using ? I used puppet 2.6.18. Any recommendation on the version to use ? >> I didn't hit this one. After fixing the keystone issue, OpenStack >> installed fine. I added the grizzly epel repo manually rather than let >> Packstack do it (I didn't know it could do it for me :) ), but I think >> this doesn't make any difference wrt your issue. >> >> I used commit 34300029ff, so maybe you could try `git checkout >> 34300029ff && git submodule init && git submodule update`. >> I have a different issue now :-) ERROR : Error during puppet run : err: /Stage[main]/Nova::Api/Nova::Generic_service[api]/Service[nova-api]: Could not evaluate: undefined method `[]=' for nil:NilClass Please check log file /var/tmp/packstack/20130409-100330-ElHgxT/openstack-setup.log for more information Maybe I am using the wrong puppet version. -- Thomas. From red at fedoraproject.org Tue Apr 9 14:36:13 2013 From: red at fedoraproject.org (Sandro "red" Mathys) Date: Tue, 9 Apr 2013 16:36:13 +0200 Subject: [rhos-list] Grizzly, Packstack and 6.4 In-Reply-To: <51642111.6030605@cern.ch> References: <515D3B1B.8060801@cern.ch> <515D495F.9020609@redhat.com> <515D52AC.3070600@redhat.com> <51642111.6030605@cern.ch> Message-ID: On Tue, Apr 9, 2013 at 4:09 PM, Thomas Oulevey wrote: > Hi Derek, > >> I havn't seen this problem before, what version of puppet are you using ? >> > I used puppet 2.6.18. Any recommendation on the version to use ? > >> I didn't hit this one. After fixing the keystone issue, OpenStack >>> installed fine. I added the grizzly epel repo manually rather than let >>> Packstack do it (I didn't know it could do it for me :) ), but I think >>> this doesn't make any difference wrt your issue. >>> >>> I used commit 34300029ff, so maybe you could try `git checkout >>> 34300029ff && git submodule init && git submodule update`. >>> >>> I have a different issue now :-) > > ERROR : Error during puppet run : err: /Stage[main]/Nova::Api/Nova::** > Generic_service[api]/Service[**nova-api]: Could not evaluate: undefined > method `[]=' for nil:NilClass > Please check log file /var/tmp/packstack/20130409-** > 100330-ElHgxT/openstack-setup.**log for more information > > > Maybe I am using the wrong puppet version. I've just seen the same issue when using EPEL and the Grizzly side repo (i.e. no RHOS). 
But what actually helped was downgrading Puppet from puppet-2.6.18-2.el6 to puppet-2.6.18-1.el6 [1]. The difference between the two, according to the changelog, is a backport of some race condition fix only. Going to open a bug against Puppet in EPEL. [1] http://kojipkgs.fedoraproject.org//packages/puppet/2.6.18/1.el6/noarch/puppet-2.6.18-1.el6.noarch.rpm -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.oulevey at cern.ch Tue Apr 9 14:44:33 2013 From: thomas.oulevey at cern.ch (Thomas Oulevey) Date: Tue, 9 Apr 2013 16:44:33 +0200 Subject: [rhos-list] Grizzly, Packstack and 6.4 In-Reply-To: References: <515D3B1B.8060801@cern.ch> <515D495F.9020609@redhat.com> <515D52AC.3070600@redhat.com> <51642111.6030605@cern.ch> Message-ID: <51642951.3040909@cern.ch> Hi, > I've just seen the same issue when using EPEL and the Grizzly side > repo (i.e. no RHOS). But what actually helped was downgrading Puppet > from puppet-2.6.18-2.el6 to puppet-2.6.18-1.el6 [1]. The difference > between the two, according to the changelog, is a backport of some > race condition fix only. > > Going to open a bug against Puppet in EPEL. > > [1] > http://kojipkgs.fedoraproject.org//packages/puppet/2.6.18/1.el6/noarch/puppet-2.6.18-1.el6.noarch.rpm Correct, downgrading, fixes this issue. Thanks! Thomas. From red at fedoraproject.org Tue Apr 9 14:46:29 2013 From: red at fedoraproject.org (Sandro "red" Mathys) Date: Tue, 9 Apr 2013 16:46:29 +0200 Subject: [rhos-list] Grizzly, Packstack and 6.4 In-Reply-To: <51642951.3040909@cern.ch> References: <515D3B1B.8060801@cern.ch> <515D495F.9020609@redhat.com> <515D52AC.3070600@redhat.com> <51642111.6030605@cern.ch> <51642951.3040909@cern.ch> Message-ID: On Tue, Apr 9, 2013 at 4:44 PM, Thomas Oulevey wrote: > Hi, > > I've just seen the same issue when using EPEL and the Grizzly side repo >> (i.e. no RHOS). But what actually helped was downgrading Puppet from >> puppet-2.6.18-2.el6 to puppet-2.6.18-1.el6 [1]. The difference between the >> two, according to the changelog, is a backport of some race condition fix >> only. >> >> Going to open a bug against Puppet in EPEL. >> > In case someone wants to follow the progress or possibly add further details: https://bugzilla.redhat.com/show_bug.cgi?id=950066 > >> [1] http://kojipkgs.fedoraproject.**org//packages/puppet/2.6.18/1.** >> el6/noarch/puppet-2.6.18-1.**el6.noarch.rpm >> > > Correct, downgrading, fixes this issue. Thanks! > > > Thomas. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zilli at cern.ch Tue Apr 9 14:53:46 2013 From: stefano.zilli at cern.ch (Stefano Zilli) Date: Tue, 9 Apr 2013 14:53:46 +0000 Subject: [rhos-list] Grizzly, Packstack and 6.4 In-Reply-To: References: <515D3B1B.8060801@cern.ch> <515D495F.9020609@redhat.com> <515D52AC.3070600@redhat.com> <51642111.6030605@cern.ch> <51642951.3040909@cern.ch> Message-ID: <2DDF0FACD8B3CB4193010D75BFA89473AEA82F@CERNXCHG12.cern.ch> Hi, I think something similar is already tracked with https://bugzilla.redhat.com/show_bug.cgi?id=949549 Stefano. > > On Tue, Apr 9, 2013 at 4:44 PM, Thomas Oulevey wrote: > Hi, > > I've just seen the same issue when using EPEL and the Grizzly side repo (i.e. no RHOS). But what actually helped was downgrading Puppet from puppet-2.6.18-2.el6 to puppet-2.6.18-1.el6 [1]. The difference between the two, according to the changelog, is a backport of some race condition fix only. > > Going to open a bug against Puppet in EPEL. 
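For anyone wanting to apply the same workaround, the downgrade Sandro describes is a plain package downgrade; a rough sketch on RHEL/CentOS 6, using the build he links:

  # if the older build is still available in an enabled repository:
  yum downgrade puppet-2.6.18-1.el6
  # otherwise install it directly from Koji:
  rpm -Uvh --oldpackage http://kojipkgs.fedoraproject.org//packages/puppet/2.6.18/1.el6/noarch/puppet-2.6.18-1.el6.noarch.rpm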
> > In case someone wants to follow the progress or possibly add further details: > https://bugzilla.redhat.com/show_bug.cgi?id=950066 > > > [1] http://kojipkgs.fedoraproject.org//packages/puppet/2.6.18/1.el6/noarch/puppet-2.6.18-1.el6.noarch.rpm > > Correct, downgrading, fixes this issue. Thanks! > > > Thomas. > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4310 bytes Desc: not available URL: From rbryant at redhat.com Wed Apr 10 14:21:05 2013 From: rbryant at redhat.com (Russell Bryant) Date: Wed, 10 Apr 2013 10:21:05 -0400 Subject: [rhos-list] Actions command not working In-Reply-To: <1364882945.ec26fc2eb2b75aece19c70392dc744c2@mail.in.com> References: <1364882945.ec26fc2eb2b75aece19c70392dc744c2@mail.in.com> Message-ID: <51657551.1050605@redhat.com> On 04/02/2013 02:09 AM, Kumar Vaibhav wrote: > Hi, > > I am trying to use 'nova actions ' but it gives > > ERROR: n/a (HTTP 404) > > in nova/api.log I get this > > 2013-04-02 11:35:26 INFO nova.osapi_compute.wsgi.server > [req-05718f39-7d7e-4653-afe7-7b223aa5f502 nirbhayc compd] 90.3.26.52 - - > [02/Apr/2013 11:35:26] "GET > /v2/compd/servers/f301d675-20ae-42c3-8ee3-9ae6f5467a90/actions HTTP/1.1" > 404 176 0.078454 I just ran into the same thing recently. As far as I can tell, it doesn't exist at all and needs to just be removed from the nova client. Please file a bug on this. > And as mentioned on this page http://api.openstack.org/api-ref.html > > function os-instance-actions is also giving the same error. That is a feature that only exists in Grizzly and above. The current version of RHOS (based on Folsom) does not have it. -- Russell Bryant From rich.minton at lmco.com Wed Apr 10 16:04:00 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Wed, 10 Apr 2013 16:04:00 +0000 Subject: [rhos-list] Problem with Snapshots. Message-ID: I'm having an issue when I create an instance from a Snapshot image... The instance is created from the snapshot fine but in the console log I see errors when trying to get to the metadata service, host is unreachable. 2013-04-10 11:53:38,716 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: url error [[Errno 101] Network is unreachable] I also get an error regarding eth0: Bringing up interface eth0: Device eth0 does not seem to be present, delaying initialization. [FAILED] Cloud-init starts ok and I see another entry regarding networking: ci-info: +++++++++++++++++++++++Net device info++++++++++++++++++++++++ ci-info: +--------+-------+-----------+-----------+-------------------+ ci-info: | Device | Up | Address | Mask | Hw-Address | ci-info: +--------+-------+-----------+-----------+-------------------+ ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | . | ci-info: | eth1 | False | . | . | fa:16:3e:6f:24:24 | ci-info: +--------+-------+-----------+-----------+-------------------+ ci-info: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! It looks like there is an eth1 but no eth0. Looking further I found two entries in /etc/udev/rules.d/70-persisten-net.rules for eth0 and eth1. 
If I delete both lines and reboot the interfaces are created properly and the the instance can reach the metadata service and all works as it should. My question is, is there a way to keep the second NIC from being created and have eth0 get the new hardware address? Should I delete the 70-persisten-net.rules file before creating the snapshot or is there an easier way? Thank you, Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmyers at redhat.com Wed Apr 10 16:22:30 2013 From: pmyers at redhat.com (Perry Myers) Date: Wed, 10 Apr 2013 12:22:30 -0400 Subject: [rhos-list] Problem with Snapshots. In-Reply-To: References: Message-ID: <516591C6.9040808@redhat.com> On 04/10/2013 12:04 PM, Minton, Rich wrote: > I?m having an issue when I create an instance from a Snapshot image? > > > > The instance is created from the snapshot fine but in the console log I > see errors when trying to get to the metadata service, host is unreachable. > > > > 2013-04-10 11:53:38,716 - url_helper.py[WARNING]: Calling > 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed > [0/120s]: url error [[Errno 101] Network is unreachable] > > > > I also get an error regarding eth0: > > Bringing up interface eth0: Device eth0 does not seem to be present, > delaying initialization. [FAILED] > > > > Cloud-init starts ok and I see another entry regarding networking: > > > > ci-info: +++++++++++++++++++++++Net device info++++++++++++++++++++++++ > > ci-info: +--------+-------+-----------+-----------+-------------------+ > > ci-info: | Device | Up | Address | Mask | Hw-Address | > > ci-info: +--------+-------+-----------+-----------+-------------------+ > > ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | . | > > ci-info: | eth1 | False | . | . | fa:16:3e:6f:24:24 | > > ci-info: +--------+-------+-----------+-----------+-------------------+ > > ci-info: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info > failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! > > > > It looks like there is an eth1 but no eth0. Looking further I found two > entries in /etc/udev/rules.d/70-persisten-net.rules for eth0 and eth1. > If I delete both lines and reboot the interfaces are created properly > and the the instance can reach the metadata service and all works as it > should. > > > > My question is, is there a way to keep the second NIC from being created > and have eth0 get the new hardware address? Should I delete the > 70-persisten-net.rules file before creating the snapshot or is there an > easier way? Did you 'sysprep' the guest image before importing it into glance? The udev rule you mention above needs to be removed/disabled from guest images in order to prevent this I think. There is a tool called virt-sysprep that you can run on guest images prior to import and one of the default things it does is to remove that troublesome udev rule I've added the libguestfs maintainer (which virt-sysprep is part of) to provide any add'l insight here. SteveG/ChrisN, do you guys have usage of virt-sysprep for RHOS images included in either a kbase or the formal docs? Cheers, Perry From pmyers at redhat.com Wed Apr 10 16:37:46 2013 From: pmyers at redhat.com (Perry Myers) Date: Wed, 10 Apr 2013 12:37:46 -0400 Subject: [rhos-list] Problem with Snapshots. 
In-Reply-To: <516591C6.9040808@redhat.com> References: <516591C6.9040808@redhat.com> Message-ID: <5165955A.5040709@redhat.com> On 04/10/2013 12:22 PM, Perry Myers wrote: > On 04/10/2013 12:04 PM, Minton, Rich wrote: >> I?m having an issue when I create an instance from a Snapshot image? >> >> >> >> The instance is created from the snapshot fine but in the console log I >> see errors when trying to get to the metadata service, host is unreachable. >> >> >> >> 2013-04-10 11:53:38,716 - url_helper.py[WARNING]: Calling >> 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed >> [0/120s]: url error [[Errno 101] Network is unreachable] >> >> >> >> I also get an error regarding eth0: >> >> Bringing up interface eth0: Device eth0 does not seem to be present, >> delaying initialization. [FAILED] >> >> >> >> Cloud-init starts ok and I see another entry regarding networking: >> >> >> >> ci-info: +++++++++++++++++++++++Net device info++++++++++++++++++++++++ >> >> ci-info: +--------+-------+-----------+-----------+-------------------+ >> >> ci-info: | Device | Up | Address | Mask | Hw-Address | >> >> ci-info: +--------+-------+-----------+-----------+-------------------+ >> >> ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | . | >> >> ci-info: | eth1 | False | . | . | fa:16:3e:6f:24:24 | >> >> ci-info: +--------+-------+-----------+-----------+-------------------+ >> >> ci-info: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info >> failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! >> >> >> >> It looks like there is an eth1 but no eth0. Looking further I found two >> entries in /etc/udev/rules.d/70-persisten-net.rules for eth0 and eth1. >> If I delete both lines and reboot the interfaces are created properly >> and the the instance can reach the metadata service and all works as it >> should. >> >> >> >> My question is, is there a way to keep the second NIC from being created >> and have eth0 get the new hardware address? Should I delete the >> 70-persisten-net.rules file before creating the snapshot or is there an >> easier way? > > Did you 'sysprep' the guest image before importing it into glance? > > The udev rule you mention above needs to be removed/disabled from guest > images in order to prevent this I think. > > There is a tool called virt-sysprep that you can run on guest images > prior to import and one of the default things it does is to remove that > troublesome udev rule > > I've added the libguestfs maintainer (which virt-sysprep is part of) to > provide any add'l insight here. > > SteveG/ChrisN, do you guys have usage of virt-sysprep for RHOS images > included in either a kbase or the formal docs? One add'l non-intuitive thing that I should mention virt-sysprep is a command line tool that is in the libguestfs-tools paclage So on RHEL, just doing yum install libguestfs-tools should get you the binary and it has a pretty comprehensive man page From rjones at redhat.com Wed Apr 10 17:08:30 2013 From: rjones at redhat.com (Richard W.M. Jones) Date: Wed, 10 Apr 2013 18:08:30 +0100 Subject: [rhos-list] Problem with Snapshots. In-Reply-To: <516591C6.9040808@redhat.com> References: <516591C6.9040808@redhat.com> Message-ID: <20130410170829.GV1461@rhmail.home.annexia.org> On Wed, Apr 10, 2013 at 12:22:30PM -0400, Perry Myers wrote: > Did you 'sysprep' the guest image before importing it into glance? > > The udev rule you mention above needs to be removed/disabled from guest > images in order to prevent this I think. 
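A rough sketch of the pre-import step Perry is asking about, with placeholder image path and name; the glance command is the Folsom-era client syntax, and the RHEL 6 virt-sysprep is the older implementation Rich describes below:

  yum install libguestfs-tools
  virt-sysprep -a /path/to/guest-image.qcow2
  glance image-create --name "rhel6-guest" --disk-format qcow2 --container-format bare --file /path/to/guest-image.qcow2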
> > There is a tool called virt-sysprep that you can run on guest images > prior to import and one of the default things it does is to remove that > troublesome udev rule > > I've added the libguestfs maintainer (which virt-sysprep is part of) to > provide any add'l insight here. The man page for virt-sysprep is here: http://libguestfs.org/virt-sysprep.1.html Note this tool was completely rewritten from scratch in libguestfs 1.18. If you are using RHEL 6 then unfortunately you'll have the old tool which wasn't nearly so capable. To make ad-hoc changes to the persistent-net.rules file I would recommend using guestfish or virt-edit (-e option). For example: virt-edit -a /path/to/your/guest \ /etc/udev/rules.d/70-persistent-net.rules \ -e '$_ = "" if /52:54:00:01:02:03/' would delete only the persistent rule matching the given MAC address. Or: virt-edit -a /path/to/your/guest \ /etc/udev/rules.d/70-persistent-net.rules \ -e '$_ = "" if /NAME=.*eth1/' would delete only the eth1 rule. You mustn't do this on live guests. http://libguestfs.org/virt-edit.1.html#non-interactive-editing Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones virt-p2v converts physical machines to virtual machines. Boot with a live CD or over the network (PXE) and turn machines into KVM guests. http://libguestfs.org/virt-v2v From rjones at redhat.com Wed Apr 10 17:17:01 2013 From: rjones at redhat.com (Richard W.M. Jones) Date: Wed, 10 Apr 2013 18:17:01 +0100 Subject: [rhos-list] Problem with Snapshots. In-Reply-To: References: Message-ID: <20130410171701.GA26195@rhmail.home.annexia.org> On Wed, Apr 10, 2013 at 04:04:00PM +0000, Minton, Rich wrote: > My question is, is there a way to keep the second NIC from being > created and have eth0 get the new hardware address? Should I delete > the 70-persistent-net.rules file before creating the snapshot or is > there an easier way? To just delete the file you can do: guestfish -a /path/to/guest -i rm /etc/udev/rules.d/70-persistent-net.rules See: http://libguestfs.org/guestfs-recipes.1.html#delete-a-file-or-other-simple-file-operations- Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Fedora Windows cross-compiler. Compile Windows programs, test, and build Windows installers. Over 100 libraries supported. http://fedoraproject.org/wiki/MinGW From sgordon at redhat.com Wed Apr 10 20:50:07 2013 From: sgordon at redhat.com (Steve Gordon) Date: Wed, 10 Apr 2013 16:50:07 -0400 (EDT) Subject: [rhos-list] Problem with Snapshots. In-Reply-To: <516591C6.9040808@redhat.com> References: <516591C6.9040808@redhat.com> Message-ID: <665679106.1993070.1365627007598.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Perry Myers" > To: "Rich Minton" , "Richard Jones" , "Chris Negus" , > "Steve Gordon" > Cc: rhos-list at redhat.com > Sent: Thursday, April 11, 2013 2:22:30 AM > Subject: Re: [rhos-list] Problem with Snapshots. > > On 04/10/2013 12:04 PM, Minton, Rich wrote: > > I?m having an issue when I create an instance from a Snapshot image? > > > > > > > > The instance is created from the snapshot fine but in the console log I > > see errors when trying to get to the metadata service, host is unreachable. 
> > > > > > > > 2013-04-10 11:53:38,716 - url_helper.py[WARNING]: Calling > > 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed > > [0/120s]: url error [[Errno 101] Network is unreachable] > > > > > > > > I also get an error regarding eth0: > > > > Bringing up interface eth0: Device eth0 does not seem to be present, > > delaying initialization. [FAILED] > > > > > > > > Cloud-init starts ok and I see another entry regarding networking: > > > > > > > > ci-info: +++++++++++++++++++++++Net device info++++++++++++++++++++++++ > > > > ci-info: +--------+-------+-----------+-----------+-------------------+ > > > > ci-info: | Device | Up | Address | Mask | Hw-Address | > > > > ci-info: +--------+-------+-----------+-----------+-------------------+ > > > > ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | . | > > > > ci-info: | eth1 | False | . | . | fa:16:3e:6f:24:24 | > > > > ci-info: +--------+-------+-----------+-----------+-------------------+ > > > > ci-info: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info > > failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! > > > > > > > > It looks like there is an eth1 but no eth0. Looking further I found two > > entries in /etc/udev/rules.d/70-persisten-net.rules for eth0 and eth1. > > If I delete both lines and reboot the interfaces are created properly > > and the the instance can reach the metadata service and all works as it > > should. > > > > > > > > My question is, is there a way to keep the second NIC from being created > > and have eth0 get the new hardware address? Should I delete the > > 70-persisten-net.rules file before creating the snapshot or is there an > > easier way? > > Did you 'sysprep' the guest image before importing it into glance? > > The udev rule you mention above needs to be removed/disabled from guest > images in order to prevent this I think. > > There is a tool called virt-sysprep that you can run on guest images > prior to import and one of the default things it does is to remove that > troublesome udev rule > > I've added the libguestfs maintainer (which virt-sysprep is part of) to > provide any add'l insight here. > > SteveG/ChrisN, do you guys have usage of virt-sysprep for RHOS images > included in either a kbase or the formal docs? > > Cheers, > > Perry > Hi Perry, Yes, it's the last step in the procedure I wrote regarding using Oz to build the images: https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack/2/html/Getting_Started_Guide/chap-Deploying_Image_Services.html#sect-Building_Images_using_Oz There is also a more prominent admonition on the next page for users who may obtain/build their images via other means: https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack/2/html/Getting_Started_Guide/ch09s02.html Thanks, Steve From nicolas.vogel at heig-vd.ch Thu Apr 11 13:59:00 2013 From: nicolas.vogel at heig-vd.ch (Vogel Nicolas) Date: Thu, 11 Apr 2013 13:59:00 +0000 Subject: [rhos-list] how to install more compute nodes Message-ID: Hi, I just finished to install a controller node with the latest official RedHat doc (but I?m working with CentOS 6.3 and EPEL packages). I?m using nova-network with FlatDHCP and not quantum in this test. Now I wan?t to extend my cloud with more compute nodes but i?m not really sure about the different services who must be installed on each compute node. In a first time, I wan?t to centralize all my services on the controller, the computes nodes must give me just more CPU and more disk space. 
So I think that basically I just need to install nova-compute, nova-api and cinder on the compute nodes is that right? How do I configure this services on the compute node so they know that Keystone, Glance, etc.. are on the controller? If someone has nova.conf and keystone.conf example from it would be very helpful. Thanks in advance, Nicolas. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmyers at redhat.com Thu Apr 11 14:23:11 2013 From: pmyers at redhat.com (Perry Myers) Date: Thu, 11 Apr 2013 10:23:11 -0400 Subject: [rhos-list] how to install more compute nodes In-Reply-To: References: Message-ID: <5166C74F.5070502@redhat.com> On 04/11/2013 09:59 AM, Vogel Nicolas wrote: > Hi, > > > > I just finished to install a controller node with the latest official > RedHat doc (but I?m working with CentOS 6.3 and EPEL packages). > > I?m using nova-network with FlatDHCP and not quantum in this test. > > Now I wan?t to extend my cloud with more compute nodes but i?m not > really sure about the different services who must be installed on each > compute node. > > In a first time, I wan?t to centralize all my services on the > controller, the computes nodes must give me just more CPU and more disk > space. So I think that basically I just need to install nova-compute, > nova-api and cinder on the compute nodes is that right? nova-compute and cinder, but I do not think you need additional nova-api on the add'l compute nodes > How do I configure this services on the compute node so they know that > Keystone, Glance, etc.. are on the controller? Jacob, do you have a writeup around this area? Taking an existing RHOS install and adding compute nodes to it? Perry > If someone has nova.conf and keystone.conf example from it would be very > helpful. > > > > Thanks in advance, > > > > Nicolas. From jliberma at redhat.com Thu Apr 11 14:51:10 2013 From: jliberma at redhat.com (Jacob Liberman) Date: Thu, 11 Apr 2013 09:51:10 -0500 Subject: [rhos-list] how to install more compute nodes In-Reply-To: <5166C74F.5070502@redhat.com> References: <5166C74F.5070502@redhat.com> Message-ID: <5166CDDE.9010602@redhat.com> On 04/11/2013 09:23 AM, Perry Myers wrote: > On 04/11/2013 09:59 AM, Vogel Nicolas wrote: >> Hi, >> >> >> >> I just finished to install a controller node with the latest official >> RedHat doc (but I?m working with CentOS 6.3 and EPEL packages). >> >> I?m using nova-network with FlatDHCP and not quantum in this test. >> >> Now I wan?t to extend my cloud with more compute nodes but i?m not >> really sure about the different services who must be installed on each >> compute node. >> >> In a first time, I wan?t to centralize all my services on the >> controller, the computes nodes must give me just more CPU and more disk >> space. So I think that basically I just need to install nova-compute, >> nova-api and cinder on the compute nodes is that right? > nova-compute and cinder, but I do not think you need additional nova-api > on the add'l compute nodes nova-compute, nova-network (if you want multi_host/HA networking) and nova-metadata-api if you are passing any customizations to the instances during boot you can run cinder-volumes on all nodes but there are some issues. better to use a centralized cinder server or cinder backed by a distributed file system. you specify the other service endpoints in the compute node's nova.conf. > >> How do I configure this services on the compute node so they know that >> Keystone, Glance, etc.. are on the controller? 
> Jacob, do you have a writeup around this area? Taking an existing RHOS > install and adding compute nodes to it? yes, it will be publicly available in the next few weeks. i am happy to answer specific questions before the document is available. > Perry > >> If someone has nova.conf and keystone.conf example from it would be very >> helpful. >> >> Here is a nova.conf from a compute node. The controller IP (glance, keystone, cinder, nova-scheduler) is 10.16.37.100 The compute node IP (nova-compute,nova-network) is 10.16.137.102 the metadata_hostvalue may differ depending on what you are running where [DEFAULT] verbose=false connection_type=libvirt sql_connection=mysql://nova:9f63b4ec6b074b1c at 10.16.137.100/nova state_path=/var/lib/nova lock_path=/var/lib/nova/tmp glance_api_servers=10.16.137.100:9292 metadata_host=10.16.137.100 network_manager=nova.network.manager.FlatDHCPManager rootwrap_config=/etc/nova/rootwrap.conf service_down_time=60 volume_api_class=nova.volume.cinder.API auth_strategy=keystone compute_driver=libvirt.LibvirtDriver public_interface=eth0 dhcpbridge=/usr/bin/nova-dhcpbridge flat_network_bridge=br100 flat_injected=false flat_interface=eth1 floating_range=10.16.143.108/30 fixed_range=172.16.2.0/24 network_host=10.16.137.102 force_dhcp_release=true dhcp_domain=novalocal logdir=/var/log/nova rpc_backend=nova.openstack.common.rpc.impl_qpid rabbit_host=localhost qpid_hostname=10.16.137.100 libvirt_type=kvm libvirt_inject_partition=-1 novncproxy_base_url=http://10.16.137.100:6080/vnc_auto.html vncserver_listen=10.16.137.102 vncserver_proxyclient_address=10.16.137.102 vnc_enabled=true image_service=nova.image.glance.GlanceImageService multi_host = True [trusted_computing] [keystone_authtoken] thanks, jacob >> >> Thanks in advance, >> >> >> >> Nicolas. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.vogel at heig-vd.ch Thu Apr 11 15:19:17 2013 From: nicolas.vogel at heig-vd.ch (Vogel Nicolas) Date: Thu, 11 Apr 2013 15:19:17 +0000 Subject: [rhos-list] how to install more compute nodes In-Reply-To: <5166CDDE.9010602@redhat.com> References: <5166C74F.5070502@redhat.com> <5166CDDE.9010602@redhat.com> Message-ID: Thanks Jacob for this infos ! I think I must also create keystonerc_admin and keystonerc_username file on my controller and source it on demand to make my install right? Do I also have to modify or complete something on my controller node so that he knows about the new compute node? Cheers, Nicolas. From: Jacob Liberman [mailto:jliberma at redhat.com] Sent: jeudi 11 avril 2013 16:51 To: Perry Myers Cc: Vogel Nicolas; 'rhos-list at redhat.com' Subject: Re: [rhos-list] how to install more compute nodes On 04/11/2013 09:23 AM, Perry Myers wrote: On 04/11/2013 09:59 AM, Vogel Nicolas wrote: Hi, I just finished to install a controller node with the latest official RedHat doc (but I?m working with CentOS 6.3 and EPEL packages). I?m using nova-network with FlatDHCP and not quantum in this test. Now I wan?t to extend my cloud with more compute nodes but i?m not really sure about the different services who must be installed on each compute node. In a first time, I wan?t to centralize all my services on the controller, the computes nodes must give me just more CPU and more disk space. So I think that basically I just need to install nova-compute, nova-api and cinder on the compute nodes is that right? 
nova-compute and cinder, but I do not think you need additional nova-api on the add'l compute nodes nova-compute, nova-network (if you want multi_host/HA networking) and nova-metadata-api if you are passing any customizations to the instances during boot you can run cinder-volumes on all nodes but there are some issues. better to use a centralized cinder server or cinder backed by a distributed file system. you specify the other service endpoints in the compute node's nova.conf. How do I configure this services on the compute node so they know that Keystone, Glance, etc.. are on the controller? Jacob, do you have a writeup around this area? Taking an existing RHOS install and adding compute nodes to it? yes, it will be publicly available in the next few weeks. i am happy to answer specific questions before the document is available. Perry If someone has nova.conf and keystone.conf example from it would be very helpful. Here is a nova.conf from a compute node. The controller IP (glance, keystone, cinder, nova-scheduler) is 10.16.37.100 The compute node IP (nova-compute,nova-network) is 10.16.137.102 the metadata_hostvalue may differ depending on what you are running where [DEFAULT] verbose=false connection_type=libvirt sql_connection=mysql://nova:9f63b4ec6b074b1c at 10.16.137.100/nova state_path=/var/lib/nova lock_path=/var/lib/nova/tmp glance_api_servers=10.16.137.100:9292 metadata_host=10.16.137.100 network_manager=nova.network.manager.FlatDHCPManager rootwrap_config=/etc/nova/rootwrap.conf service_down_time=60 volume_api_class=nova.volume.cinder.API auth_strategy=keystone compute_driver=libvirt.LibvirtDriver public_interface=eth0 dhcpbridge=/usr/bin/nova-dhcpbridge flat_network_bridge=br100 flat_injected=false flat_interface=eth1 floating_range=10.16.143.108/30 fixed_range=172.16.2.0/24 network_host=10.16.137.102 force_dhcp_release=true dhcp_domain=novalocal logdir=/var/log/nova rpc_backend=nova.openstack.common.rpc.impl_qpid rabbit_host=localhost qpid_hostname=10.16.137.100 libvirt_type=kvm libvirt_inject_partition=-1 novncproxy_base_url=http://10.16.137.100:6080/vnc_auto.html vncserver_listen=10.16.137.102 vncserver_proxyclient_address=10.16.137.102 vnc_enabled=true image_service=nova.image.glance.GlanceImageService multi_host = True [trusted_computing] [keystone_authtoken] thanks, jacob Thanks in advance, Nicolas. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jliberma at redhat.com Thu Apr 11 15:27:35 2013 From: jliberma at redhat.com (Jacob Liberman) Date: Thu, 11 Apr 2013 10:27:35 -0500 Subject: [rhos-list] how to install more compute nodes In-Reply-To: References: <5166C74F.5070502@redhat.com> <5166CDDE.9010602@redhat.com> Message-ID: <5166D667.5030305@redhat.com> On 04/11/2013 10:19 AM, Vogel Nicolas wrote: > > Thanks Jacob for this infos ! > > I think I must also create keystonerc_admin and keystonerc_username > file on my controller and source it on demand to make my install right? > it depends on how you make the changes if you are using the command line tools then yes if you are using packstack the keystonerc will be installed automatically wherever you install the client tools you can also download the user environment vars from the horizon dashboard > Do I also have to modify or complete something on my controller node > so that he knows about the new compute node? > no, just start the services on the compute node. it will add itself if everything is configured correctly. 
you can verify with "nova-manage service list" > Cheers, > > Nicolas. > > *From:*Jacob Liberman [mailto:jliberma at redhat.com] > *Sent:* jeudi 11 avril 2013 16:51 > *To:* Perry Myers > *Cc:* Vogel Nicolas; 'rhos-list at redhat.com' > *Subject:* Re: [rhos-list] how to install more compute nodes > > On 04/11/2013 09:23 AM, Perry Myers wrote: > > On 04/11/2013 09:59 AM, Vogel Nicolas wrote: > > Hi, > > > > > > > > I just finished to install a controller node with the latest official > > RedHat doc (but I?m working with CentOS 6.3 and EPEL packages). > > > > I?m using nova-network with FlatDHCP and not quantum in this test. > > > > Now I wan?t to extend my cloud with more compute nodes but i?m not > > really sure about the different services who must be installed on each > > compute node. > > > > In a first time, I wan?t to centralize all my services on the > > controller, the computes nodes must give me just more CPU and more disk > > space. So I think that basically I just need to install nova-compute, > > nova-api and cinder on the compute nodes is that right? > > > > nova-compute and cinder, but I do not think you need additional nova-api > > on the add'l compute nodes > > > nova-compute, nova-network (if you want multi_host/HA networking) and > nova-metadata-api if you are passing any customizations to the > instances during boot > > you can run cinder-volumes on all nodes but there are some issues. > better to use a centralized cinder server or cinder backed by a > distributed file system. > > you specify the other service endpoints in the compute node's nova.conf. > > > > > > How do I configure this services on the compute node so they know that > > Keystone, Glance, etc.. are on the controller? > > > > Jacob, do you have a writeup around this area? Taking an existing RHOS > > install and adding compute nodes to it? > > > yes, it will be publicly available in the next few weeks. > > i am happy to answer specific questions before the document is available. > > > > > Perry > > > > If someone has nova.conf and keystone.conf example from it would be very > > helpful. > > > > > > > > Here is a nova.conf from a compute node. 
> > The controller IP (glance, keystone, cinder, nova-scheduler) is > 10.16.37.100 > The compute node IP (nova-compute,nova-network) is 10.16.137.102 > > the metadata_hostvalue may differ depending on what you are running where > > > [DEFAULT] > verbose=false > connection_type=libvirt > sql_connection=mysql://nova:9f63b4ec6b074b1c at 10.16.137.100/nova > > state_path=/var/lib/nova > lock_path=/var/lib/nova/tmp > glance_api_servers=10.16.137.100:9292 > metadata_host=10.16.137.100 > network_manager=nova.network.manager.FlatDHCPManager > rootwrap_config=/etc/nova/rootwrap.conf > service_down_time=60 > volume_api_class=nova.volume.cinder.API > auth_strategy=keystone > compute_driver=libvirt.LibvirtDriver > public_interface=eth0 > dhcpbridge=/usr/bin/nova-dhcpbridge > flat_network_bridge=br100 > flat_injected=false > flat_interface=eth1 > floating_range=10.16.143.108/30 > fixed_range=172.16.2.0/24 > network_host=10.16.137.102 > force_dhcp_release=true > dhcp_domain=novalocal > logdir=/var/log/nova > rpc_backend=nova.openstack.common.rpc.impl_qpid > rabbit_host=localhost > qpid_hostname=10.16.137.100 > libvirt_type=kvm > libvirt_inject_partition=-1 > novncproxy_base_url=http://10.16.137.100:6080/vnc_auto.html > vncserver_listen=10.16.137.102 > vncserver_proxyclient_address=10.16.137.102 > vnc_enabled=true > image_service=nova.image.glance.GlanceImageService > multi_host = True > [trusted_computing] > [keystone_authtoken] > > > thanks, jacob > > > > > > > > Thanks in advance, > > > > > > > > Nicolas. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.vogel at heig-vd.ch Fri Apr 12 08:34:37 2013 From: nicolas.vogel at heig-vd.ch (Vogel Nicolas) Date: Fri, 12 Apr 2013 08:34:37 +0000 Subject: [rhos-list] how to install more compute nodes In-Reply-To: <5166D667.5030305@redhat.com> References: <5166C74F.5070502@redhat.com> <5166CDDE.9010602@redhat.com> <5166D667.5030305@redhat.com> Message-ID: Just 1 more question about floating IP : For my controller, I used the em1 interface for management purpose and for the communication between all openstack services (subnet 10.192.1.x./24). I configured then em1 as my ?flat interface? for my private VM subnet (192.168.x.x/24) and the demonetbr0 bridge. My em2 interface is the ?public interface?. Is that right? Em2 has already a fixed IP address and I want to allocate a floating IP from the same subnet to em2. Thanks, Nico. From: Jacob Liberman [mailto:jliberma at redhat.com] Sent: jeudi 11 avril 2013 17:28 To: Vogel Nicolas Cc: 'rhos-list at redhat.com'; 'Perry Myers' Subject: Re: [rhos-list] how to install more compute nodes On 04/11/2013 10:19 AM, Vogel Nicolas wrote: Thanks Jacob for this infos ! I think I must also create keystonerc_admin and keystonerc_username file on my controller and source it on demand to make my install right? it depends on how you make the changes if you are using the command line tools then yes if you are using packstack the keystonerc will be installed automatically wherever you install the client tools you can also download the user environment vars from the horizon dashboard Do I also have to modify or complete something on my controller node so that he knows about the new compute node? no, just start the services on the compute node. it will add itself if everything is configured correctly. you can verify with "nova-manage service list" Cheers, Nicolas. 
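For reference, the check mentioned just above ("nova-manage service list") produces output along these lines once the second node has registered; the hostnames and timestamps here are illustrative only, and a ":-)" in the State column means the service is checking in:

nova-manage service list
Binary           Host         Zone  Status   State  Updated_At
nova-scheduler   controller   nova  enabled  :-)    2013-04-12 09:00:01
nova-compute     compute-01   nova  enabled  :-)    2013-04-12 09:00:05
nova-network     compute-01   nova  enabled  :-)    2013-04-12 09:00:07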
From: Jacob Liberman [mailto:jliberma at redhat.com] Sent: jeudi 11 avril 2013 16:51 To: Perry Myers Cc: Vogel Nicolas; 'rhos-list at redhat.com' Subject: Re: [rhos-list] how to install more compute nodes On 04/11/2013 09:23 AM, Perry Myers wrote: On 04/11/2013 09:59 AM, Vogel Nicolas wrote: Hi, I just finished to install a controller node with the latest official RedHat doc (but I?m working with CentOS 6.3 and EPEL packages). I?m using nova-network with FlatDHCP and not quantum in this test. Now I wan?t to extend my cloud with more compute nodes but i?m not really sure about the different services who must be installed on each compute node. In a first time, I wan?t to centralize all my services on the controller, the computes nodes must give me just more CPU and more disk space. So I think that basically I just need to install nova-compute, nova-api and cinder on the compute nodes is that right? nova-compute and cinder, but I do not think you need additional nova-api on the add'l compute nodes nova-compute, nova-network (if you want multi_host/HA networking) and nova-metadata-api if you are passing any customizations to the instances during boot you can run cinder-volumes on all nodes but there are some issues. better to use a centralized cinder server or cinder backed by a distributed file system. you specify the other service endpoints in the compute node's nova.conf. How do I configure this services on the compute node so they know that Keystone, Glance, etc.. are on the controller? Jacob, do you have a writeup around this area? Taking an existing RHOS install and adding compute nodes to it? yes, it will be publicly available in the next few weeks. i am happy to answer specific questions before the document is available. Perry If someone has nova.conf and keystone.conf example from it would be very helpful. Here is a nova.conf from a compute node. The controller IP (glance, keystone, cinder, nova-scheduler) is 10.16.37.100 The compute node IP (nova-compute,nova-network) is 10.16.137.102 the metadata_hostvalue may differ depending on what you are running where [DEFAULT] verbose=false connection_type=libvirt sql_connection=mysql://nova:9f63b4ec6b074b1c at 10.16.137.100/nova state_path=/var/lib/nova lock_path=/var/lib/nova/tmp glance_api_servers=10.16.137.100:9292 metadata_host=10.16.137.100 network_manager=nova.network.manager.FlatDHCPManager rootwrap_config=/etc/nova/rootwrap.conf service_down_time=60 volume_api_class=nova.volume.cinder.API auth_strategy=keystone compute_driver=libvirt.LibvirtDriver public_interface=eth0 dhcpbridge=/usr/bin/nova-dhcpbridge flat_network_bridge=br100 flat_injected=false flat_interface=eth1 floating_range=10.16.143.108/30 fixed_range=172.16.2.0/24 network_host=10.16.137.102 force_dhcp_release=true dhcp_domain=novalocal logdir=/var/log/nova rpc_backend=nova.openstack.common.rpc.impl_qpid rabbit_host=localhost qpid_hostname=10.16.137.100 libvirt_type=kvm libvirt_inject_partition=-1 novncproxy_base_url=http://10.16.137.100:6080/vnc_auto.html vncserver_listen=10.16.137.102 vncserver_proxyclient_address=10.16.137.102 vnc_enabled=true image_service=nova.image.glance.GlanceImageService multi_host = True [trusted_computing] [keystone_authtoken] thanks, jacob Thanks in advance, Nicolas. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jliberma at redhat.com Fri Apr 12 13:36:20 2013 From: jliberma at redhat.com (Jacob Liberman) Date: Fri, 12 Apr 2013 08:36:20 -0500 Subject: [rhos-list] how to install more compute nodes In-Reply-To: References: <5166C74F.5070502@redhat.com> <5166CDDE.9010602@redhat.com> <5166D667.5030305@redhat.com> Message-ID: <51680DD4.6000406@redhat.com> On 04/12/2013 03:34 AM, Vogel Nicolas wrote: > > Just 1 more question about floating IP : > > For my controller, I used the em1 interface for management purpose and > for the communication between all openstack services (subnet > 10.192.1.x./24). I configured then em1 as my ?flat interface? for my > private VM subnet (192.168.x.x/24) and the demonetbr0 bridge. My em2 > interface is the ?public interface?. Is that right? Em2 has already a > fixed IP address and I want to allocate a floating IP from the same > subnet to em2. > yes, if you want the instance to have a public address, the floating IP can be on the same network as the public interface. > Thanks, > > Nico. > > *From:*Jacob Liberman [mailto:jliberma at redhat.com] > *Sent:* jeudi 11 avril 2013 17:28 > *To:* Vogel Nicolas > *Cc:* 'rhos-list at redhat.com'; 'Perry Myers' > *Subject:* Re: [rhos-list] how to install more compute nodes > > On 04/11/2013 10:19 AM, Vogel Nicolas wrote: > > Thanks Jacob for this infos ! > > I think I must also create keystonerc_admin and > keystonerc_username file on my controller and source it on demand > to make my install right? > > > it depends on how you make the changes > > if you are using the command line tools then yes > > if you are using packstack the keystonerc will be installed > automatically wherever you install the client tools > > you can also download the user environment vars from the horizon dashboard > > > Do I also have to modify or complete something on my controller > node so that he knows about the new compute node? > > > no, just start the services on the compute node. it will add itself if > everything is configured correctly. > > you can verify with "nova-manage service list" > > > Cheers, > > Nicolas. > > *From:*Jacob Liberman [mailto:jliberma at redhat.com] > *Sent:* jeudi 11 avril 2013 16:51 > *To:* Perry Myers > *Cc:* Vogel Nicolas; 'rhos-list at redhat.com > ' > *Subject:* Re: [rhos-list] how to install more compute nodes > > On 04/11/2013 09:23 AM, Perry Myers wrote: > > On 04/11/2013 09:59 AM, Vogel Nicolas wrote: > > Hi, > > > > > > > > I just finished to install a controller node with the latest official > > RedHat doc (but I?m working with CentOS 6.3 and EPEL packages). > > > > I?m using nova-network with FlatDHCP and not quantum in this test. > > > > Now I wan?t to extend my cloud with more compute nodes but i?m not > > really sure about the different services who must be installed on each > > compute node. > > > > In a first time, I wan?t to centralize all my services on the > > controller, the computes nodes must give me just more CPU and more disk > > space. So I think that basically I just need to install nova-compute, > > nova-api and cinder on the compute nodes is that right? > > > > nova-compute and cinder, but I do not think you need additional nova-api > > on the add'l compute nodes > > > nova-compute, nova-network (if you want multi_host/HA networking) > and nova-metadata-api if you are passing any customizations to the > instances during boot > > you can run cinder-volumes on all nodes but there are some issues. > better to use a centralized cinder server or cinder backed by a > distributed file system. 
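A side note on the fixed and floating ranges that appear in the quoted nova.conf and in the floating IP question answered above: with Folsom-era nova-network they are typically created once, from the controller, with nova-manage. The commands below are only a sketch reusing the values from Jacob's example; flag names can vary between releases, so check "nova-manage network create --help" on your version first.

# create the fixed (tenant) range handed out by FlatDHCP over br100/eth1
nova-manage network create --label=private --fixed_range_v4=172.16.2.0/24 \
    --num_networks=1 --network_size=256 --bridge=br100 --bridge_interface=eth1 --multi_host=T
# register the floating range that lives on the public interface
nova-manage floating create --ip_range=10.16.143.108/30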
> > you specify the other service endpoints in the compute node's > nova.conf. > > > > > > > How do I configure this services on the compute node so they know that > > Keystone, Glance, etc.. are on the controller? > > > > Jacob, do you have a writeup around this area? Taking an existing RHOS > > install and adding compute nodes to it? > > > yes, it will be publicly available in the next few weeks. > > i am happy to answer specific questions before the document is > available. > > > > > > Perry > > > > If someone has nova.conf and keystone.conf example from it would be very > > helpful. > > > > > > > > Here is a nova.conf from a compute node. > > The controller IP (glance, keystone, cinder, nova-scheduler) is > 10.16.37.100 > The compute node IP (nova-compute,nova-network) is 10.16.137.102 > > the metadata_hostvalue may differ depending on what you are > running where > > > [DEFAULT] > verbose=false > connection_type=libvirt > sql_connection=mysql://nova:9f63b4ec6b074b1c at 10.16.137.100/nova > > state_path=/var/lib/nova > lock_path=/var/lib/nova/tmp > glance_api_servers=10.16.137.100:9292 > metadata_host=10.16.137.100 > network_manager=nova.network.manager.FlatDHCPManager > rootwrap_config=/etc/nova/rootwrap.conf > service_down_time=60 > volume_api_class=nova.volume.cinder.API > auth_strategy=keystone > compute_driver=libvirt.LibvirtDriver > public_interface=eth0 > dhcpbridge=/usr/bin/nova-dhcpbridge > flat_network_bridge=br100 > flat_injected=false > flat_interface=eth1 > floating_range=10.16.143.108/30 > fixed_range=172.16.2.0/24 > network_host=10.16.137.102 > force_dhcp_release=true > dhcp_domain=novalocal > logdir=/var/log/nova > rpc_backend=nova.openstack.common.rpc.impl_qpid > rabbit_host=localhost > qpid_hostname=10.16.137.100 > libvirt_type=kvm > libvirt_inject_partition=-1 > novncproxy_base_url=http://10.16.137.100:6080/vnc_auto.html > vncserver_listen=10.16.137.102 > vncserver_proxyclient_address=10.16.137.102 > vnc_enabled=true > image_service=nova.image.glance.GlanceImageService > multi_host = True > [trusted_computing] > [keystone_authtoken] > > > thanks, jacob > > > > > > > > > Thanks in advance, > > > > > > > > Nicolas. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.vogel at heig-vd.ch Fri Apr 12 15:04:04 2013 From: nicolas.vogel at heig-vd.ch (Vogel Nicolas) Date: Fri, 12 Apr 2013 15:04:04 +0000 Subject: [rhos-list] how to install more compute nodes In-Reply-To: <51680DD4.6000406@redhat.com> References: <5166C74F.5070502@redhat.com> <5166CDDE.9010602@redhat.com> <5166D667.5030305@redhat.com> <51680DD4.6000406@redhat.com> Message-ID: Ok thanks. I have some problems with the public interface, but I think it shouldn?t be a big problem to solve. So for my compute node, the only thing I need to install is ?yum install openstack-nova-compute? and nothing else (I will try Cinder later if it isn?t stable yet) ? After that just modify the nova.conf , create the private subnet with the bridge interface and it should work? I will begin with that and give feedback about it. Cheers, Nicolas. From: Jacob Liberman [mailto:jliberma at redhat.com] Sent: vendredi 12 avril 2013 15:36 To: Vogel Nicolas Cc: 'rhos-list at redhat.com' Subject: Re: [rhos-list] how to install more compute nodes On 04/12/2013 03:34 AM, Vogel Nicolas wrote: Just 1 more question about floating IP : For my controller, I used the em1 interface for management purpose and for the communication between all openstack services (subnet 10.192.1.x./24). 
I configured then em1 as my ?flat interface? for my private VM subnet (192.168.x.x/24) and the demonetbr0 bridge. My em2 interface is the ?public interface?. Is that right? Em2 has already a fixed IP address and I want to allocate a floating IP from the same subnet to em2. yes, if you want the instance to have a public address, the floating IP can be on the same network as the public interface. Thanks, Nico. From: Jacob Liberman [mailto:jliberma at redhat.com] Sent: jeudi 11 avril 2013 17:28 To: Vogel Nicolas Cc: 'rhos-list at redhat.com'; 'Perry Myers' Subject: Re: [rhos-list] how to install more compute nodes On 04/11/2013 10:19 AM, Vogel Nicolas wrote: Thanks Jacob for this infos ! I think I must also create keystonerc_admin and keystonerc_username file on my controller and source it on demand to make my install right? it depends on how you make the changes if you are using the command line tools then yes if you are using packstack the keystonerc will be installed automatically wherever you install the client tools you can also download the user environment vars from the horizon dashboard Do I also have to modify or complete something on my controller node so that he knows about the new compute node? no, just start the services on the compute node. it will add itself if everything is configured correctly. you can verify with "nova-manage service list" Cheers, Nicolas. From: Jacob Liberman [mailto:jliberma at redhat.com] Sent: jeudi 11 avril 2013 16:51 To: Perry Myers Cc: Vogel Nicolas; 'rhos-list at redhat.com' Subject: Re: [rhos-list] how to install more compute nodes On 04/11/2013 09:23 AM, Perry Myers wrote: On 04/11/2013 09:59 AM, Vogel Nicolas wrote: Hi, I just finished to install a controller node with the latest official RedHat doc (but I?m working with CentOS 6.3 and EPEL packages). I?m using nova-network with FlatDHCP and not quantum in this test. Now I wan?t to extend my cloud with more compute nodes but i?m not really sure about the different services who must be installed on each compute node. In a first time, I wan?t to centralize all my services on the controller, the computes nodes must give me just more CPU and more disk space. So I think that basically I just need to install nova-compute, nova-api and cinder on the compute nodes is that right? nova-compute and cinder, but I do not think you need additional nova-api on the add'l compute nodes nova-compute, nova-network (if you want multi_host/HA networking) and nova-metadata-api if you are passing any customizations to the instances during boot you can run cinder-volumes on all nodes but there are some issues. better to use a centralized cinder server or cinder backed by a distributed file system. you specify the other service endpoints in the compute node's nova.conf. How do I configure this services on the compute node so they know that Keystone, Glance, etc.. are on the controller? Jacob, do you have a writeup around this area? Taking an existing RHOS install and adding compute nodes to it? yes, it will be publicly available in the next few weeks. i am happy to answer specific questions before the document is available. Perry If someone has nova.conf and keystone.conf example from it would be very helpful. Here is a nova.conf from a compute node. 
The controller IP (glance, keystone, cinder, nova-scheduler) is 10.16.37.100 The compute node IP (nova-compute,nova-network) is 10.16.137.102 the metadata_hostvalue may differ depending on what you are running where [DEFAULT] verbose=false connection_type=libvirt sql_connection=mysql://nova:9f63b4ec6b074b1c at 10.16.137.100/nova state_path=/var/lib/nova lock_path=/var/lib/nova/tmp glance_api_servers=10.16.137.100:9292 metadata_host=10.16.137.100 network_manager=nova.network.manager.FlatDHCPManager rootwrap_config=/etc/nova/rootwrap.conf service_down_time=60 volume_api_class=nova.volume.cinder.API auth_strategy=keystone compute_driver=libvirt.LibvirtDriver public_interface=eth0 dhcpbridge=/usr/bin/nova-dhcpbridge flat_network_bridge=br100 flat_injected=false flat_interface=eth1 floating_range=10.16.143.108/30 fixed_range=172.16.2.0/24 network_host=10.16.137.102 force_dhcp_release=true dhcp_domain=novalocal logdir=/var/log/nova rpc_backend=nova.openstack.common.rpc.impl_qpid rabbit_host=localhost qpid_hostname=10.16.137.100 libvirt_type=kvm libvirt_inject_partition=-1 novncproxy_base_url=http://10.16.137.100:6080/vnc_auto.html vncserver_listen=10.16.137.102 vncserver_proxyclient_address=10.16.137.102 vnc_enabled=true image_service=nova.image.glance.GlanceImageService multi_host = True [trusted_computing] [keystone_authtoken] thanks, jacob Thanks in advance, Nicolas. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahumbe at redhat.com Mon Apr 15 12:09:48 2013 From: ahumbe at redhat.com (Ashish Humbe) Date: Mon, 15 Apr 2013 17:39:48 +0530 Subject: [rhos-list] Is OpenShift & OpenStack compliance with Federal Risk Assessment Program (FedRAMP)? Message-ID: <516BEE0C.3090405@redhat.com> Hello, Customer (Uspto) is implementing OpenShift in their environment, they requested details on - Is OpenShift compliance with Federal Risk Assessment Program (FedRAMP)? Do we have any documentation or details on this ? Thanks, Ashish From rich.minton at lmco.com Wed Apr 17 17:33:15 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Wed, 17 Apr 2013 17:33:15 +0000 Subject: [rhos-list] Quantum config questions. Message-ID: Question for you smart people... I have Openstack configured to use Quantum networking. I thought I had all the entries needed in nova.conf but when I run "nova-manage config list" some of the values aren't getting picked up from my nova.conf file. Also, based on the fact that I'm using Quantum Networking are there additional config items that I should uncomment in nova.conf? 
nova-manage config list fake_network = False network_topic = network instance_dns_manager = nova.network.dns_driver.DNSDriver floating_ip_dns_manager = nova.network.dns_driver.DNSDriver network_manager = nova.network.manager.FlatDHCPManager network_driver = nova.network.linux_net network_api_class = nova.network.api.API networks_path = /var/lib/nova/networks linuxnet_interface_driver = nova.network.linux_net.LinuxBridgeInterfaceDriver flat_network_dns = 8.8.4.4 num_networks = 1 network_host = localhost l3_lib = nova.network.l3.LinuxNetL3 nova.conf # fake_network=false # network_topic=network # instance_dns_manager=nova.network.dns_driver.DNSDriver # floating_ip_dns_manager=nova.network.dns_driver.DNSDriver # network_manager=nova.network.manager.FlatDHCPManager # network_driver=nova.network.linux_net network_api_class=nova.network.quantumv2.api.API # networks_path=$state_path/networks # linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver # flat_network_dns=8.8.4.4 # num_networks=1 # network_size=256 # network_host=nova # l3_lib=nova.network.l3.LinuxNetL3 Thanks, Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronac07 at gmail.com Wed Apr 17 17:40:14 2013 From: ronac07 at gmail.com (Ronald Cronenwett) Date: Wed, 17 Apr 2013 13:40:14 -0400 Subject: [rhos-list] Red Hat Distribution OpenStack (RDO) Message-ID: I saw this was announced at the Openstack Summit. Does that mean the RHOS beta preview is ending? Or will the RHOS entitlement be updated to Grizzly? Just curious which way I should go with some test systems I've set up. The latest I've been working with was to implement Grizzly from EPEL on aCentos 6.4 rebuild. But I still have my RHEL systems that came with the preview running Folsom. Thanks Ron Cronenwett -------------- next part -------------- An HTML attachment was scrubbed... URL: From dneary at redhat.com Wed Apr 17 19:20:39 2013 From: dneary at redhat.com (Dave Neary) Date: Wed, 17 Apr 2013 12:20:39 -0700 Subject: [rhos-list] Announcing RDO Message-ID: <516EF607.2030506@redhat.com> Hi everyone, The RDO community site is now live! RDO is Red Hat's community-supported distribution of OpenStack for Red Hat Enterprise Linux and its clones, and for Fedora. The site is now online at: http://openstack.redhat.com What we've announced is two things: * We are providing well integrated, easy to install packages of OpenStack Grizzly for Red Hat Enterprise Linux 6.4, and equivalent versions of CentOS, Scientific Linux, etc, and for Fedora 18. * We have released a website at openstack.redhat.com to grow a community of OpenStack users on Red Hat platforms If you are interested in trying out OpenStack Grizzly on RHEL, or other Enterprise Linux distributions, then you are welcome to install it, join our forums and share your experiences. For those who prefer mailing lists to forums, we also have a mailing list, rdo-list: https://www.redhat.com/mailman/listinfo/rdo-list What does this mean for Red Hat OpenStack users, and subscribers to rhos-list? The short answer is that this adds a new option for you. If you would like to install a community supported OpenStack Grizzly distribution on Red Hat Enterprise Linux, CentOS or Scientific Linux in anticipation of a future Red Hat supported Grizzly-based product, then RDO is a good choice. 
If you are interested in deploying enterprise-hardened Folsom on Red Hat Enterprise Linux, then Red Hat OpenStack early adopter Edition is a great choice, and rhos-list is the best place to get help with that. You can read more about the RDO announcement at http://www.redhat.com/about/news/press-archive/2013/4/red-hat-advances-its-openstack-enterprise-and-community-technologies-and-roadmap Thanks, Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From pmyers at redhat.com Thu Apr 18 16:28:06 2013 From: pmyers at redhat.com (Perry Myers) Date: Thu, 18 Apr 2013 09:28:06 -0700 Subject: [rhos-list] Red Hat Distribution OpenStack (RDO) In-Reply-To: References: Message-ID: <51701F16.8020000@redhat.com> On 04/17/2013 10:40 AM, Ronald Cronenwett wrote: > I saw this was announced at the Openstack Summit. Does that mean the > RHOS beta preview is ending?Or will the RHOS entitlement be updated to > Grizzly? Just curious which way I should go with some test systems I've > set up. The latest I've been working with was to implement Grizzly from > EPEL on a Centos 6.4 rebuild.But I still have my RHEL systems that came > with the preview running Folsom. Good questions :) Let me outline our release cadence in detail, starting with RHOS 1.0. That might make things more clear. Aug 2012: RHOS 1.0 Preview (Essex) available for RHEL 6.3 hosts Nov 2012: RHOS 2.0 Preview (Folsom) available for RHEL 6.3 hosts Feb 2013: RHOS 2.1 Preview (Folsom) available for RHEL 6.4 hosts Apr 2013: RHOS 2.1 released as limited availability via our early adopter program All of the above releases were made available through an evaluation program, which you can sign up for on http://redhat.com/openstack Even though RHOS 2.1 is released (and therefore the Preview is ended), you can still sign up for evaluation licenses and get access to the official packages (and updates) that way. And your existing evaluation with access to the RHOS 2.1 Preview repository seamlessly should transition to the formal RHOS 2.1 channel (in fact, the channel is the same) Apr 2013: RDO Grizzly available for EL 6.4+ hosts (and F18) What we will do in the next few weeks, is take the RDO Grizzly EL packages as a starting point, to create a RHOS 3.0 Preview. So initially RDO Grizzly for EL and RHOS 3.0 will be the same. RDO Grizzly will march forward, absorbing updates from the Grizzly upstream stable branches. RHOS 3.0 Preview will also march forward, absorbing updates from the upstream Grizzly stable branches, but it will also selectively incorporate patches from the upstream trunk (Havana) for bug fixes and enhancements that are not invasive. When we release RHOS 3.0 GA later this summer (approx Aug 2013), the RHOS 3.0 Preview will end and be replaced by the GA packages. But even after RHOS 3.0 transitions from Preview to GA, we'll still provide the process to sign up for evaluation licenses. One comment on this: > The latest I've been working with was to implement Grizzly from > EPEL on a Centos 6.4 rebuild Grizzly is not in EPEL. It's only available via the repositories as part of the RDO community. The packages in EPEL are still Folsom based, and there are no plans to update those to Grizzly. 
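For anyone who wants to try that route, enabling the RDO Grizzly repository is roughly a two-step affair. The release-RPM URL below is pieced together from the file name and directory mentioned later in this thread, so treat it as an assumption and take the canonical link from openstack.redhat.com:

# pull in the repo definition
yum install -y http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly-1.noarch.rpm
# then install components by hand, or let packstack do an all-in-one test install
yum install -y openstack-packstack
packstack --allinone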
Let me know if those clarifications help, or if you have follow up questions :) Perry From ronac07 at gmail.com Thu Apr 18 17:16:04 2013 From: ronac07 at gmail.com (Ronald Cronenwett) Date: Thu, 18 Apr 2013 13:16:04 -0400 Subject: [rhos-list] Red Hat Distribution OpenStack (RDO) In-Reply-To: <51701F16.8020000@redhat.com> References: <51701F16.8020000@redhat.com> Message-ID: Perry, Thanks for the clarification. That's good information. Just one question. On my Centos builds I've loaded Openstack and Packstack from: [epel-openstack-grizzly] name=OpenStack Grizzly Repository for EPEL 6 baseurl= http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6 enabled=1 skip_if_unavailable=1 gpgcheck=0 priority=98 What is this repo and how does it relate to RDO grizzly? Would there be any issues if I switch to RDO repos? Thanks Ron On Thu, Apr 18, 2013 at 12:28 PM, Perry Myers wrote: > On 04/17/2013 10:40 AM, Ronald Cronenwett wrote: > > I saw this was announced at the Openstack Summit. Does that mean the > > RHOS beta preview is ending?Or will the RHOS entitlement be updated to > > Grizzly? Just curious which way I should go with some test systems I've > > set up. The latest I've been working with was to implement Grizzly from > > EPEL on a Centos 6.4 rebuild.But I still have my RHEL systems that came > > with the preview running Folsom. > > Good questions :) > > Let me outline our release cadence in detail, starting with RHOS 1.0. > That might make things more clear. > > Aug 2012: RHOS 1.0 Preview (Essex) available for RHEL 6.3 hosts > Nov 2012: RHOS 2.0 Preview (Folsom) available for RHEL 6.3 hosts > Feb 2013: RHOS 2.1 Preview (Folsom) available for RHEL 6.4 hosts > Apr 2013: RHOS 2.1 released as limited availability via our early > adopter program > > All of the above releases were made available through an evaluation > program, which you can sign up for on http://redhat.com/openstack > > Even though RHOS 2.1 is released (and therefore the Preview is ended), > you can still sign up for evaluation licenses and get access to the > official packages (and updates) that way. And your existing evaluation > with access to the RHOS 2.1 Preview repository seamlessly should > transition to the formal RHOS 2.1 channel (in fact, the channel is the > same) > > Apr 2013: RDO Grizzly available for EL 6.4+ hosts (and F18) > > What we will do in the next few weeks, is take the RDO Grizzly EL > packages as a starting point, to create a RHOS 3.0 Preview. > > So initially RDO Grizzly for EL and RHOS 3.0 will be the same. RDO > Grizzly will march forward, absorbing updates from the Grizzly upstream > stable branches. > > RHOS 3.0 Preview will also march forward, absorbing updates from the > upstream Grizzly stable branches, but it will also selectively > incorporate patches from the upstream trunk (Havana) for bug fixes and > enhancements that are not invasive. > > When we release RHOS 3.0 GA later this summer (approx Aug 2013), the > RHOS 3.0 Preview will end and be replaced by the GA packages. > > But even after RHOS 3.0 transitions from Preview to GA, we'll still > provide the process to sign up for evaluation licenses. > > One comment on this: > > The latest I've been working with was to implement Grizzly from > > EPEL on a Centos 6.4 rebuild > > Grizzly is not in EPEL. It's only available via the repositories as > part of the RDO community. The packages in EPEL are still Folsom based, > and there are no plans to update those to Grizzly. 
> > Let me know if those clarifications help, or if you have follow up > questions :) > > Perry > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmyers at redhat.com Thu Apr 18 16:31:17 2013 From: pmyers at redhat.com (Perry Myers) Date: Thu, 18 Apr 2013 09:31:17 -0700 Subject: [rhos-list] Quantum config questions. In-Reply-To: References: Message-ID: <51701FD5.1000909@redhat.com> On 04/17/2013 10:33 AM, Minton, Rich wrote: > Question for you smart people? I don't qualify for that comment, but I've included the other quantum developers on cc: to weigh in :) Just to note though, the entire quantum development team is presently at OpenStack Summit in Portland, busy trying to figure out what we're going to do in Havana, so responses may be slightly delayed. Perry > > > I have Openstack configured to use Quantum networking. I thought I had > all the entries needed in nova.conf but when I run ?nova-manage config > list? some of the values aren?t getting picked up from my nova.conf > file. Also, based on the fact that I?m using Quantum Networking are > there additional config items that I should uncomment in nova.conf? > > > > nova-manage config list > > > > fake_network = False > > network_topic = network > > instance_dns_manager = nova.network.dns_driver.DNSDriver > > floating_ip_dns_manager = nova.network.dns_driver.DNSDriver > > network_manager = nova.network.manager.FlatDHCPManager > > network_driver = nova.network.linux_net > > network_api_class = nova.network.api.API > > networks_path = /var/lib/nova/networks > > linuxnet_interface_driver = > nova.network.linux_net.LinuxBridgeInterfaceDriver > > flat_network_dns = 8.8.4.4 > > num_networks = 1 > > network_host = localhost > > l3_lib = nova.network.l3.LinuxNetL3 > > > > nova.conf > > > > # fake_network=false > > # network_topic=network > > # instance_dns_manager=nova.network.dns_driver.DNSDriver > > # floating_ip_dns_manager=nova.network.dns_driver.DNSDriver > > # network_manager=nova.network.manager.FlatDHCPManager > > # network_driver=nova.network.linux_net > > network_api_class=nova.network.quantumv2.api.API > > # networks_path=$state_path/networks > > # > linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver > > # flat_network_dns=8.8.4.4 > > # num_networks=1 > > # network_size=256 > > # network_host=nova > > # l3_lib=nova.network.l3.LinuxNetL3 > > > > Thanks, > > Rick > > > > _Richard Minton_ > > LMICC Systems Administrator > > 4000 Geerdes Blvd, 13D31 > > King of Prussia, PA 19406 > > Phone: 610-354-5482 > > > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > From ronac07 at gmail.com Thu Apr 18 18:15:14 2013 From: ronac07 at gmail.com (Ronald Cronenwett) Date: Thu, 18 Apr 2013 14:15:14 -0400 Subject: [rhos-list] Red Hat Distribution OpenStack (RDO) In-Reply-To: References: <51701F16.8020000@redhat.com> Message-ID: I just pulled down rdo-release-grizzly-1.noarch.rpm and see it is pointing to the same thing as epel-openstack-grizzly.repo. So I've been using RDO all along :) Browsing to http://rdo.fedorapeople.org/openstack/openstack-grizzly/ makes it clear. But I think I got to that directory from a different URL originally. Thanks again for the release info. Ron On Thu, Apr 18, 2013 at 1:16 PM, Ronald Cronenwett wrote: > Perry, > > Thanks for the clarification. That's good information. > > Just one question. 
On my Centos builds I've loaded Openstack and Packstack > from: > > [epel-openstack-grizzly] > name=OpenStack Grizzly Repository for EPEL 6 > baseurl= > http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6 > enabled=1 > skip_if_unavailable=1 > gpgcheck=0 > priority=98 > > What is this repo and how does it relate to RDO grizzly? Would there be > any issues if I switch to RDO repos? > > Thanks > > Ron > > > > On Thu, Apr 18, 2013 at 12:28 PM, Perry Myers wrote: > >> On 04/17/2013 10:40 AM, Ronald Cronenwett wrote: >> > I saw this was announced at the Openstack Summit. Does that mean the >> > RHOS beta preview is ending?Or will the RHOS entitlement be updated to >> > Grizzly? Just curious which way I should go with some test systems I've >> > set up. The latest I've been working with was to implement Grizzly from >> > EPEL on a Centos 6.4 rebuild.But I still have my RHEL systems that came >> > with the preview running Folsom. >> >> Good questions :) >> >> Let me outline our release cadence in detail, starting with RHOS 1.0. >> That might make things more clear. >> >> Aug 2012: RHOS 1.0 Preview (Essex) available for RHEL 6.3 hosts >> Nov 2012: RHOS 2.0 Preview (Folsom) available for RHEL 6.3 hosts >> Feb 2013: RHOS 2.1 Preview (Folsom) available for RHEL 6.4 hosts >> Apr 2013: RHOS 2.1 released as limited availability via our early >> adopter program >> >> All of the above releases were made available through an evaluation >> program, which you can sign up for on http://redhat.com/openstack >> >> Even though RHOS 2.1 is released (and therefore the Preview is ended), >> you can still sign up for evaluation licenses and get access to the >> official packages (and updates) that way. And your existing evaluation >> with access to the RHOS 2.1 Preview repository seamlessly should >> transition to the formal RHOS 2.1 channel (in fact, the channel is the >> same) >> >> Apr 2013: RDO Grizzly available for EL 6.4+ hosts (and F18) >> >> What we will do in the next few weeks, is take the RDO Grizzly EL >> packages as a starting point, to create a RHOS 3.0 Preview. >> >> So initially RDO Grizzly for EL and RHOS 3.0 will be the same. RDO >> Grizzly will march forward, absorbing updates from the Grizzly upstream >> stable branches. >> >> RHOS 3.0 Preview will also march forward, absorbing updates from the >> upstream Grizzly stable branches, but it will also selectively >> incorporate patches from the upstream trunk (Havana) for bug fixes and >> enhancements that are not invasive. >> >> When we release RHOS 3.0 GA later this summer (approx Aug 2013), the >> RHOS 3.0 Preview will end and be replaced by the GA packages. >> >> But even after RHOS 3.0 transitions from Preview to GA, we'll still >> provide the process to sign up for evaluation licenses. >> >> One comment on this: >> > The latest I've been working with was to implement Grizzly from >> > EPEL on a Centos 6.4 rebuild >> >> Grizzly is not in EPEL. It's only available via the repositories as >> part of the RDO community. The packages in EPEL are still Folsom based, >> and there are no plans to update those to Grizzly. >> >> Let me know if those clarifications help, or if you have follow up >> questions :) >> >> Perry >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pmyers at redhat.com Thu Apr 18 22:52:45 2013 From: pmyers at redhat.com (Perry Myers) Date: Thu, 18 Apr 2013 15:52:45 -0700 Subject: [rhos-list] Red Hat Distribution OpenStack (RDO) In-Reply-To: References: <51701F16.8020000@redhat.com> Message-ID: <5170793D.5070702@redhat.com> On 04/18/2013 11:15 AM, Ronald Cronenwett wrote: > I just pulled down rdo-release-grizzly-1.noarch.rpm and see it is > pointing to the same thing as epel-openstack-grizzly.repo. So I've been > using RDO all along :) Correct. The packages have been around since we started packaging Grizzly for EL back with the Grizzly development milestones. > Browsing to http://rdo.fedorapeople.org/openstack/openstack-grizzly/ > makes it clear. But I think I got to that directory from a different URL > originally. Yep > Thanks again for the release info. > > Ron > > > On Thu, Apr 18, 2013 at 1:16 PM, Ronald Cronenwett > wrote: > > Perry, > > Thanks for the clarification. That's good information. > > Just one question. On my Centos builds I've loaded Openstack and > Packstack from: > > [epel-openstack-grizzly] > name=OpenStack Grizzly Repository for EPEL 6 > baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6 > enabled=1 > skip_if_unavailable=1 > gpgcheck=0 > priority=98 > > What is this repo and how does it relate to RDO grizzly? Would there > be any issues if I switch to RDO repos? > > Thanks > > Ron > > > > On Thu, Apr 18, 2013 at 12:28 PM, Perry Myers > wrote: > > On 04/17/2013 10:40 AM, Ronald Cronenwett wrote: > > I saw this was announced at the Openstack Summit. Does that > mean the > > RHOS beta preview is ending?Or will the RHOS entitlement be > updated to > > Grizzly? Just curious which way I should go with some test > systems I've > > set up. The latest I've been working with was to implement > Grizzly from > > EPEL on a Centos 6.4 rebuild.But I still have my RHEL systems > that came > > with the preview running Folsom. > > Good questions :) > > Let me outline our release cadence in detail, starting with RHOS > 1.0. > That might make things more clear. > > Aug 2012: RHOS 1.0 Preview (Essex) available for RHEL 6.3 hosts > Nov 2012: RHOS 2.0 Preview (Folsom) available for RHEL 6.3 hosts > Feb 2013: RHOS 2.1 Preview (Folsom) available for RHEL 6.4 hosts > Apr 2013: RHOS 2.1 released as limited availability via our early > adopter program > > All of the above releases were made available through an evaluation > program, which you can sign up for on http://redhat.com/openstack > > Even though RHOS 2.1 is released (and therefore the Preview is > ended), > you can still sign up for evaluation licenses and get access to the > official packages (and updates) that way. And your existing > evaluation > with access to the RHOS 2.1 Preview repository seamlessly should > transition to the formal RHOS 2.1 channel (in fact, the channel > is the same) > > Apr 2013: RDO Grizzly available for EL 6.4+ hosts (and F18) > > What we will do in the next few weeks, is take the RDO Grizzly EL > packages as a starting point, to create a RHOS 3.0 Preview. > > So initially RDO Grizzly for EL and RHOS 3.0 will be the same. RDO > Grizzly will march forward, absorbing updates from the Grizzly > upstream > stable branches. > > RHOS 3.0 Preview will also march forward, absorbing updates from the > upstream Grizzly stable branches, but it will also selectively > incorporate patches from the upstream trunk (Havana) for bug > fixes and > enhancements that are not invasive. 
> > When we release RHOS 3.0 GA later this summer (approx Aug 2013), the > RHOS 3.0 Preview will end and be replaced by the GA packages. > > But even after RHOS 3.0 transitions from Preview to GA, we'll still > provide the process to sign up for evaluation licenses. > > One comment on this: > > The latest I've been working with was to implement Grizzly from > > EPEL on a Centos 6.4 rebuild > > Grizzly is not in EPEL. It's only available via the repositories as > part of the RDO community. The packages in EPEL are still > Folsom based, > and there are no plans to update those to Grizzly. > > Let me know if those clarifications help, or if you have follow up > questions :) > > Perry > > > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > From gkotton at redhat.com Fri Apr 19 02:56:07 2013 From: gkotton at redhat.com (Gary Kotton) Date: Fri, 19 Apr 2013 05:56:07 +0300 Subject: [rhos-list] Quantum config questions. In-Reply-To: <51701FD5.1000909@redhat.com> References: <51701FD5.1000909@redhat.com> Message-ID: <5170B247.3000905@redhat.com> On 04/18/2013 07:31 PM, Perry Myers wrote: > On 04/17/2013 10:33 AM, Minton, Rich wrote: >> Question for you smart people? > I don't qualify for that comment, but I've included the other quantum > developers on cc: to weigh in :) > > Just to note though, the entire quantum development team is presently at > OpenStack Summit in Portland, busy trying to figure out what we're going > to do in Havana, so responses may be slightly delayed. > > Perry > >> >> >> I have Openstack configured to use Quantum networking. I thought I had >> all the entries needed in nova.conf but when I run ?nova-manage config >> list? some of the values aren?t getting picked up from my nova.conf >> file. Also, based on the fact that I?m using Quantum Networking are >> there additional config items that I should uncomment in nova.conf? I am not familiar with the command that you are invoking. I have a few questions regarding the quantum configuration: 1. Did you run the quantum helper installation scripts, for example quantum-server-setup? These configure the relevant configuration values in the nova conf file 2. There is a fedora wiki explaining these https://fedoraproject.org/wiki/Quantum (this is for the latest version but the script commands are the same) 3. Which plugin are you using? 
The nova configuration file should have configurations which are specific to the plugin Thanks Gary >> >> >> >> nova-manage config list >> >> >> >> fake_network = False >> >> network_topic = network >> >> instance_dns_manager = nova.network.dns_driver.DNSDriver >> >> floating_ip_dns_manager = nova.network.dns_driver.DNSDriver >> >> network_manager = nova.network.manager.FlatDHCPManager >> >> network_driver = nova.network.linux_net >> >> network_api_class = nova.network.api.API >> >> networks_path = /var/lib/nova/networks >> >> linuxnet_interface_driver = >> nova.network.linux_net.LinuxBridgeInterfaceDriver >> >> flat_network_dns = 8.8.4.4 >> >> num_networks = 1 >> >> network_host = localhost >> >> l3_lib = nova.network.l3.LinuxNetL3 >> >> >> >> nova.conf >> >> >> >> # fake_network=false >> >> # network_topic=network >> >> # instance_dns_manager=nova.network.dns_driver.DNSDriver >> >> # floating_ip_dns_manager=nova.network.dns_driver.DNSDriver >> >> # network_manager=nova.network.manager.FlatDHCPManager >> >> # network_driver=nova.network.linux_net >> >> network_api_class=nova.network.quantumv2.api.API >> >> # networks_path=$state_path/networks >> >> # >> linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterface >> Driver >> >> # flat_network_dns=8.8.4.4 >> >> # num_networks=1 >> >> # network_size=256 >> >> # network_host=nova >> >> # l3_lib=nova.network.l3.LinuxNetL3 >> >> >> >> Thanks, >> >> Rick >> >> >> >> _Richard Minton_ >> >> LMICC Systems Administrator >> >> 4000 Geerdes Blvd, 13D31 >> >> King of Prussia, PA 19406 >> >> Phone: 610-354-5482 >> >> >> >> >> >> _______________________________________________ >> rhos-list mailing list >> rhos-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhos-list >> From sellis at redhat.com Fri Apr 19 06:03:12 2013 From: sellis at redhat.com (Steven Ellis) Date: Fri, 19 Apr 2013 18:03:12 +1200 Subject: [rhos-list] RHOS and Ceph Message-ID: <5170DE20.30304@redhat.com> So I've got a customer very excited about our RHOS announcements and RDO and they are a potential for our early adopter program. One of their key questions is when (note when, not if) will Red Hat be shipping Ceph as part of their Enterprise Supported Open Stack environment. From their perspective RHS isn't a suitable scalable backend for all their Open Stack use cases, in particular high performance I/O block. For an RHOS deploy today, what do we recommend as a storage backend for the object/block storage? Steve -- Steven Ellis Solution Architect - Red Hat New Zealand *T:* +64 9 927 8856 *M:* +64 21 321 673 *E:* sellis at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.oulevey at cern.ch Fri Apr 19 09:00:26 2013 From: thomas.oulevey at cern.ch (Thomas Oulevey) Date: Fri, 19 Apr 2013 11:00:26 +0200 Subject: [rhos-list] RHOS and Ceph In-Reply-To: <5170DE20.30304@redhat.com> References: <5170DE20.30304@redhat.com> Message-ID: <517107AA.7010106@cern.ch> Hi, On 04/19/2013 08:03 AM, Steven Ellis wrote: > So I've got a customer very excited about our RHOS announcements and > RDO and they are a potential for our early adopter program. We are also excited to get Ceph working with RHEL6.X/RHEL7. First step: the missing bit is the "qemu-kvm" package compiled against rbd. Can anyone share any updates? (Feature request BZ921668.) -- Thomas. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rich.minton at lmco.com Fri Apr 19 11:37:49 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Fri, 19 Apr 2013 11:37:49 +0000 Subject: [rhos-list] EXTERNAL: Re: Quantum config questions. In-Reply-To: <5170B247.3000905@redhat.com> References: <51701FD5.1000909@redhat.com> <5170B247.3000905@redhat.com> Message-ID: Yes, I did run the setup command as part of the installation process. I am using the Openvswitch plugin. Thanks for the link, I'll take a look. "nova-manage config list" is supposed to list all of the configuration items that are set in nova. Some would be defaults if there isn't a corresponding entry in nova.conf. I would think that anything specifically called out in nova.conf would be listed in the output of the command but that wasn't the case for some. Rick -----Original Message----- From: Gary Kotton [mailto:gkotton at redhat.com] Sent: Thursday, April 18, 2013 10:56 PM To: Perry Myers Cc: Minton, Rich; rhos-list at redhat.com; Robert Kukura; Maru Newby; Terry Wilson; Ryan O'Hara Subject: EXTERNAL: Re: [rhos-list] Quantum config questions. On 04/18/2013 07:31 PM, Perry Myers wrote: > On 04/17/2013 10:33 AM, Minton, Rich wrote: >> Question for you smart people... > I don't qualify for that comment, but I've included the other quantum > developers on cc: to weigh in :) > > Just to note though, the entire quantum development team is presently > at OpenStack Summit in Portland, busy trying to figure out what we're > going to do in Havana, so responses may be slightly delayed. > > Perry > >> >> >> I have Openstack configured to use Quantum networking. I thought I >> had all the entries needed in nova.conf but when I run "nova-manage >> config list" some of the values aren't getting picked up from my >> nova.conf file. Also, based on the fact that I'm using Quantum >> Networking are there additional config items that I should uncomment in nova.conf? I am not familiar with the command that you are invoking. I have a few questions regarding the quantum configuration: 1. Did you run the quantum helper installation scripts, for example quantum-server-setup? These configure the relevant configuration values in the nova conf file 2. There is a fedora wiki explaining these https://fedoraproject.org/wiki/Quantum (this is for the latest version but the script commands are the same) 3. Which plugin are you using? 
The nova configuration file should have configurations which are specific to the plugin Thanks Gary >> >> >> >> nova-manage config list >> >> >> >> fake_network = False >> >> network_topic = network >> >> instance_dns_manager = nova.network.dns_driver.DNSDriver >> >> floating_ip_dns_manager = nova.network.dns_driver.DNSDriver >> >> network_manager = nova.network.manager.FlatDHCPManager >> >> network_driver = nova.network.linux_net >> >> network_api_class = nova.network.api.API >> >> networks_path = /var/lib/nova/networks >> >> linuxnet_interface_driver = >> nova.network.linux_net.LinuxBridgeInterfaceDriver >> >> flat_network_dns = 8.8.4.4 >> >> num_networks = 1 >> >> network_host = localhost >> >> l3_lib = nova.network.l3.LinuxNetL3 >> >> >> >> nova.conf >> >> >> >> # fake_network=false >> >> # network_topic=network >> >> # instance_dns_manager=nova.network.dns_driver.DNSDriver >> >> # floating_ip_dns_manager=nova.network.dns_driver.DNSDriver >> >> # network_manager=nova.network.manager.FlatDHCPManager >> >> # network_driver=nova.network.linux_net >> >> network_api_class=nova.network.quantumv2.api.API >> >> # networks_path=$state_path/networks >> >> # >> linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterface >> Driver >> >> # flat_network_dns=8.8.4.4 >> >> # num_networks=1 >> >> # network_size=256 >> >> # network_host=nova >> >> # l3_lib=nova.network.l3.LinuxNetL3 >> >> >> >> Thanks, >> >> Rick >> >> >> >> _Richard Minton_ >> >> LMICC Systems Administrator >> >> 4000 Geerdes Blvd, 13D31 >> >> King of Prussia, PA 19406 >> >> Phone: 610-354-5482 >> >> >> >> >> >> _______________________________________________ >> rhos-list mailing list >> rhos-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhos-list >> From gkotton at redhat.com Fri Apr 19 15:05:37 2013 From: gkotton at redhat.com (Gary Kotton) Date: Fri, 19 Apr 2013 18:05:37 +0300 Subject: [rhos-list] EXTERNAL: Re: Quantum config questions. In-Reply-To: References: <51701FD5.1000909@redhat.com> <5170B247.3000905@redhat.com> Message-ID: <51715D41.9040100@redhat.com> On 04/19/2013 02:37 PM, Minton, Rich wrote: > Yes, I did run the setup command as part of the installation process. I am using the Openvswitch plugin. Thanks for the link, I'll take a look. > > "nova-manage config list" is supposed to list all of the configuration items that are set in nova. Some would be defaults if there isn't a corresponding entry in nova.conf. I would think that anything specifically called out in nova.conf would be listed in the output of the command but that wasn't the case for some. Thanks for the clarification. When you run the script above, the configuration entries for quantum in the nova.conf are set. These are: # The quantum driver in nova network_api_class=nova.network.quantumv2.api.API # Quantum keystone authentication values quantum_admin_username= quantum_admin_password= quantum_admin_auth_url= quantum_auth_strategy=keystone quantum_admin_tenant_name= # The Quantum service quantum_url= # The specific plugin driver # OVS driver libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver I think that all of the values that you have listed below are not relevant and they are specific to the traditional nova networking.
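To make that concrete, the Quantum-related block of a Folsom compute node's nova.conf typically ends up looking something like the sketch below when the Open vSwitch plugin is in use. The controller address reuses the 10.16.137.100 example from earlier in this archive, and the keystone credentials are placeholders, not values from this thread:

[DEFAULT]
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://10.16.137.100:9696
quantum_auth_strategy=keystone
quantum_admin_auth_url=http://10.16.137.100:35357/v2.0
quantum_admin_tenant_name=services         # placeholder
quantum_admin_username=quantum             # placeholder
quantum_admin_password=<service password>  # placeholder
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver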
Thanks Gary > > Rick > > -----Original Message----- > From: Gary Kotton [mailto:gkotton at redhat.com] > Sent: Thursday, April 18, 2013 10:56 PM > To: Perry Myers > Cc: Minton, Rich; rhos-list at redhat.com; Robert Kukura; Maru Newby; Terry Wilson; Ryan O'Hara > Subject: EXTERNAL: Re: [rhos-list] Quantum config questions. > > On 04/18/2013 07:31 PM, Perry Myers wrote: >> On 04/17/2013 10:33 AM, Minton, Rich wrote: >>> Question for you smart people... >> I don't qualify for that comment, but I've included the other quantum >> developers on cc: to weigh in :) >> >> Just to note though, the entire quantum development team is presently >> at OpenStack Summit in Portland, busy trying to figure out what we're >> going to do in Havana, so responses may be slightly delayed. >> >> Perry >> >>> >>> I have Openstack configured to use Quantum networking. I thought I >>> had all the entries needed in nova.conf but when I run "nova-manage >>> config list" some of the values aren't getting picked up from my >>> nova.conf file. Also, based on the fact that I'm using Quantum >>> Networking are there additional config items that I should uncomment in nova.conf? > I am not familiar with the command that you are invoking. I have a few questions regarding the quantum configuration: > 1. Did you run the quantum helper installation scripts, for example quantum-server-setup? These configure the relevant configuration values in the nova conf file 2. There is a fedora wiki explaining these https://fedoraproject.org/wiki/Quantum (this is for the latest version but the script commands are the same) 3. Which plugin are you using? The nova configuration file should have configurations which are specific to the plugin > > Thanks > Gary >>> >>> >>> nova-manage config list >>> >>> >>> >>> fake_network = False >>> >>> network_topic = network >>> >>> instance_dns_manager = nova.network.dns_driver.DNSDriver >>> >>> floating_ip_dns_manager = nova.network.dns_driver.DNSDriver >>> >>> network_manager = nova.network.manager.FlatDHCPManager >>> >>> network_driver = nova.network.linux_net >>> >>> network_api_class = nova.network.api.API >>> >>> networks_path = /var/lib/nova/networks >>> >>> linuxnet_interface_driver = >>> nova.network.linux_net.LinuxBridgeInterfaceDriver >>> >>> flat_network_dns = 8.8.4.4 >>> >>> num_networks = 1 >>> >>> network_host = localhost >>> >>> l3_lib = nova.network.l3.LinuxNetL3 >>> >>> >>> >>> nova.conf >>> >>> >>> >>> # fake_network=false >>> >>> # network_topic=network >>> >>> # instance_dns_manager=nova.network.dns_driver.DNSDriver >>> >>> # floating_ip_dns_manager=nova.network.dns_driver.DNSDriver >>> >>> # network_manager=nova.network.manager.FlatDHCPManager >>> >>> # network_driver=nova.network.linux_net >>> >>> network_api_class=nova.network.quantumv2.api.API >>> >>> # networks_path=$state_path/networks >>> >>> # >>> linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterface >>> Driver >>> >>> # flat_network_dns=8.8.4.4 >>> >>> # num_networks=1 >>> >>> # network_size=256 >>> >>> # network_host=nova >>> >>> # l3_lib=nova.network.l3.LinuxNetL3 >>> >>> >>> >>> Thanks, >>> >>> Rick >>> >>> >>> >>> _Richard Minton_ >>> >>> LMICC Systems Administrator >>> >>> 4000 Geerdes Blvd, 13D31 >>> >>> King of Prussia, PA 19406 >>> >>> Phone: 610-354-5482 >>> >>> >>> >>> >>> >>> _______________________________________________ >>> rhos-list mailing list >>> rhos-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rhos-list >>> From acathrow at redhat.com Fri Apr 19 16:11:46 2013 From: 
acathrow at redhat.com (Andrew Cathrow) Date: Fri, 19 Apr 2013 12:11:46 -0400 (EDT) Subject: [rhos-list] RHOS and Ceph In-Reply-To: <517107AA.7010106@cern.ch> References: <5170DE20.30304@redhat.com> <517107AA.7010106@cern.ch> Message-ID: <975190697.3571048.1366387906145.JavaMail.root@redhat.com> Support for ceph is being discussed. Have you tried Gluster - does that address your use case? Aic ----- Original Message ----- > From: "Thomas Oulevey" > To: rhos-list at redhat.com > Sent: Friday, April 19, 2013 5:00:26 AM > Subject: Re: [rhos-list] RHOS and Ceph > Hi, > On 04/19/2013 08:03 AM, Steven Ellis wrote: > > So I've got a customer very excited about our RHOS announcements and RDO > > and > > they are a potential for our early adopter program. > > We are as well excited to get ceph working with RHEL6.X/RHEL7 > Frist step, the missing bit is the "qemu-kvm" package compile against rbd. > Does anyone can share some updates ? > (Feature request BZ921668.) > -- > Thomas. > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From zaitcev at redhat.com Fri Apr 19 16:36:59 2013 From: zaitcev at redhat.com (Pete Zaitcev) Date: Fri, 19 Apr 2013 10:36:59 -0600 Subject: [rhos-list] RHOS and Ceph In-Reply-To: <5170DE20.30304@redhat.com> References: <5170DE20.30304@redhat.com> Message-ID: <20130419103659.2cf75ec2@lembas.zaitcev.lan> On Fri, 19 Apr 2013 18:03:12 +1200 Steven Ellis wrote: > One of their key questions is when (note when, not if) will Red Hat be > shipping Ceph as part of their Enterprise Supported Open Stack > environment. From their perspective RHS isn't a suitable scalable > backend for all their Open Stack use cases, in particular high > performance I/O block Okay, since you ask, here's my take, as an engineer. Firstly, I would be interested in hearing more. If someone made up their mind in such terms there's no dissuading them. But if they have a rational basis for saying that "high performance I/O block" in Gluster is somehow deficient, it would be very interesting to learn the details. My sense of this is that we're quite unlikely to offer a support for Ceph any time soon. First, nobody so far presented a credible case for it, as far as I know, and second, we don't have the expertise. I saw cases like that before, in a sense that customers come to us and think they have all the answers and we better do as we're told. This is difficult because on the one hand customer is always right, but on the other hand we always stand behind our supported product. It happened with reiserfs and XFS. But we refused to support reiserfs, while we support XFS. The key difference is that reiserfs was junk, and XFS is not. That said, XFS took a very long time to establish -- years. We had to hire Dave Cinner to take care of it. Even if the case for Ceph gains arguments, it takes time to establish in-house expertise that we can offer as a valuable service to customers. Until that time selling Ceph would be irresponsible. The door is certainly open to it. Make a rational argument, be patient, and see what comes out. Note that a mere benchmark for "high performance I/O block" isn't going to cut it. Reiser was beating our preferred solution, ext3. But in the end we could not recommend a filesystem that ate customer data, and stuck with ext3 despite the lower performance. 
Not saying Ceph is junk at all, but you need a better argument against GlusterFS. -- Pete From joey at scare.org Fri Apr 19 17:10:33 2013 From: joey at scare.org (Joey McDonald) Date: Fri, 19 Apr 2013 11:10:33 -0600 Subject: [rhos-list] RHOS and Ceph In-Reply-To: <20130419103659.2cf75ec2@lembas.zaitcev.lan> References: <5170DE20.30304@redhat.com> <20130419103659.2cf75ec2@lembas.zaitcev.lan> Message-ID: Simply enabling support for it is not the same as supporting it. Ceph is already supported via the cephfs fuse-based file system. I think the concepts are similar. Two things are needed: kernel module for rbd and ceph hooks in kvm. Then, let the ceph community offer 'support'. Is this not what was done for gluster before they were acquired? It is Linux after all... kumbaya. On Fri, Apr 19, 2013 at 10:36 AM, Pete Zaitcev wrote: > On Fri, 19 Apr 2013 18:03:12 +1200 > Steven Ellis wrote: > > > One of their key questions is when (note when, not if) will Red Hat be > > shipping Ceph as part of their Enterprise Supported Open Stack > > environment. From their perspective RHS isn't a suitable scalable > > backend for all their Open Stack use cases, in particular high > > performance I/O block > > Okay, since you ask, here's my take, as an engineer. > > Firstly, I would be interested in hearing more. If someone made up their > mind in such terms there's no dissuading them. But if they have a rational > basis for saying that "high performance I/O block" in Gluster is somehow > deficient, it would be very interesting to learn the details. > > My sense of this is that we're quite unlikely to offer a support > for Ceph any time soon. First, nobody so far presented a credible case > for it, as far as I know, and second, we don't have the expertise. > > I saw cases like that before, in a sense that customers come to us and > think they have all the answers and we better do as we're told. > This is difficult because on the one hand customer is always right, > but on the other hand we always stand behind our supported product. > It happened with reiserfs and XFS. But we refused to support reiserfs, > while we support XFS. The key difference is that reiserfs was junk, > and XFS is not. > > That said, XFS took a very long time to establish -- years. We had to > hire Dave Cinner to take care of it. Even if the case for Ceph gains > arguments, it takes time to establish in-house expertise that we can > offer as a valuable service to customers. Until that time selling > Ceph would be irresponsible. > > The door is certainly open to it. Make a rational argument, be patient, > and see what comes out. > > Note that a mere benchmark for "high performance I/O block" isn't going > to cut it. Reiser was beating our preferred solution, ext3. But in the > end we could not recommend a filesystem that ate customer data, and stuck > with ext3 despite the lower performance. Not saying Ceph is junk at all, > but you need a better argument against GlusterFS. > > -- Pete > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prmarino1 at gmail.com Fri Apr 19 18:16:23 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Fri, 19 Apr 2013 14:16:23 -0400 Subject: [rhos-list] RHOS and Ceph In-Reply-To: Message-ID: <517189fe.c893e00a.7d48.ffffd6f0@mx.google.com> An HTML attachment was scrubbed... 
URL: From sellis at redhat.com Fri Apr 19 21:46:41 2013 From: sellis at redhat.com (Steven Ellis) Date: Sat, 20 Apr 2013 09:46:41 +1200 Subject: [rhos-list] RHOS and Ceph In-Reply-To: <517189fe.c893e00a.7d48.ffffd6f0@mx.google.com> References: <517189fe.c893e00a.7d48.ffffd6f0@mx.google.com> Message-ID: <5171BB41.3030506@redhat.com> Wow some great discussion. I'm with Paul. Lets look at some real SAN hardware for big I/O at the moment. A lot of customers already have that for their existing VMware / RHEV backends. Then RHS (Gluster) is a great fit for object and other lower I/O use cases. After being at Linux.conf.au back in January there was a great deal of perception that Ceph is the default or is required for OpenStack and it can be quite a struggle to overcome that perception once it takes hold. I'm open to other suggestions for positioning RHOS on different storage backends. Steve On 04/20/2013 06:16 AM, Paul Robert Marino wrote: > Um hum > If you want hi block level IO performance why not use one of the many > SAN or NAS drivers? Grizzly has quite a few of them, and honestly > that's the only way you will get any real IO performance. > > > > -- Sent from my HP Pre3 > > ------------------------------------------------------------------------ > On Apr 19, 2013 1:11 PM, Joey McDonald wrote: > > Simply enabling support for it is not the same as supporting it. Ceph > is already supported via the cephfs fuse-based file system. I think > the concepts are similar. > > Two things are needed: kernel module for rbd and ceph hooks in kvm. > Then, let the ceph community offer 'support'. > > Is this not what was done for gluster before they were acquired? It is > Linux after all... kumbaya. > > > > On Fri, Apr 19, 2013 at 10:36 AM, Pete Zaitcev > wrote: > > On Fri, 19 Apr 2013 18:03:12 +1200 > Steven Ellis > wrote: > > > One of their key questions is when (note when, not if) will Red > Hat be > > shipping Ceph as part of their Enterprise Supported Open Stack > > environment. From their perspective RHS isn't a suitable scalable > > backend for all their Open Stack use cases, in particular high > > performance I/O block > > Okay, since you ask, here's my take, as an engineer. > > Firstly, I would be interested in hearing more. If someone made up > their > mind in such terms there's no dissuading them. But if they have a > rational > basis for saying that "high performance I/O block" in Gluster is > somehow > deficient, it would be very interesting to learn the details. > > My sense of this is that we're quite unlikely to offer a support > for Ceph any time soon. First, nobody so far presented a credible case > for it, as far as I know, and second, we don't have the expertise. > > I saw cases like that before, in a sense that customers come to us and > think they have all the answers and we better do as we're told. > This is difficult because on the one hand customer is always right, > but on the other hand we always stand behind our supported product. > It happened with reiserfs and XFS. But we refused to support reiserfs, > while we support XFS. The key difference is that reiserfs was junk, > and XFS is not. > > That said, XFS took a very long time to establish -- years. We had to > hire Dave Cinner to take care of it. Even if the case for Ceph gains > arguments, it takes time to establish in-house expertise that we can > offer as a valuable service to customers. Until that time selling > Ceph would be irresponsible. > > The door is certainly open to it. 
Make a rational argument, be > patient, > and see what comes out. > > Note that a mere benchmark for "high performance I/O block" isn't > going > to cut it. Reiser was beating our preferred solution, ext3. But in the > end we could not recommend a filesystem that ate customer data, > and stuck > with ext3 despite the lower performance. Not saying Ceph is junk > at all, > but you need a better argument against GlusterFS. > > -- Pete > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -- Steven Ellis Solution Architect - Red Hat New Zealand *T:* +64 9 927 8856 *M:* +64 21 321 673 *E:* sellis at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sellis at redhat.com Sat Apr 20 02:40:44 2013 From: sellis at redhat.com (Steven Ellis) Date: Sat, 20 Apr 2013 14:40:44 +1200 Subject: [rhos-list] RHOS and Ceph In-Reply-To: <975190697.3571048.1366387906145.JavaMail.root@redhat.com> References: <5170DE20.30304@redhat.com> <517107AA.7010106@cern.ch> <975190697.3571048.1366387906145.JavaMail.root@redhat.com> Message-ID: <5172002C.6050806@redhat.com> Gluster / RHS is actually spot on for the object and file centric use cases at present. And then traditional SAN for the high I/O block device. Steve On 04/20/2013 04:11 AM, Andrew Cathrow wrote: > Support for ceph is being discussed. > Have you tried Gluster - does that address your use case? > > Aic > > > ------------------------------------------------------------------------ > > *From: *"Thomas Oulevey" > *To: *rhos-list at redhat.com > *Sent: *Friday, April 19, 2013 5:00:26 AM > *Subject: *Re: [rhos-list] RHOS and Ceph > > Hi, > > On 04/19/2013 08:03 AM, Steven Ellis wrote: > > So I've got a customer very excited about our RHOS > announcements and RDO and they are a potential for our early > adopter program. > > > We are as well excited to get ceph working with RHEL6.X/RHEL7 > Frist step, the missing bit is the "qemu-kvm" package compile > against rbd. > > Does anyone can share some updates ? > > (Feature request BZ921668.) > > -- > Thomas. > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.oulevey at cern.ch Mon Apr 22 07:51:53 2013 From: thomas.oulevey at cern.ch (Thomas Oulevey) Date: Mon, 22 Apr 2013 09:51:53 +0200 Subject: [rhos-list] RHOS and Ceph In-Reply-To: <5171BB41.3030506@redhat.com> References: <517189fe.c893e00a.7d48.ffffd6f0@mx.google.com> <5171BB41.3030506@redhat.com> Message-ID: <5174EC19.6000709@cern.ch> Hi, We evaluated glusterfs for our Openstack use case, as we did with proprietary NAS and we would like to do with Ceph. I don't expect support from Redhat but I think it's nice to keep your option open if you get few customers requests. I quickly looked into qemu-kvm source and with over 2850 patches applied to 0.12 sources I don't know how complex it is to backport rbd support. 
After the last week Openstack Summit we will get probably more information on future plan for all vendors (redhat, inktank). Now on RHS one of the requirement that kill it for VMs storage IMHO (IO apart, but high hope with future version and glusterfs 3.4) is the requirement of RAID6(+xfs). It means a lot of redundancy (and high cost, not mentioning hardware RAID card reliability) when you want to run VMs with a 3 replica volumes for a big cloud (think over 2000 hypervisor). Some translators are going in this direction to get a kind of networked Raid5 so let see what will be integrated in next RHS. Btw, Glusterfs 3.4 when released will have the same issue for block storage testing, a newer version of qemu is needed. (BZ 848070) For the NAS/SAN proprietary solution, keep in mind nobody is interested in vendor lock-in especially at scale. (cost, renewal of contract/provider, specific tools, etc...) Finally, I completely understand Redhat resources are not unlimited but with RHEL7 coming, it's a good opportunity to ask for features. More hardware/software support in the stock OS, happier we are :) Thomas. On 04/19/2013 11:46 PM, Steven Ellis wrote: > Wow some great discussion. > > I'm with Paul. Lets look at some real SAN hardware for big I/O at the > moment. A lot of customers already have that for their existing VMware > / RHEV backends. > > Then RHS (Gluster) is a great fit for object and other lower I/O use > cases. > > After being at Linux.conf.au back in January there was a great deal of > perception that Ceph is the default or is required for OpenStack and > it can be quite a struggle to overcome that perception once it takes hold. > > I'm open to other suggestions for positioning RHOS on different > storage backends. > > Steve > > On 04/20/2013 06:16 AM, Paul Robert Marino wrote: >> Um hum >> If you want hi block level IO performance why not use one of the many >> SAN or NAS drivers? Grizzly has quite a few of them, and honestly >> that's the only way you will get any real IO performance. >> >> >> >> -- Sent from my HP Pre3 >> >> ------------------------------------------------------------------------ >> On Apr 19, 2013 1:11 PM, Joey McDonald wrote: >> >> Simply enabling support for it is not the same as supporting it. Ceph >> is already supported via the cephfs fuse-based file system. I think >> the concepts are similar. >> >> Two things are needed: kernel module for rbd and ceph hooks in kvm. >> Then, let the ceph community offer 'support'. >> >> Is this not what was done for gluster before they were acquired? It >> is Linux after all... kumbaya. >> >> >> >> On Fri, Apr 19, 2013 at 10:36 AM, Pete Zaitcev > > wrote: >> >> On Fri, 19 Apr 2013 18:03:12 +1200 >> Steven Ellis > wrote: >> >> > One of their key questions is when (note when, not if) will Red >> Hat be >> > shipping Ceph as part of their Enterprise Supported Open Stack >> > environment. From their perspective RHS isn't a suitable scalable >> > backend for all their Open Stack use cases, in particular high >> > performance I/O block >> >> Okay, since you ask, here's my take, as an engineer. >> >> Firstly, I would be interested in hearing more. If someone made >> up their >> mind in such terms there's no dissuading them. But if they have a >> rational >> basis for saying that "high performance I/O block" in Gluster is >> somehow >> deficient, it would be very interesting to learn the details. >> >> My sense of this is that we're quite unlikely to offer a support >> for Ceph any time soon. 
First, nobody so far presented a credible >> case >> for it, as far as I know, and second, we don't have the expertise. >> >> I saw cases like that before, in a sense that customers come to >> us and >> think they have all the answers and we better do as we're told. >> This is difficult because on the one hand customer is always right, >> but on the other hand we always stand behind our supported product. >> It happened with reiserfs and XFS. But we refused to support >> reiserfs, >> while we support XFS. The key difference is that reiserfs was junk, >> and XFS is not. >> >> That said, XFS took a very long time to establish -- years. We had to >> hire Dave Cinner to take care of it. Even if the case for Ceph gains >> arguments, it takes time to establish in-house expertise that we can >> offer as a valuable service to customers. Until that time selling >> Ceph would be irresponsible. >> >> The door is certainly open to it. Make a rational argument, be >> patient, >> and see what comes out. >> >> Note that a mere benchmark for "high performance I/O block" isn't >> going >> to cut it. Reiser was beating our preferred solution, ext3. But >> in the >> end we could not recommend a filesystem that ate customer data, >> and stuck >> with ext3 despite the lower performance. Not saying Ceph is junk >> at all, >> but you need a better argument against GlusterFS. >> >> -- Pete >> >> _______________________________________________ >> rhos-list mailing list >> rhos-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhos-list >> >> >> >> >> _______________________________________________ >> rhos-list mailing list >> rhos-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhos-list > > > -- > Steven Ellis > Solution Architect - Red Hat New Zealand > *T:* +64 9 927 8856 > *M:* +64 21 321 673 > *E:* sellis at redhat.com > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From prmarino1 at gmail.com Mon Apr 22 13:42:13 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Mon, 22 Apr 2013 09:42:13 -0400 Subject: [rhos-list] RHOS and Ceph In-Reply-To: <5174EC19.6000709@cern.ch> References: <517189fe.c893e00a.7d48.ffffd6f0@mx.google.com> <5171BB41.3030506@redhat.com> <5174EC19.6000709@cern.ch> Message-ID: look storage is rated by 5 categories at most you get good ratings on any storage device on 2 maybe 3 on an above average implementation. 1) ease of implementation 2) speed 3) economy of cost 4) density 5) flexibility speed also comes in two categories 1) IO operations per second 2) shear bandwidth As far as the vendor lock in goes thats the whole point of the abstraction in openstack. You can buy a different SAN or NAS device from one year to the next and it will be transparent to the users of the storage. or if you want you can make your own as long as there is a driver in openstack for it. Support vendors can't try to do every thing or they will do nothing well. Ceph is a non option right now for Redhat and frankly if It was included I would expect to be able to file a bug report on any issues I had with it and follow up with a support ticket to get the problem fixed so in responce to an earlier statement "Simply enabling support for it is not the same as supporting it" YES IT DOES. 
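To make the abstraction point concrete: in Cinder the backend is selected by a single volume_driver entry in cinder.conf, so moving from the default LVM/iSCSI backend to a vendor SAN or NAS backend is a configuration change rather than an application change. The class paths below are only illustrative and should be checked against the installed release.

# /etc/cinder/cinder.conf
# default LVM-backed iSCSI driver
volume_driver=cinder.volume.driver.ISCSIDriver
# switching to a vendor backend means pointing at a different driver
# class and adding its driver-specific options, e.g. (illustrative):
# volume_driver=cinder.volume.netapp.NetAppISCSIDriver

Instances and users keep talking to the same volume API either way, which is what keeps a later change of storage vendor transparent.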
On Mon, Apr 22, 2013 at 3:51 AM, Thomas Oulevey wrote: > Hi, > > We evaluated glusterfs for our Openstack use case, as we did with > proprietary NAS and we would like to do with Ceph. > I don't expect support from Redhat but I think it's nice to keep your > option open if you get few customers requests. > I quickly looked into qemu-kvm source and with over 2850 patches applied > to 0.12 sources I don't know how complex > it is to backport rbd support. > After the last week Openstack Summit we will get probably more information > on future plan for all vendors (redhat, inktank). > > Now on RHS one of the requirement that kill it for VMs storage IMHO (IO > apart, but high hope with future version and glusterfs 3.4) > is the requirement of RAID6(+xfs). It means a lot of redundancy (and high > cost, not mentioning hardware RAID card reliability) when you want to run > VMs with a 3 replica volumes for a big cloud (think over 2000 hypervisor). > Some translators are going in this direction to get a kind of networked > Raid5 so let see what will be integrated in next RHS. > > Btw, Glusterfs 3.4 when released will have the same issue for block > storage testing, a newer version of qemu is needed. (BZ 848070) > > For the NAS/SAN proprietary solution, keep in mind nobody is interested in > vendor lock-in especially at scale. (cost, renewal of contract/provider, > specific tools, etc...) > > Finally, I completely understand Redhat resources are not unlimited but > with RHEL7 coming, it's a good opportunity to ask for features. > More hardware/software support in the stock OS, happier we are :) > > Thomas. > > On 04/19/2013 11:46 PM, Steven Ellis wrote: > > Wow some great discussion. > > I'm with Paul. Lets look at some real SAN hardware for big I/O at the > moment. A lot of customers already have that for their existing VMware / > RHEV backends. > > Then RHS (Gluster) is a great fit for object and other lower I/O use cases. > > After being at Linux.conf.au back in January there was a great deal of > perception that Ceph is the default or is required for OpenStack and it can > be quite a struggle to overcome that perception once it takes hold. > > I'm open to other suggestions for positioning RHOS on different storage > backends. > > Steve > > On 04/20/2013 06:16 AM, Paul Robert Marino wrote: > > Um hum > If you want hi block level IO performance why not use one of the many SAN > or NAS drivers? Grizzly has quite a few of them, and honestly that's the > only way you will get any real IO performance. > > > > -- Sent from my HP Pre3 > > ------------------------------ > On Apr 19, 2013 1:11 PM, Joey McDonald wrote: > > Simply enabling support for it is not the same as supporting it. Ceph is > already supported via the cephfs fuse-based file system. I think the > concepts are similar. > > Two things are needed: kernel module for rbd and ceph hooks in kvm. > Then, let the ceph community offer 'support'. > > Is this not what was done for gluster before they were acquired? It is > Linux after all... kumbaya. > > > > On Fri, Apr 19, 2013 at 10:36 AM, Pete Zaitcev wrote: > >> On Fri, 19 Apr 2013 18:03:12 +1200 >> Steven Ellis wrote: >> >> > One of their key questions is when (note when, not if) will Red Hat be >> > shipping Ceph as part of their Enterprise Supported Open Stack >> > environment. From their perspective RHS isn't a suitable scalable >> > backend for all their Open Stack use cases, in particular high >> > performance I/O block >> >> Okay, since you ask, here's my take, as an engineer. 
>> >> Firstly, I would be interested in hearing more. If someone made up their >> mind in such terms there's no dissuading them. But if they have a rational >> basis for saying that "high performance I/O block" in Gluster is somehow >> deficient, it would be very interesting to learn the details. >> >> My sense of this is that we're quite unlikely to offer a support >> for Ceph any time soon. First, nobody so far presented a credible case >> for it, as far as I know, and second, we don't have the expertise. >> >> I saw cases like that before, in a sense that customers come to us and >> think they have all the answers and we better do as we're told. >> This is difficult because on the one hand customer is always right, >> but on the other hand we always stand behind our supported product. >> It happened with reiserfs and XFS. But we refused to support reiserfs, >> while we support XFS. The key difference is that reiserfs was junk, >> and XFS is not. >> >> That said, XFS took a very long time to establish -- years. We had to >> hire Dave Cinner to take care of it. Even if the case for Ceph gains >> arguments, it takes time to establish in-house expertise that we can >> offer as a valuable service to customers. Until that time selling >> Ceph would be irresponsible. >> >> The door is certainly open to it. Make a rational argument, be patient, >> and see what comes out. >> >> Note that a mere benchmark for "high performance I/O block" isn't going >> to cut it. Reiser was beating our preferred solution, ext3. But in the >> end we could not recommend a filesystem that ate customer data, and stuck >> with ext3 despite the lower performance. Not saying Ceph is junk at all, >> but you need a better argument against GlusterFS. >> >> -- Pete >> >> _______________________________________________ >> rhos-list mailing list >> rhos-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhos-list >> > > > > _______________________________________________ > rhos-list mailing listrhos-list at redhat.comhttps://www.redhat.com/mailman/listinfo/rhos-list > > > > -- > Steven Ellis > Solution Architect - Red Hat New Zealand > *T:* +64 9 927 8856 > *M:* +64 21 321 673 > *E:* sellis at redhat.com > > > _______________________________________________ > rhos-list mailing listrhos-list at redhat.comhttps://www.redhat.com/mailman/listinfo/rhos-list > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rich.minton at lmco.com Mon Apr 22 20:52:15 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Mon, 22 Apr 2013 20:52:15 +0000 Subject: [rhos-list] Problem launching a VM. Message-ID: Anybody have a clue what this is all about? I launch a VM, it gets an IP, spins at 'spawning' for a while, goes back to 'networking', then fails with this error in the "compute.log". It looks like something with the console. I'm running Folsom with Quantum Networking. One compute/controller node and one networking node. 
2013-04-22 16:42:03 ERROR nova.compute.manager [req-76769e3b-01cb-4e19-877b-67bfa996d6b2 89a23126dc714d8589b169aa15435aa8 222092c4ef21472a9c76d9b54fad8f1c] [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] Instance failed to spawn 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] Traceback (most recent call last): 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 756, in _spawn 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] block_device_info) 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] temp_level, payload) 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] self.gen.next() 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] return f(*args, **kw) 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1106, in spawn 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] block_device_info) 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1930, in _create_domain_and_network 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] domain = self._create_domain(xml) 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1903, in _create_domain 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] domain.createWithFlags(launch_flags) 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 187, in doit 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] result = proxy_call(self._autowrap, f, *args, **kwargs) 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 147, in proxy_call 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] rv = execute(f,*args,**kwargs) 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 76, in tworker 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] rv = meth(*args,**kwargs) 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: 
a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File "/usr/lib64/python2.6/site-packages/libvirt.py", line 708, in createWithFlags 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self) 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] libvirtError: internal error Process exited while reading console log output: chardev: opening backend "file" failed 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] 2013-04-22 16:42:03 INFO nova.compute.resource_tracker [req-76769e3b-01cb-4e19-877b-67bfa996d6b2 89a23126dc714d8589b169aa15435aa8 222092c4ef21472a9c76d9b54fad8f1c] Aborting claim: [Claim a7f49b0e-df2e-47b7-af7d-73d76efd0c77: 4096 MB memory, 40 GB disk, 2 VCPUS] 2013-04-22 16:42:03 ERROR nova.compute.manager [req-76769e3b-01cb-4e19-877b-67bfa996d6b2 89a23126dc714d8589b169aa15435aa8 222092c4ef21472a9c76d9b54fad8f1c] [instance: a7f49b0e-df2e-47b7-af7d-73d76efd0c77] Build error: ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 503, in _run_instance\n injected_files, admin_password)\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 756, in _spawn\n block_device_info)\n', ' File "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped\n temp_level, payload)\n', ' File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__\n self.gen.next()\n', ' File "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped\n return f(*args, **kw)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1106, in spawn\n block_device_info)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1930, in _create_domain_and_network\n domain = self._create_domain(xml)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1903, in _create_domain\n domain.createWithFlags(launch_flags)\n', ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 187, in doit\n result = proxy_call(self._autowrap, f, *args, **kwargs)\n', ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 147, in proxy_call\n rv = execute(f,*args,**kwargs)\n', ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 76, in tworker\n rv = meth(*args,**kwargs)\n', ' File "/usr/lib64/python2.6/site-packages/libvirt.py", line 708, in createWithFlags\n if ret == -1: raise libvirtError (\'virDomainCreateWithFlags() failed\', dom=self)\n', 'libvirtError: internal error Process exited while reading console log output: chardev: opening backend "file" failed\n\n'] Thank you! Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgordon at redhat.com Mon Apr 22 21:02:15 2013 From: sgordon at redhat.com (Steve Gordon) Date: Mon, 22 Apr 2013 17:02:15 -0400 (EDT) Subject: [rhos-list] Problem launching a VM. In-Reply-To: References: Message-ID: <124567397.677515.1366664535033.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Rich Minton" > To: rhos-list at redhat.com > Sent: Monday, April 22, 2013 4:52:15 PM > Subject: [rhos-list] Problem launching a VM. 
> > Anybody have a clue what this is all about? I launch a VM, it gets an IP, > spins at 'spawning' for a while, goes back to 'networking', then fails with > this error in the "compute.log". It looks like something with the console. > I'm running Folsom with Quantum Networking. One compute/controller node and > one networking node. Is the backing storage on NFS? It looks like the error returned when the SELinux virt_use_nfs boolean hasn't been set to true (I believe I have an open docs bug on this but it made the release notes for now). Thanks, Steve > 2013-04-22 16:42:03 ERROR nova.compute.manager > [req-76769e3b-01cb-4e19-877b-67bfa996d6b2 89a23126dc714d8589b169aa15435aa8 > 222092c4ef21472a9c76d9b54fad8f1c] [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] Instance failed to spawn > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] Traceback (most recent call last): > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File > "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 756, in > _spawn > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] block_device_info) > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File > "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] temp_level, payload) > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File > "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] self.gen.next() > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File > "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] return f(*args, **kw) > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File > "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1106, > in spawn > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] block_device_info) > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File > "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1930, > in _create_domain_and_network > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] domain = self._create_domain(xml) > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File > "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1903, > in _create_domain > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] > domain.createWithFlags(launch_flags) > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File > "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 187, in doit > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] result = > proxy_call(self._autowrap, f, *args, **kwargs) > 2013-04-22 16:42:03 
486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File > "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 147, in > proxy_call > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] rv = execute(f,*args,**kwargs) > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File > "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 76, in tworker > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] rv = meth(*args,**kwargs) > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] File > "/usr/lib64/python2.6/site-packages/libvirt.py", line 708, in > createWithFlags > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] if ret == -1: raise libvirtError > ('virDomainCreateWithFlags() failed', dom=self) > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] libvirtError: internal error Process > exited while reading console log output: chardev: opening backend "file" > failed > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] > 2013-04-22 16:42:03 486 TRACE nova.compute.manager [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] > 2013-04-22 16:42:03 INFO nova.compute.resource_tracker > [req-76769e3b-01cb-4e19-877b-67bfa996d6b2 89a23126dc714d8589b169aa15435aa8 > 222092c4ef21472a9c76d9b54fad8f1c] Aborting claim: [Claim > a7f49b0e-df2e-47b7-af7d-73d76efd0c77: 4096 MB memory, 40 GB disk, 2 VCPUS] > 2013-04-22 16:42:03 ERROR nova.compute.manager > [req-76769e3b-01cb-4e19-877b-67bfa996d6b2 89a23126dc714d8589b169aa15435aa8 > 222092c4ef21472a9c76d9b54fad8f1c] [instance: > a7f49b0e-df2e-47b7-af7d-73d76efd0c77] Build error: ['Traceback (most recent > call last):\n', ' File > "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 503, in > _run_instance\n injected_files, admin_password)\n', ' File > "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 756, in > _spawn\n block_device_info)\n', ' File > "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped\n > temp_level, payload)\n', ' File "/usr/lib64/python2.6/contextlib.py", > line 23, in __exit__\n self.gen.next()\n', ' File > "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped\n > return f(*args, **kw)\n', ' File > "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1106, > in spawn\n block_device_info)\n', ' File > "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1930, > in _create_domain_and_network\n domain = self._create_domain(xml)\n', ' > File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line > 1903, in _create_domain\n domain.createWithFlags(launch_flags)\n', ' > File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 187, in > doit\n result = proxy_call(self._autowrap, f, *args, **kwargs)\n', ' > File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 147, in > proxy_call\n rv = execute(f,*args,**kwargs)\n', ' File > "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 76, in tworker\n > rv = meth(*args,**kwargs)\n', ' File > "/usr/lib64/python2.6/site-packages/libvirt.py", line 708, in > createWithFlags\n if ret == -1: raise libvirtError > (\'virDomainCreateWithFlags() failed\', dom=self)\n', 'libvirtError: > internal 
error Process exited while reading console log output: chardev: > opening backend "file" failed\n\n'] > > Thank you! > > > Richard Minton > LMICC Systems Administrator > 4000 Geerdes Blvd, 13D31 > King of Prussia, PA 19406 > Phone: 610-354-5482 > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From ayoung at redhat.com Sat Apr 27 01:13:05 2013 From: ayoung at redhat.com (Adam Young) Date: Fri, 26 Apr 2013 21:13:05 -0400 Subject: [rhos-list] Announcing RDO In-Reply-To: <516EF607.2030506@redhat.com> References: <516EF607.2030506@redhat.com> Message-ID: <517B2621.3030606@redhat.com> On 04/17/2013 03:20 PM, Dave Neary wrote: > Hi everyone, > > The RDO community site is now live! > > RDO is Red Hat's community-supported distribution of OpenStack > for Red Hat Enterprise Linux and its clones, and for Fedora. The site is > now online at: > > http://openstack.redhat.com > > What we've announced is two things: > > * We are providing well integrated, easy to install packages of > OpenStack Grizzly for Red Hat Enterprise Linux 6.4, and equivalent > versions of CentOS, Scientific Linux, etc, and for Fedora 18. > * We have released a website at openstack.redhat.com to grow a community > of OpenStack users on Red Hat platforms > > If you are interested in trying out OpenStack Grizzly on RHEL, or other > Enterprise Linux distributions, then you are welcome to install it, join > our forums and share your experiences. For those who prefer mailing > lists to forums, we also have a mailing list, rdo-list: > https://www.redhat.com/mailman/listinfo/rdo-list > > What does this mean for Red Hat OpenStack users, and subscribers to > rhos-list? > > The short answer is that this adds a new option for you. If you would > like to install a community supported OpenStack Grizzly distribution on > Red Hat Enterprise Linux, CentOS or Scientific Linux in anticipation of > a future Red Hat supported Grizzly-based product, then RDO is a good > choice. If you are interested in deploying enterprise-hardened Folsom on > Red Hat Enterprise Linux, then Red Hat OpenStack early adopter Edition > is a great choice, and rhos-list is the best place to get help with that. > > You can read more about the RDO announcement at > http://www.redhat.com/about/news/press-archive/2013/4/red-hat-advances-its-openstack-enterprise-and-community-technologies-and-roadmap > > Thanks, > Dave. > Dave, There is a Keystone discussion and I am not able to respond to it: http://openstack.redhat.com/forum/discussion/54/grizzly-install-failed/p1 Is there some higher level permission I need? My username is admiyo and email ayoung at redhat.com . I'd like to be able to respond in forum, but I see at the bottom of the page "Commenting not allowed" From ayoung at redhat.com Sat Apr 27 01:15:19 2013 From: ayoung at redhat.com (Adam Young) Date: Fri, 26 Apr 2013 21:15:19 -0400 Subject: [rhos-list] Announcing RDO In-Reply-To: <517B2621.3030606@redhat.com> References: <516EF607.2030506@redhat.com> <517B2621.3030606@redhat.com> Message-ID: <517B26A7.80402@redhat.com> On 04/26/2013 09:13 PM, Adam Young wrote: > On 04/17/2013 03:20 PM, Dave Neary wrote: >> Hi everyone, >> >> The RDO community site is now live! >> >> RDO is Red Hat's community-supported distribution of OpenStack >> for Red Hat Enterprise Linux and its clones, and for Fedora. 
The site is >> now online at: >> >> http://openstack.redhat.com >> >> What we've announced is two things: >> >> * We are providing well integrated, easy to install packages of >> OpenStack Grizzly for Red Hat Enterprise Linux 6.4, and equivalent >> versions of CentOS, Scientific Linux, etc, and for Fedora 18. >> * We have released a website at openstack.redhat.com to grow a community >> of OpenStack users on Red Hat platforms >> >> If you are interested in trying out OpenStack Grizzly on RHEL, or other >> Enterprise Linux distributions, then you are welcome to install it, join >> our forums and share your experiences. For those who prefer mailing >> lists to forums, we also have a mailing list, rdo-list: >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> What does this mean for Red Hat OpenStack users, and subscribers to >> rhos-list? >> >> The short answer is that this adds a new option for you. If you would >> like to install a community supported OpenStack Grizzly distribution on >> Red Hat Enterprise Linux, CentOS or Scientific Linux in anticipation of >> a future Red Hat supported Grizzly-based product, then RDO is a good >> choice. If you are interested in deploying enterprise-hardened Folsom on >> Red Hat Enterprise Linux, then Red Hat OpenStack early adopter Edition >> is a great choice, and rhos-list is the best place to get help with >> that. >> >> You can read more about the RDO announcement at >> http://www.redhat.com/about/news/press-archive/2013/4/red-hat-advances-its-openstack-enterprise-and-community-technologies-and-roadmap >> >> >> Thanks, >> Dave. >> > Dave, > > There is a Keystone discussion and I am not able to respond to it: > http://openstack.redhat.com/forum/discussion/54/grizzly-install-failed/p1 > Is there some higher level permission I need? My username is admiyo > and email ayoung at redhat.com . I'd like to be able to respond in > forum, but I see at the bottom of the page "Commenting not allowed" Disregard. Once I responded to the verification email I could respond. > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From rich.minton at lmco.com Tue Apr 30 16:54:49 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Tue, 30 Apr 2013 16:54:49 +0000 Subject: [rhos-list] Quantum Metadata service Message-ID: Regarding Metadata and Openstack Networking (Quantum), is it necessary to have the L3-agent running in order to access metadata from a VM? Also, the Openstack Networking documentation says to add the following to nova.conf: firewall_driver = nova.virt.firewall.NoopFirewallDriver security_group_api = quantum service_quantum_metadata_proxy = true quantum_metadata_proxy_shared_secret = "password" network_api_class = nova.network.quantumv2.api.API Also, if quantum proxies calls to metadata, do I still need this line: enabled_apis=ec2,osapi_compute,metadata Basically do I need to add these to every compute node and is this all I need to get metadata service up and running? Thanks for the help. Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 -------------- next part -------------- An HTML attachment was scrubbed... URL:
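Taken together, the settings quoted above would land in nova.conf on the node running nova-api roughly as in the sketch below; the shared secret is a placeholder and has to match the secret configured for the Quantum metadata agent.

# nova.conf on the node running nova-api
network_api_class=nova.network.quantumv2.api.API
security_group_api=quantum
firewall_driver=nova.virt.firewall.NoopFirewallDriver
service_quantum_metadata_proxy=true
quantum_metadata_proxy_shared_secret=<shared-secret>
# the Quantum metadata proxy forwards requests to the Nova metadata
# service, so the metadata API generally still needs to stay enabled
enabled_apis=ec2,osapi_compute,metadata

Typically the network_api_class, security_group_api and firewall_driver entries are also needed on each compute node, while the two metadata proxy settings only matter where the Nova metadata API runs.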