From rohara at redhat.com Tue Apr 1 19:56:28 2014 From: rohara at redhat.com (Ryan O'Hara) Date: Tue, 1 Apr 2014 14:56:28 -0500 Subject: [Rdo-list] RDO and MariaDB+Galera Message-ID: <20140401195628.GB26241@redhat.com> I've been working on getting mariadb+galera working with RDO, and we now have packages built in COPR. You can get F20 packages here [1]. RHEL6 packages will be rebuilt as soon as I fix a couple of issues in the spec file. I've also submitted two new package reviews for inclusion in Fedora [2,3]. The packaging here is a bit tricky because of the conflicts with mariadb. As the packages exist today, most of the mariadb-galera subpackages will conflict with the mariadb subpackages. If you're so inclined, please take time to review the packages. As packages make their way into our repos, packstack will soon deploy mariadb-galera-server by default. For now, it will not deploy a galera cluster, so there should be no impact. For those that want to deploy a galera cluster, the process would simply be a matter of installing mariadb-galera-server on additional machines and adding appropriate cluster configuration. Ryan [1] http://copr.fedoraproject.org/coprs/rohara/mariadb-galera-fedora-20/ [2] https://bugzilla.redhat.com/show_bug.cgi?id=1083232 [3] https://bugzilla.redhat.com/show_bug.cgi?id=1083234 From morazi at redhat.com Tue Apr 1 21:53:59 2014 From: morazi at redhat.com (Mike Orazi) Date: Tue, 01 Apr 2014 17:53:59 -0400 Subject: [Rdo-list] Firewall issue/error when spawning instances on compute node In-Reply-To: References: , Message-ID: <533B3577.3010408@redhat.com> On 03/31/2014 07:29 AM, St. George, Allan L. wrote: > Sorry, not using Horizon. Currently just trying to get a reliable > OpenStack deployment via Foreman. > > What I am experiencing is neutron-dhcp-agent will occasionally stop > assigning addresses to spawning instances and I'll have to restart the > service. Nothing is noted in the log. > > - Allan This is something of a shot in the dark, but I wonder if you might be hitting an issue around the recently changed dhcp_lease_duration. It might be worth trying to apply the change: https://review.openstack.org/#/c/84150/ and seeing if that has any impact on the issue. Thanks, Mike From kimbyeonggi at gmail.com Wed Apr 2 01:51:47 2014 From: kimbyeonggi at gmail.com (BYEONG-GI KIM) Date: Wed, 2 Apr 2014 10:51:47 +0900 Subject: [Rdo-list] PackStack with multiple Compute Node. Message-ID: Hello. I installed OpenStack all-in-one using Packstack, and I'm now trying to add several compute nodes using different PCs. According to http://openstack.redhat.com/Adding_a_compute_node, the additional Compute Nodes seem to connect over a LAN to the PC on which the Packstack all-in-one installation was done; the Data Network and Management Network described in the OpenStack Havana 3-node (Network Node, Compute Node, and Controller Node) setup guide appear to share the same LAN cable. Conceptually, the two networks need to use different ports. Do the additional Compute Nodes use the LAN as a trunk port to distinguish between the networks? Or could you explain how the multi-node connectivity in PackStack works? Thanks in advance! Byeong-Gi Kim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pbrady at redhat.com Wed Apr 2 09:32:54 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Wed, 02 Apr 2014 10:32:54 +0100 Subject: [Rdo-list] RDO and MariaDB+Galera In-Reply-To: <20140401195628.GB26241@redhat.com> References: <20140401195628.GB26241@redhat.com> Message-ID: <533BD946.3080108@redhat.com> On 04/01/2014 08:56 PM, Ryan O'Hara wrote: > > I've been working on getting mariadb+galera working with RDO, and we > now have packages build in COPR. You can get F20 packages here [1]. > > RHEL6 packages will be rebuilt as soon as I fix a couple issues in the > spec file. > > I've also submitted two new package reviews for inclusion in > Fedora [2,3]. The packaging here is a bit tricky because of the > conflicts with mariadb. As the packages exist today, most of the > mariadb-galera subpackages will conflict with the mariadb > subpackages. If you're so inclined, please take time to review the > packages. > > As packages make there way into our repos, packstack will soon deploy > mariabdb-galera-server by default. For now, it will not deploy a > galera cluster, so there should be no impact. For those that want to > deploy a galera cluster, the process would simply be a matter of > installing mariabdb-galera-server on additional machines and adding > appropriate cluster configuration. > > Ryan > > [1] http://copr.fedoraproject.org/coprs/rohara/mariadb-galera-fedora-20/ > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1083232 > [3] https://bugzilla.redhat.com/show_bug.cgi?id=1083234 Thanks for the work and status. Can you request access to the required branches, and build here: https://copr.fedoraproject.org/coprs/jruzicka/ We're consolidating copr builds there for now, and we can automatically pull from there to RDO. thanks, P?draig. From amuller at redhat.com Wed Apr 2 10:37:41 2014 From: amuller at redhat.com (Assaf Muller) Date: Wed, 2 Apr 2014 06:37:41 -0400 (EDT) Subject: [Rdo-list] PackStack with multiple Compute Node. In-Reply-To: References: Message-ID: <994045663.8471039.1396435061264.JavaMail.zimbra@redhat.com> ----- Original Message ----- > Hello. > Hi! > I installed Packstack for Openstack allinone installation, and I'm now trying > to add several compute nodes using different PCs. > > According to http://openstack.redhat.com/Adding_a_compute_node , the > additional Compute Nodes seems to connect with the PC what allinone has been > installed by PackStack via a LAN; Data Network and Management Network > described in the OpenStack Havana 3 node (Network Node, Compute Node, and > Controller Node) setup guide look like sharing the LAN cable. > > Conceptually, two networks need to use different port. Do the additional > Compute Nodes use a LAN as a trunk port to distinguish between the networks? > Or could you explain how does the multiple connection in PackStack work? > The easiest configuration (from Packstack's point of view) would be to use two different physical networks, with one NIC connected to each. So, for example, you could have eth0 of all nodes connected to the management network, and eth1 connected to the VM / data network. Otherwise, you'd use two VLAN devices over your single NIC (You'd have to configure this, Packstack doesn't do this for you). Then place eth0.100, eth0.200 in your answer file where appropriate. If you get any failures, you could post your answer file and I'll take a look. > Thanks in advance! 
> > Byeong-Gi Kim > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > From amuller at redhat.com Wed Apr 2 12:13:41 2014 From: amuller at redhat.com (Assaf Muller) Date: Wed, 2 Apr 2014 08:13:41 -0400 (EDT) Subject: [Rdo-list] PackStack with multiple Compute Node. In-Reply-To: References: <994045663.8471039.1396435061264.JavaMail.zimbra@redhat.com> Message-ID: <442622805.8504672.1396440821038.JavaMail.zimbra@redhat.com> ----- Original Message ----- > Thank you for the reply. > I forgot to ask you, what are you using for your tenant networks? VLANs, flat, GRE or VXLAN? > But I'd wanna know something more detail; I mean, how could the network be > constructed? > > Even if we use two different physical networks, the configuration seems not > easy to deploy. > > which files should be modified or configured at allinone PackStack? > You'd only have to change the Packstack answer file. You should run packstack --gen-answer-file=ans 'ans' will contain an answer file with all-in-one default values. You should add all of the compute nodes to the compute node list (Providing their management IP). If you're using VLANs then fill in the bridge mappings keys (You'll put in the second NIC there), or if you're using tunnels then specify the 2nd NIC as the tunnel endpoint. The RDO docs have more specifics. > Anyway, I really appreciate your kind. > > Best regards, > > Byeong-Gi Kim > > > 2014-04-02 19:37 GMT+09:00 Assaf Muller : > > > ----- Original Message ----- > > > Hello. > > > > > > > Hi! > > > > > I installed Packstack for Openstack allinone installation, and I'm now > > trying > > > to add several compute nodes using different PCs. > > > > > > According to http://openstack.redhat.com/Adding_a_compute_node , the > > > additional Compute Nodes seems to connect with the PC what allinone has > > been > > > installed by PackStack via a LAN; Data Network and Management Network > > > described in the OpenStack Havana 3 node (Network Node, Compute Node, and > > > Controller Node) setup guide look like sharing the LAN cable. > > > > > > Conceptually, two networks need to use different port. Do the additional > > > Compute Nodes use a LAN as a trunk port to distinguish between the > > networks? > > > Or could you explain how does the multiple connection in PackStack work? > > > > > > > The easiest configuration (from Packstack's point of view) would be to use > > two > > different physical networks, with one NIC connected to each. So, for > > example, > > you could have eth0 of all nodes connected to the management network, and > > eth1 > > connected to the VM / data network. > > > > Otherwise, you'd use two VLAN devices over your single NIC > > (You'd have to configure this, Packstack doesn't do this for you). > > Then place eth0.100, eth0.200 in your answer file where appropriate. > > > > If you get any failures, you could post your answer file and I'll take a > > look. > > > > > Thanks in advance! 
> > > > > > Byeong-Gi Kim > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > From rohara at redhat.com Wed Apr 2 13:28:15 2014 From: rohara at redhat.com (Ryan O'Hara) Date: Wed, 2 Apr 2014 08:28:15 -0500 Subject: [Rdo-list] RDO and MariaDB+Galera In-Reply-To: <533BD946.3080108@redhat.com> References: <20140401195628.GB26241@redhat.com> <533BD946.3080108@redhat.com> Message-ID: <20140402132815.GB28298@redhat.com> On Wed, Apr 02, 2014 at 10:32:54AM +0100, P?draig Brady wrote: > On 04/01/2014 08:56 PM, Ryan O'Hara wrote: > > > > I've been working on getting mariadb+galera working with RDO, and we > > now have packages build in COPR. You can get F20 packages here [1]. > > > > RHEL6 packages will be rebuilt as soon as I fix a couple issues in the > > spec file. > > > > I've also submitted two new package reviews for inclusion in > > Fedora [2,3]. The packaging here is a bit tricky because of the > > conflicts with mariadb. As the packages exist today, most of the > > mariadb-galera subpackages will conflict with the mariadb > > subpackages. If you're so inclined, please take time to review the > > packages. > > > > As packages make there way into our repos, packstack will soon deploy > > mariabdb-galera-server by default. For now, it will not deploy a > > galera cluster, so there should be no impact. For those that want to > > deploy a galera cluster, the process would simply be a matter of > > installing mariabdb-galera-server on additional machines and adding > > appropriate cluster configuration. > > > > Ryan > > > > [1] http://copr.fedoraproject.org/coprs/rohara/mariadb-galera-fedora-20/ > > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1083232 > > [3] https://bugzilla.redhat.com/show_bug.cgi?id=1083234 > > Thanks for the work and status. > Can you request access to the required branches, and build here: > https://copr.fedoraproject.org/coprs/jruzicka/ > We're consolidating copr builds there for now, > and we can automatically pull from there to RDO. I have access to the branches. I did a build in jruzicka's copr branches last week, but I need to update the build. Fixed a bug or two since then. This is top priority today, since I'm going to also push the change to o-p-m that makes mariadb-galera-server the default database server. Ryan From mtaylor at redhat.com Wed Apr 2 13:28:57 2014 From: mtaylor at redhat.com (Martyn Taylor) Date: Wed, 02 Apr 2014 14:28:57 +0100 Subject: [Rdo-list] [OFI] Speeding up OFI provisioning in Dev env Message-ID: <533C1099.9050208@redhat.com> If you are like me and you have a dog slow connection and you are wanting to test orchestration on using Staypuft, it can take forever and a day. The main issue for me was that the network installs were taking hours, if you are using Fedora etc... you can quite easily use a proxy like Squid to cache everything. However, I am using RHEL which is usually behind SSL. It's not impossible to get round this with squid, but you'll need to play around with man in the middle stuff. One quick and easy way to speed up network install for RHEL and any other distro, is just to download the DVD iso, mount it, then service it up via apache. You'll still need to do some public traffic as you'lll need to download all the OpenStack modules etc... but this has taken provision time of a host for me from about 90 mins to about 15mins. Since we are provisioning sequentially that makes a massive difference. 
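The numbered steps below spell this out one piece at a time. As a rough consolidated sketch of the same thing (httpd is assumed to be installed already, the ISO filename is illustrative, and the <VirtualHost>/<Directory> wrapper in step 2 is an assumption, since the angle-bracketed lines were lost when this mail was converted from HTML), it boils down to:

# 1. point a local name at the box that will serve the repo
echo "192.168.100.1  mirrors.local" >> /etc/hosts

# 2. minimal vhost exposing the repo directory, with indexes enabled
cat > /etc/httpd/conf.d/06-local-mirrors.conf <<'EOF'
<VirtualHost *:80>
    ServerName mirrors.local
    DocumentRoot /var/www/html/repos
    <Directory /var/www/html/repos>
        Options +Indexes
    </Directory>
</VirtualHost>
EOF

# 3. mount the install DVD (ISO path is a placeholder for whatever you downloaded)
mkdir -p /mnt/rhel-iso
mount -o loop /path/to/rhel-dvd.iso /mnt/rhel-iso

# 4. expose the mounted ISO under the document root; the link name
#    becomes the path used in the install media URL below
mkdir -p /var/www/html/repos
ln -s /mnt/rhel-iso /var/www/html/repos/rhel

# 5. pick up the new vhost and sanity-check it
service httpd restart
curl -s http://mirrors.local/rhel/ | head

http://mirrors.local/rhel is then the URL to give Foreman as the installation media path.
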
Here are the steps involved: *1. Add a entry in /etc/hosts for your virtual host:* 192.168.100.1 mirrors.local *2. create a new virtual host file, /etc/httpd/conf.d/06-local-mirrors.conf* ServerName mirrors.local DocumentRoot /var/www/html/repos Options +Indexes *3. Download the your ISO of choice and mount it somewhere:* mkdir /mnt/rhel-iso mount -o loop /mnt/rhel-iso * **4. Create symlink to the ISO* mkdir /var/www/html/repos ln -s /mnt/rhel-iso /var/www/html/repos *5. Add your new install media in foreman as normal, adding the URL to your ISO* path: http://mirrors.local/rhel -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Fri Apr 4 20:16:38 2014 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 04 Apr 2014 16:16:38 -0400 Subject: [Rdo-list] [Rdo-newsletter] RDO Newsletter, April 2014 Message-ID: <533F1326.2060309@redhat.com> Happy Birthday RDO! Happy Birthday RDO! It's been a year since we launched the RDO project, in an attempt to make installing OpenStack on CentOS, Fedora, and Red Hat Enterprise Linux a more pleasant experience. In that year we've grown from nothing to over 2000 downloads a day, and our community has grown to thousands across mailing lists, IRC channel, Twitter, Google+, and ask.openstack.org. You can see some of the statistics around the RDO effort at http://openstack.redhat.com/stats/ compiled by our friends at Bitergia. Thank you all for your participation in this effort, and for your hard work in making it happen! Thanks for being part of the RDO community! Please Take the OpenStack Community Survey Once again, the OpenStack Foundation is conducting a survey of OpenStack users. Please take ten minutes to fill out the survey, so that we can get a clearer picture of who is using OpenStack, what they're using it for, and how they'll be using it in the coming year. Start the survey at https://www.openstack.org/user-survey/Login Hangouts Last week, Flavio Percoco gave us a great introduction to OpenStack Marconi. Marconi is a project to develop a messaging and notification service, which is undergoing OpenStack incubation. If you missed it, you can watch at https://www.youtube.com/watch?v=qe7qI3WY97c If you have followup questions, you can take them to the #openstack-marconi IRC channel on the Freenode network. You can see other past hangouts at http://openstack.redhat.com/Hangouts In April we'll be doing another hangout. This time the Heat team will be telling us about how you can use Heat to orchestrate your cloud. Watch http://openstack.redhat.com/Events for more details as they become available. April and May events April and May are exciting months for OpenStack, as we anticipate the OpenStack Icehouse release on April 17th, and the OpenStack Juno Summit starting May 12 in Atlanta - https://www.openstack.org/summit/openstack-summit-atlanta-2014/ But that's not all that's going on. In the RDO community we've been following the CentOS Cloud SIG effort closely. We see a lot of people deploying production OpenStack clouds with RDO on CentOS and Scientific Linux, and so we are excited to see the CentOS developer community working towards a strong Cloud ecosystem. CentOS will be holding a dojo (community event) at the CloudStack Collaboration Conference (CCC) in Denver, Colorado, on April 10th. This event will include discussion of cloud deployments on CentOS. If you're attending either CCC or ApacheCon (http://na.apachecon.com/ ) consider staying for the CentOS dojo. 
See the schedule of events at http://cloudstackcollabconference2014.sched.org/overview/type/centos+dojo#.UzHcsnWx2Kt and register at https://www.regonline.com/Register/Checkin.aspx?EventID=1499698 The following week, April 14-17, the Red Hat Summit will be held in San Francisco. In addition to content about Red Hat Enterprise Linux and other Red Hat products, there will be a substantial number of OpenStack and RDO talks, which we've listed in the blog post at http://openstack.redhat.com/forum/discussion/970/red-hat-summit-april -14th-17th-san-francisco OpenStack is currently in Feature Freeze. Thierry has a great explanation at http://fnords.wordpress.com/2014/03/06/why-we-do-feature-freeze/ of why we do this. And on April 17th, we expect OpenStack Icehouse to be released. Then, two weeks later, we'll be in Atlanta for the OpenStack Summit. It's the best place to learn about OpenStack, from installation to monitoring, from community to code. And RDO will be there. We'll have a booth where we'll be doing demos of various aspects of OpenStack using RDO, and we'll have engineers there presenting a wide range of presentations and hands-on labs - you can find those listed at http://community.redhat.com/events/#openstacksummitus If you're at Summit, please drop by the RDO booth to tell us your OpenStack stories. For those of you on the opposite side of the planet, LinuxCon Tokyo will be held May 20-22, and RDO will have a booth presence there, too, with several people from the RDO community in attendance. You can see more information about that event at http://events.linuxfoundation.org/events/linuxcon-japan although as of this writing the schedule has not yet been published. Stay in touch There's lots of ways to stay in touch with what's going on at RDO. For question-and-answer, we rely on http://ask.openstack.org/ Just be sure to tag your question as #rdo if you want the attention of the RDO engineers. ask.openstack.org gives you access to the best minds in the OpenStack community, as well as an archive of previous questions and answers that you can search. If you're more of a mailing list fan, the rdo-list mailing list is the place to be. It's fairly low volume - 3-5 messages a day - and a great place to get personal attention to your questions. And if you just want to get periodic updates on what's going on, we're on Twitter at @RDOCommunity. And don't forget to look at http://openstack.redhat.com/Events to see where RDO people will be speaking around the world. -- Rich Bowen, for the RDO Community Follow us on Twitter at @RDOCommunity Manage your newsletter subscription at http://www.redhat.com/mailman/listinfo/rdo-newsletter See also the rdo-list mailing list at http://www.redhat.com/mailman/listinfo/rdo-list _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From ak at cloudssky.com Sun Apr 6 20:05:20 2014 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Sun, 6 Apr 2014 22:05:20 +0200 Subject: [Rdo-list] CoreOS Cluster on OpenStack Message-ID: Hi, I posted this question on ask.openstack.org: https://ask.openstack.org/en/question/26649/coreos-cluster-on-openstack/ And would like to know if someone has any experience about this. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kchamart at redhat.com Tue Apr 8 05:57:07 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 8 Apr 2014 11:27:07 +0530 Subject: [Rdo-list] CoreOS Cluster on OpenStack In-Reply-To: References: Message-ID: <20140408055707.GB1996@tesla> On Sun, Apr 06, 2014 at 10:05:20PM +0200, Arash Kaffamanesh wrote: > Hi, > > I posted this question on ask.openstack.org: > > https://ask.openstack.org/en/question/26649/coreos-cluster-on-openstack/ > > And would like to know if someone has any experience about this. Seems like it's answered there with some URLs. AIUI, CoreOS is essentially Kernel+systemd+etcd (for clustering). There's a similar project in RPM-land the Fedora Atomic Initiative[1] [1] http://rpm-ostree.cloud.fedoraproject.org/#/ -- /kashyap From kchamart at redhat.com Tue Apr 8 06:27:10 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 8 Apr 2014 11:57:10 +0530 Subject: [Rdo-list] openstack.redhat.com down? Message-ID: <20140408062710.GC1996@tesla> Heya, Seems like openstack.redhat.com is down. I see 100% packet loss: $ ping -c4 openstack.redhat.com PING openstack.redhat.com (54.225.229.160) 56(84) bytes of data. --- openstack.redhat.com ping statistics --- 4 packets transmitted, 0 received, 100% packet loss, time 2999ms Anyone else see this too? -- /kashyap From david at zeromail.us Tue Apr 8 06:31:09 2014 From: david at zeromail.us (David S.) Date: Tue, 8 Apr 2014 13:31:09 +0700 Subject: [Rdo-list] openstack.redhat.com down? In-Reply-To: <20140408062710.GC1996@tesla> References: <20140408062710.GC1996@tesla> Message-ID: Hi, Im able to access rdo page without problem. Try to run traceroute from your host. On Apr 8, 2014 1:27 PM, "Kashyap Chamarthy" wrote: > Heya, > > Seems like openstack.redhat.com is down. I see 100% packet loss: > > $ ping -c4 openstack.redhat.com > PING openstack.redhat.com (54.225.229.160) 56(84) bytes of data. > > --- openstack.redhat.com ping statistics --- > 4 packets transmitted, 0 received, 100% packet loss, time 2999ms > > > Anyone else see this too? > > -- > /kashyap > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pbrady at redhat.com Tue Apr 8 10:17:20 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Tue, 08 Apr 2014 11:17:20 +0100 Subject: [Rdo-list] openstack.redhat.com down? In-Reply-To: <20140408062710.GC1996@tesla> References: <20140408062710.GC1996@tesla> Message-ID: <5343CCB0.3030506@redhat.com> On 04/08/2014 07:27 AM, Kashyap Chamarthy wrote: > Heya, > > Seems like openstack.redhat.com is down. I see 100% packet loss: > > $ ping -c4 openstack.redhat.com > PING openstack.redhat.com (54.225.229.160) 56(84) bytes of data. > > --- openstack.redhat.com ping statistics --- > 4 packets transmitted, 0 received, 100% packet loss, time 2999ms > > > Anyone else see this too? > Back now. It was being updated with security fixes. From kchamart at redhat.com Tue Apr 8 13:12:25 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 8 Apr 2014 09:12:25 -0400 (EDT) Subject: [Rdo-list] openstack.redhat.com down? In-Reply-To: <5343CCB0.3030506@redhat.com> References: <20140408062710.GC1996@tesla> <5343CCB0.3030506@redhat.com> Message-ID: <1656873357.1179398.1396962745990.JavaMail.zimbra@redhat.com> > On 04/08/2014 07:27 AM, Kashyap Chamarthy wrote: > > Heya, > > > > Seems like openstack.redhat.com is down. 
I see 100% packet loss: > > > > $ ping -c4 openstack.redhat.com > > PING openstack.redhat.com (54.225.229.160) 56(84) bytes of data. > > > > --- openstack.redhat.com ping statistics --- > > 4 packets transmitted, 0 received, 100% packet loss, time 2999ms > > > > > > Anyone else see this too? > > > > Back now. Hmm, seems like it's still not accessible to me: $ ping -c4 openstack.redhat.com PING openstack.redhat.com (54.225.229.160) 56(84) bytes of data. --- openstack.redhat.com ping statistics --- 4 packets transmitted, 0 received, 100% packet loss, time 2999ms $ date Tue Apr 8 18:37:55 IST 2014 > It was being updated with security fixes. I presume the recent OpenSSL security fixes. /kashyap From jeckersb at redhat.com Tue Apr 8 21:42:08 2014 From: jeckersb at redhat.com (John Eckersberg) Date: Tue, 08 Apr 2014 17:42:08 -0400 Subject: [Rdo-list] Puppet glusterers unite Message-ID: <87fvlnebwv.fsf@redhat.com> Greetings, For those of you in the To: line, I believe you are all doing something with gluster and puppet at the moment. For anyone else on rdo-list that might be interested, jump in :) Primarily I want to get everyone talking to make sure we don't step on each other's toes. I know James has done some great work with the puppet-gluster module, and Gilles is currently working to switch off of the now-deprecated puppet-openstack-storage module and onto puppet-gluster. Crag, Ji??, and myself are working gluster-related bugs. So let's keep in touch. I'm working to configure libgfapi support on nova compute nodes. In the old gluster module, there was a gluster::client class that just installed the required glusterfs-fuse package. This class is used by astapor in a few places (compute/cinder/glance). However there's no gluster::client class in the new module, so we'll need to remedy that somehow. There is a class, gluster::mount::base, that ensures the packages are installed, and that class is used by each instance of gluster::mount. I'd like to reuse some of this, but I don't think we need all of it on the compute nodes (really we just need to install glusterfs-api). The simple way would be to create a new class glusterfs::apiclient that just installs the package, and include that for the nova compute case. However I'm concerned with the other places we were previously using gluster::client. Can we use the new gluster::mount define to replace all of these instances? Or are we going to need to refactor in those places as well? I'd like to have some idea where this is all going before I start ripping it apart. Thoughts? -John From rbowen at redhat.com Tue Apr 8 22:10:32 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 08 Apr 2014 16:10:32 -0600 Subject: [Rdo-list] openstack.redhat.com down? In-Reply-To: <1656873357.1179398.1396962745990.JavaMail.zimbra@redhat.com> References: <20140408062710.GC1996@tesla> <5343CCB0.3030506@redhat.com> <1656873357.1179398.1396962745990.JavaMail.zimbra@redhat.com> Message-ID: <534473D8.3090006@redhat.com> On 04/08/2014 07:12 AM, Kashyap Chamarthy wrote: >> On 04/08/2014 07:27 AM, Kashyap Chamarthy wrote: >>> Heya, >>> >>> Seems like openstack.redhat.com is down. I see 100% packet loss: >>> >>> $ ping -c4 openstack.redhat.com >>> PING openstack.redhat.com (54.225.229.160) 56(84) bytes of data. >>> >>> --- openstack.redhat.com ping statistics --- >>> 4 packets transmitted, 0 received, 100% packet loss, time 2999ms >>> >>> >>> Anyone else see this too? >>> >> Back now. 
> Hmm, seems like it's still not accessible to me: > > $ ping -c4 openstack.redhat.com > PING openstack.redhat.com (54.225.229.160) 56(84) bytes of data. > > --- openstack.redhat.com ping statistics --- > 4 packets transmitted, 0 received, 100% packet loss, time 2999ms > > $ date > Tue Apr 8 18:37:55 IST 2014 > >> It was being updated with security fixes. > I presume the recent OpenSSL security fixes. Are you still seeing it as down? It's fine for me. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From jshubin at redhat.com Tue Apr 8 22:14:39 2014 From: jshubin at redhat.com (James Shubin) Date: Tue, 08 Apr 2014 18:14:39 -0400 Subject: [Rdo-list] Puppet glusterers unite In-Reply-To: <87fvlnebwv.fsf@redhat.com> References: <87fvlnebwv.fsf@redhat.com> Message-ID: <1396995279.4190.57.camel@freed> On Tue, 2014-04-08 at 17:42 -0400, John Eckersberg wrote: > Greetings, > > For those of you in the To: line, I believe you are all doing something > with gluster and puppet at the moment. For anyone else on rdo-list that > might be interested, jump in :) I'm not on rdo-list, so be sure to cc me if you want me to see the messages :) > > Primarily I want to get everyone talking to make sure we don't step on > each other's toes. I know James has done some great work with the > puppet-gluster module, Thanks! For reference, the upstream project is: https://github.com/purpleidea/puppet-gluster I've got a lot of articles and background here: https://ttboj.wordpress.com/ > and Gilles is currently working to switch off of > the now-deprecated puppet-openstack-storage module and onto > puppet-gluster. Crag, Ji??, and myself are working gluster-related > bugs. So let's keep in touch. > > I'm working to configure libgfapi support on nova compute nodes. In the > old gluster module, there was a gluster::client class that just > installed the required glusterfs-fuse package. This class is used by > astapor in a few places (compute/cinder/glance). However there's no > gluster::client class in the new module, so we'll need to remedy that > somehow. I think you've got the right idea in the next paragraph. I'll add some more info... > > There is a class, gluster::mount::base, that ensures the packages are > installed, and that class is used by each instance of gluster::mount. > I'd like to reuse some of this, but I don't think we need all of it on > the compute nodes (really we just need to install glusterfs-api). The > simple way would be to create a new class glusterfs::apiclient that just > installs the package, and include that for the nova compute case. Can you detail what functionality is missing and needed? Please make sure to state if this is relevant to upstream (GlusterFS) or just downstream (RHS/RHEL OSP, etc...) Is it just to install one package or is there anything else? > However I'm concerned with the other places we were previously using > gluster::client. Can we use the new gluster::mount define to replace > all of these instances? Or are we going to need to refactor in those > places as well? I'd like to have some idea where this is all going > before I start ripping it apart. Okay here's the info you're probably looking for: gluster::mount is the thing you probably want. This is probably equivalent to what you might call gluster::client (although I don't know what the old gluster::client does.) It pulls in the gluster::mount::base (which you mentioned above) which has the dependencies. gluster::mount is a type (see the DOCUMENTATION) file. 
If this is missing any features found in your gluster::client let me know please! As for gluster::client (where is that anyways?) gluster::client doesn't officially exist upstream yet. It's something that I was mid-hack on when RedHat hired me. Basically it does "advanced client mounting magic". I doubt you need this for anything yet, but it will be a cool feature when it comes out. The reason it's "advanced" is because puppet-gluster (besides being a fully working, awesome way to do glusterfs) is also a bit of a research module for me that I use to demonstrate some new and advanced puppet concepts. > > Thoughts? If you have any feature requests, bugs, or complaints, please let me know! One HUGE caveat: RedHat doesn't currently seem to have a build of PuppetDB. See: https://bugzilla.redhat.com/show_bug.cgi?id=1068867 MANY fancy current and future features of puppet-gluster (and many other puppet modules in the world) need some way to do "exported resources". If you have any resources or magic spells to help solve this problem, please help out! Basically the problem is dependency hell to get everything needed to build puppetdb into Fedora. (This is a repeat of previously discussed information for some people in the cc. Any talk about this please put it into BZ and not into an email, so we keep track of all comments.) > > -John > HTH, James From gareth at openstacker.org Wed Apr 9 05:53:12 2014 From: gareth at openstacker.org (Kun Huang) Date: Wed, 9 Apr 2014 13:53:12 +0800 Subject: [Rdo-list] RDO on multiple nodes Message-ID: Hi, Could RDO deploy OpenStack on multiple nodes now? If not, what's the current precess? Thanks Gareth -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at redhat.com Wed Apr 9 05:58:21 2014 From: mrunge at redhat.com (Matthias Runge) Date: Wed, 09 Apr 2014 07:58:21 +0200 Subject: [Rdo-list] RDO on multiple nodes In-Reply-To: References: Message-ID: <5344E17D.3080300@redhat.com> On 04/09/2014 07:53 AM, Kun Huang wrote: > Hi, > > Could RDO deploy OpenStack on multiple nodes now? If not, what's the > current precess? > Hey, yes, that's possible. You'll find docs about this on [1]. Best, Matthias [1] http://openstack.redhat.com/Install From cwolfe at redhat.com Wed Apr 9 07:29:51 2014 From: cwolfe at redhat.com (Crag Wolfe) Date: Wed, 09 Apr 2014 00:29:51 -0700 Subject: [Rdo-list] Puppet glusterers unite In-Reply-To: <87fvlnebwv.fsf@redhat.com> References: <87fvlnebwv.fsf@redhat.com> Message-ID: <5344F6EF.90200@redhat.com> On 04/08/2014 02:42 PM, John Eckersberg wrote: > Greetings, > > For those of you in the To: line, I believe you are all doing something > with gluster and puppet at the moment. For anyone else on rdo-list that > might be interested, jump in :) > > Primarily I want to get everyone talking to make sure we don't step on > each other's toes. I know James has done some great work with the > puppet-gluster module, and Gilles is currently working to switch off of > the now-deprecated puppet-openstack-storage module and onto > puppet-gluster. Crag, Ji??, and myself are working gluster-related > bugs. So let's keep in touch. > > I'm working to configure libgfapi support on nova compute nodes. In the > old gluster module, there was a gluster::client class that just > installed the required glusterfs-fuse package. This class is used by > astapor in a few places (compute/cinder/glance). However there's no > gluster::client class in the new module, so we'll need to remedy that > somehow. 
> > There is a class, gluster::mount::base, that ensures the packages are > installed, and that class is used by each instance of gluster::mount. > I'd like to reuse some of this, but I don't think we need all of it on > the compute nodes (really we just need to install glusterfs-api). The > simple way would be to create a new class glusterfs::apiclient that just > installs the package, and include that for the nova compute case. > However I'm concerned with the other places we were previously using > gluster::client. Can we use the new gluster::mount define to replace > all of these instances? Or are we going to need to refactor in those > places as well? I'd like to have some idea where this is all going > before I start ripping it apart. > > Thoughts? > > -John > [Also CC'ing Steve and Jacob who worked a bit with gluster / foreman in recent history] In the context of the HA-all-in-one-controller host group, I believe we just would need to include the gluster::mount::base class so that we are capable of mounting glusterfs volumes. Pacemaker would be responsible for mounting the shared storage, and would do that the way Steve illustrated here: https://bugzilla.redhat.com/show_bug.cgi?id=1064050#c4 Not that that helps clarify any of your above questions. :-) --Crag From jistr at redhat.com Wed Apr 9 08:54:38 2014 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Wed, 09 Apr 2014 10:54:38 +0200 Subject: [Rdo-list] Puppet glusterers unite In-Reply-To: <5344F6EF.90200@redhat.com> References: <87fvlnebwv.fsf@redhat.com> <5344F6EF.90200@redhat.com> Message-ID: <53450ACE.8000808@redhat.com> On 9.4.2014 09:29, Crag Wolfe wrote: > On 04/08/2014 02:42 PM, John Eckersberg wrote: >> Greetings, >> >> For those of you in the To: line, I believe you are all doing something >> with gluster and puppet at the moment. For anyone else on rdo-list that >> might be interested, jump in :) >> >> Primarily I want to get everyone talking to make sure we don't step on >> each other's toes. I know James has done some great work with the >> puppet-gluster module, and Gilles is currently working to switch off of >> the now-deprecated puppet-openstack-storage module and onto >> puppet-gluster. Crag, Ji??, and myself are working gluster-related >> bugs. So let's keep in touch. >> >> I'm working to configure libgfapi support on nova compute nodes. In the >> old gluster module, there was a gluster::client class that just >> installed the required glusterfs-fuse package. This class is used by >> astapor in a few places (compute/cinder/glance). However there's no >> gluster::client class in the new module, so we'll need to remedy that >> somehow. >> >> There is a class, gluster::mount::base, that ensures the packages are >> installed, and that class is used by each instance of gluster::mount. >> I'd like to reuse some of this, but I don't think we need all of it on >> the compute nodes (really we just need to install glusterfs-api). The >> simple way would be to create a new class glusterfs::apiclient that just >> installs the package, and include that for the nova compute case. >> However I'm concerned with the other places we were previously using >> gluster::client. Can we use the new gluster::mount define to replace >> all of these instances? Or are we going to need to refactor in those >> places as well? I'd like to have some idea where this is all going >> before I start ripping it apart. >> >> Thoughts? 
>> >> -John >> > [Also CC'ing Steve and Jacob who worked a bit with gluster / foreman in > recent history] > > In the context of the HA-all-in-one-controller host group, I believe we > just would need to include the gluster::mount::base class so that we are > capable of mounting glusterfs volumes. Pacemaker would be responsible > for mounting the shared storage, and would do that the way Steve > illustrated here: > https://bugzilla.redhat.com/show_bug.cgi?id=1064050#c4 > > Not that that helps clarify any of your above questions. :-) > > --Crag > Seems like we wouldn't use Pacemaker for mounting GlusterFS for use with Cinder. (The BZ above is about Glance, which might behave differently.) The info i was able to dig up suggests that the GlusterFS driver for Cinder expects to do the mounting by itself [1,2]. But i guess we'd still need gluster::mount::base. Jirka [1] http://www.gluster.org/community/documentation/index.php/GlusterFS_Cinder [2] https://github.com/stackforge/puppet-cinder/blob/164163a7a267ae4139e2d97bab1a385a6da2ac5f/manifests/volume/glusterfs.pp#L31-L33 From jeckersb at redhat.com Wed Apr 9 15:45:22 2014 From: jeckersb at redhat.com (John Eckersberg) Date: Wed, 09 Apr 2014 11:45:22 -0400 Subject: [Rdo-list] Puppet glusterers unite In-Reply-To: <1396995279.4190.57.camel@freed> References: <87fvlnebwv.fsf@redhat.com> <1396995279.4190.57.camel@freed> Message-ID: <87sipmv759.fsf@redhat.com> James Shubin writes: > On Tue, 2014-04-08 at 17:42 -0400, John Eckersberg wrote: >> I'm working to configure libgfapi support on nova compute nodes. In the >> old gluster module, there was a gluster::client class that just >> installed the required glusterfs-fuse package. This class is used by >> astapor in a few places (compute/cinder/glance). However there's no >> gluster::client class in the new module, so we'll need to remedy that >> somehow. > I think you've got the right idea in the next paragraph. I'll add some > more info... > >> >> There is a class, gluster::mount::base, that ensures the packages are >> installed, and that class is used by each instance of gluster::mount. >> I'd like to reuse some of this, but I don't think we need all of it on >> the compute nodes (really we just need to install glusterfs-api). The >> simple way would be to create a new class glusterfs::apiclient that just >> installs the package, and include that for the nova compute case. > Can you detail what functionality is missing and needed? Please make > sure to state if this is relevant to upstream (GlusterFS) or just > downstream (RHS/RHEL OSP, etc...) Is it just to install one package or > is there anything else? Focusing just on this bit for now... Libvirt supports using gluster as a storage backend. It does this by calling libgfapi to interface with gluster directly. There is no fuse layer or filesystem mounts needed. If nova is configured with qemu_allowed_storage_backend=gluster, then it will generate the necessary libvirt xml to make this happen magically at launch time. The only prerequisite is for libgfapi to be available on the compute nodes (and for libvirt to have been build with gluster support enabled). The glusterfs-api package is what provides the required library. So, I just need a class to include which provides installation of the glusterfs-api package. I was going to just add it as a normal package resource on my end, but I'm guessing this is something other folks might want to take advantage of, and the puppet-gluster module seems like the correct place to put it. 
It's not particular to the downstream case; anyone who might want to use libgfapi as a client would want a canonical way to set it up. I'll submit a pull request with a new glusterfs::apiclient class that does nothing else but install the glusterfs-api package, and we can work from there. Let me know if you'd prefer another name. Maybe something like gluster::client::api, if you anticipate having other stuff under a gluster::client namespace? John From jshubin at redhat.com Wed Apr 9 16:17:01 2014 From: jshubin at redhat.com (James Shubin) Date: Wed, 09 Apr 2014 12:17:01 -0400 Subject: [Rdo-list] Puppet glusterers unite In-Reply-To: <87sipmv759.fsf@redhat.com> References: <87fvlnebwv.fsf@redhat.com> <1396995279.4190.57.camel@freed> <87sipmv759.fsf@redhat.com> Message-ID: <1397060221.4190.104.camel@freed> On Wed, 2014-04-09 at 11:45 -0400, John Eckersberg wrote: > James Shubin writes: > > On Tue, 2014-04-08 at 17:42 -0400, John Eckersberg wrote: > >> I'm working to configure libgfapi support on nova compute nodes. In the > >> old gluster module, there was a gluster::client class that just > >> installed the required glusterfs-fuse package. This class is used by > >> astapor in a few places (compute/cinder/glance). However there's no > >> gluster::client class in the new module, so we'll need to remedy that > >> somehow. > > I think you've got the right idea in the next paragraph. I'll add some > > more info... > > > >> > >> There is a class, gluster::mount::base, that ensures the packages are > >> installed, and that class is used by each instance of gluster::mount. > >> I'd like to reuse some of this, but I don't think we need all of it on > >> the compute nodes (really we just need to install glusterfs-api). The > >> simple way would be to create a new class glusterfs::apiclient that just > >> installs the package, and include that for the nova compute case. > > Can you detail what functionality is missing and needed? Please make > > sure to state if this is relevant to upstream (GlusterFS) or just > > downstream (RHS/RHEL OSP, etc...) Is it just to install one package or > > is there anything else? > > Focusing just on this bit for now... > > Libvirt supports using gluster as a storage backend. It does this by > calling libgfapi to interface with gluster directly. There is no fuse > layer or filesystem mounts needed. > > If nova is configured with qemu_allowed_storage_backend=gluster, then it > will generate the necessary libvirt xml to make this happen magically at > launch time. The only prerequisite is for libgfapi to be available on > the compute nodes (and for libvirt to have been build with gluster > support enabled). The glusterfs-api package is what provides the > required library. Cool. > > So, I just need a class to include which provides installation of the > glusterfs-api package. I was going to just add it as a normal package > resource on my end, but I'm guessing this is something other folks might > want to take advantage of, and the puppet-gluster module seems like the > correct place to put it. It's not particular to the downstream case; > anyone who might want to use libgfapi as a client would want a canonical > way to set it up. Sounds like a good thing to add. > > I'll submit a pull request with a new glusterfs::apiclient class that > does nothing else but install the glusterfs-api package, and we can work > from there. Let me know if you'd prefer another name. 
Maybe something > like gluster::client::api, if you anticipate having other stuff under a > gluster::client namespace? Sounds good, except it's a bit tricky because of the versioning, so I hacked it together for you: https://github.com/purpleidea/puppet-gluster/tree/feat/libgfapi To involve you more, I didn't even test it :) Please test and let me know. If it meets your needs, I'll merge it into master. HTH James > > John From jeckersb at redhat.com Wed Apr 9 19:24:16 2014 From: jeckersb at redhat.com (John Eckersberg) Date: Wed, 09 Apr 2014 15:24:16 -0400 Subject: [Rdo-list] Puppet glusterers unite In-Reply-To: <1397060221.4190.104.camel@freed> References: <87fvlnebwv.fsf@redhat.com> <1396995279.4190.57.camel@freed> <87sipmv759.fsf@redhat.com> <1397060221.4190.104.camel@freed> Message-ID: <87ppkqux0f.fsf@redhat.com> James Shubin writes: > On Wed, 2014-04-09 at 11:45 -0400, John Eckersberg wrote: >> I'll submit a pull request with a new glusterfs::apiclient class that >> does nothing else but install the glusterfs-api package, and we can work >> from there. Let me know if you'd prefer another name. Maybe something >> like gluster::client::api, if you anticipate having other stuff under a >> gluster::client namespace? > Sounds good, except it's a bit tricky because of the versioning, so I > hacked it together for you: > > https://github.com/purpleidea/puppet-gluster/tree/feat/libgfapi > > To involve you more, I didn't even test it :) Please test and let me > know. If it meets your needs, I'll merge it into master. Works for me! I didn't test the server bit, but it looks sane by visual inspection. Thanks for knocking it out so quickly. John From jshubin at redhat.com Wed Apr 9 20:38:18 2014 From: jshubin at redhat.com (James Shubin) Date: Wed, 09 Apr 2014 16:38:18 -0400 Subject: [Rdo-list] Puppet glusterers unite In-Reply-To: <87ppkqux0f.fsf@redhat.com> References: <87fvlnebwv.fsf@redhat.com> <1396995279.4190.57.camel@freed> <87sipmv759.fsf@redhat.com> <1397060221.4190.104.camel@freed> <87ppkqux0f.fsf@redhat.com> Message-ID: <1397075898.4190.156.camel@freed> On Wed, 2014-04-09 at 15:24 -0400, John Eckersberg wrote: > > https://github.com/purpleidea/puppet-gluster/tree/feat/libgfapi > > > > To involve you more, I didn't even test it :) Please test and let me > > know. If it meets your needs, I'll merge it into master. > > Works for me! I didn't test the server bit, but it looks sane by > visual Can you confirm you tested the mount part and it pulled in the right package for you? I really didn't test this -though I'd bet a $beverage that it works. :) > inspection. Thanks for knocking it out so quickly. My pleasure. Sorry I had to NACK your patch, but this type of thing really needed tie-in to the versioning stuff. This way if you request gluster v3.3.1 you'll actually get the right libgfapi package to go with it. 
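For anyone following the thread who wants to check the compute-node end of this by hand while the module work settles, the moving parts are small: the glusterfs-api package (libgfapi), a qemu/libvirt build that knows about the gluster block driver, and the nova option John quoted above. A rough verification sketch — the openstack-config call, service name and gluster host/volume names are illustrative, and where the option lives in nova.conf depends on your nova release:

# the library that libvirt/qemu use for direct gluster access
rpm -q glusterfs-api || yum install -y glusterfs-api
ldconfig -p | grep libgfapi

# quick check whether the local qemu build mentions gluster support
qemu-img --help | grep -i gluster

# ask nova to emit gluster-backed libvirt XML (option name as quoted
# earlier in the thread; adjust section/prefix to your release)
openstack-config --set /etc/nova/nova.conf DEFAULT qemu_allowed_storage_backend gluster
service openstack-nova-compute restart

# optional smoke test against an existing volume (placeholder names)
qemu-img info gluster://gluster-host/testvol/test-image.qcow2
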
> > John From jeckersb at redhat.com Wed Apr 9 21:04:43 2014 From: jeckersb at redhat.com (John Eckersberg) Date: Wed, 09 Apr 2014 17:04:43 -0400 Subject: [Rdo-list] Puppet glusterers unite In-Reply-To: <1397075898.4190.156.camel@freed> References: <87fvlnebwv.fsf@redhat.com> <1396995279.4190.57.camel@freed> <87sipmv759.fsf@redhat.com> <1397060221.4190.104.camel@freed> <87ppkqux0f.fsf@redhat.com> <1397075898.4190.156.camel@freed> Message-ID: <87mwfuusd0.fsf@redhat.com> James Shubin writes: > On Wed, 2014-04-09 at 15:24 -0400, John Eckersberg wrote: >> > https://github.com/purpleidea/puppet-gluster/tree/feat/libgfapi >> > >> > To involve you more, I didn't even test it :) Please test and let me >> > know. If it meets your needs, I'll merge it into master. >> >> Works for me! I didn't test the server bit, but it looks sane by >> visual > Can you confirm you tested the mount part and it pulled in the right > package for you? > > I really didn't test this -though I'd bet a $beverage that it works. :) Good point, I only checked by declaring gluster::api. When I declare gluster::mount::base instead, I get: Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid tag "::gluster::api" at /usr/share/packstack/modules/gluster/manifests/mount/base.pp:55 on node compute.example.org Changing the second parameter of ensure_resource from '::gluster::api' to 'gluster::api' seems to make it work, for better or worse. From jshubin at redhat.com Wed Apr 9 21:07:21 2014 From: jshubin at redhat.com (James Shubin) Date: Wed, 09 Apr 2014 17:07:21 -0400 Subject: [Rdo-list] Puppet glusterers unite In-Reply-To: <87mwfuusd0.fsf@redhat.com> References: <87fvlnebwv.fsf@redhat.com> <1396995279.4190.57.camel@freed> <87sipmv759.fsf@redhat.com> <1397060221.4190.104.camel@freed> <87ppkqux0f.fsf@redhat.com> <1397075898.4190.156.camel@freed> <87mwfuusd0.fsf@redhat.com> Message-ID: <1397077641.4190.165.camel@freed> On Wed, 2014-04-09 at 17:04 -0400, John Eckersberg wrote: > Good point, I only checked by declaring gluster::api. When I declare > gluster::mount::base instead, I get: > > Error: Could not retrieve catalog from remote server: Error 400 on > SERVER: Invalid tag "::gluster::api" > at /usr/share/packstack/modules/gluster/manifests/mount/base.pp:55 on > node compute.example.org > > Changing the second parameter of ensure_resource from '::gluster::api' > to 'gluster::api' seems to make it work, for better or worse. Sounds like you're using Puppet < 3.x Can you confirm? I think the leading :: is recommended for 3.x, but I should test if it's required. From jeckersb at redhat.com Wed Apr 9 21:12:17 2014 From: jeckersb at redhat.com (John Eckersberg) Date: Wed, 09 Apr 2014 17:12:17 -0400 Subject: [Rdo-list] Puppet glusterers unite In-Reply-To: <1397077641.4190.165.camel@freed> References: <87fvlnebwv.fsf@redhat.com> <1396995279.4190.57.camel@freed> <87sipmv759.fsf@redhat.com> <1397060221.4190.104.camel@freed> <87ppkqux0f.fsf@redhat.com> <1397075898.4190.156.camel@freed> <87mwfuusd0.fsf@redhat.com> <1397077641.4190.165.camel@freed> Message-ID: <87k3ayus0e.fsf@redhat.com> James Shubin writes: > On Wed, 2014-04-09 at 17:04 -0400, John Eckersberg wrote: >> Good point, I only checked by declaring gluster::api. 
When I declare >> gluster::mount::base instead, I get: >> >> Error: Could not retrieve catalog from remote server: Error 400 on >> SERVER: Invalid tag "::gluster::api" >> at /usr/share/packstack/modules/gluster/manifests/mount/base.pp:55 on >> node compute.example.org >> >> Changing the second parameter of ensure_resource from '::gluster::api' >> to 'gluster::api' seems to make it work, for better or worse. > > Sounds like you're using Puppet < 3.x Can you confirm? > I think the leading :: is recommended for 3.x, but I should test if it's > required. This is with: puppet-3.2.4-3.el6_5.noarch puppet-server-3.2.4-3.el6_5.noarch From kchamart at redhat.com Thu Apr 10 04:40:41 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 10 Apr 2014 00:40:41 -0400 (EDT) Subject: [Rdo-list] openstack.redhat.com down? In-Reply-To: <534473D8.3090006@redhat.com> References: <20140408062710.GC1996@tesla> <5343CCB0.3030506@redhat.com> <1656873357.1179398.1396962745990.JavaMail.zimbra@redhat.com> <534473D8.3090006@redhat.com> Message-ID: <703600837.2278830.1397104841329.JavaMail.zimbra@redhat.com> > On 04/08/2014 07:12 AM, Kashyap Chamarthy wrote: > >> On 04/08/2014 07:27 AM, Kashyap Chamarthy wrote: > >>> Heya, > >>> > >>> Seems like openstack.redhat.com is down. I see 100% packet loss: > >>> > >>> $ ping -c4 openstack.redhat.com > >>> PING openstack.redhat.com (54.225.229.160) 56(84) bytes of data. > >>> > >>> --- openstack.redhat.com ping statistics --- > >>> 4 packets transmitted, 0 received, 100% packet loss, time 2999ms > >>> > >>> > >>> Anyone else see this too? > >>> > >> Back now. > > Hmm, seems like it's still not accessible to me: > > > > $ ping -c4 openstack.redhat.com > > PING openstack.redhat.com (54.225.229.160) 56(84) bytes of data. > > > > --- openstack.redhat.com ping statistics --- > > 4 packets transmitted, 0 received, 100% packet loss, time 2999ms > > > > $ date > > Tue Apr 8 18:37:55 IST 2014 > > > >> It was being updated with security fixes. > > I presume the recent OpenSSL security fixes. > > Are you still seeing it as down? It's fine for me. Yep - it's back now, thanks for checking. /kashyap From rbowen at redhat.com Thu Apr 10 14:19:13 2014 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 10 Apr 2014 08:19:13 -0600 Subject: [Rdo-list] Reminder: OpenStack User Survey Message-ID: <5346A861.9070406@redhat.com> A final reminder - the OpenStack User Survey ends TOMORROW. If you haven't yet filled it out, please do so now. https://www.openstack.org/user-survey/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From purpleidea at redhat.com Sat Apr 12 05:07:21 2014 From: purpleidea at redhat.com (James Shubin) Date: Sat, 12 Apr 2014 01:07:21 -0400 Subject: [Rdo-list] Puppet glusterers unite In-Reply-To: <87k3ayus0e.fsf@redhat.com> References: <87fvlnebwv.fsf@redhat.com> <1396995279.4190.57.camel@freed> <87sipmv759.fsf@redhat.com> <1397060221.4190.104.camel@freed> <87ppkqux0f.fsf@redhat.com> <1397075898.4190.156.camel@freed> <87mwfuusd0.fsf@redhat.com> <1397077641.4190.165.camel@freed> <87k3ayus0e.fsf@redhat.com> Message-ID: <1397279241.8110.31.camel@freed> On Wed, 2014-04-09 at 17:12 -0400, John Eckersberg wrote: > James Shubin writes: > > On Wed, 2014-04-09 at 17:04 -0400, John Eckersberg wrote: > >> Good point, I only checked by declaring gluster::api. 
When I declare > >> gluster::mount::base instead, I get: > >> > >> Error: Could not retrieve catalog from remote server: Error 400 on > >> SERVER: Invalid tag "::gluster::api" > >> at /usr/share/packstack/modules/gluster/manifests/mount/base.pp:55 on > >> node compute.example.org > >> > >> Changing the second parameter of ensure_resource from '::gluster::api' > >> to 'gluster::api' seems to make it work, for better or worse. > > > > Sounds like you're using Puppet < 3.x Can you confirm? > > I think the leading :: is recommended for 3.x, but I should test if it's > > required. > > This is with: > > puppet-3.2.4-3.el6_5.noarch > puppet-server-3.2.4-3.el6_5.noarch Just clearing out some old messages, as I might have forgotten to mention it earlier, but puppet-gluster git master is now patched to remove all leading colons. HTH, James -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part URL: From dev29aug at gmail.com Sun Apr 13 11:55:07 2014 From: dev29aug at gmail.com (Devendra Gupta) Date: Sun, 13 Apr 2014 17:25:07 +0530 Subject: [Rdo-list] Glance Image list not working after Keystone SSL setup Message-ID: Hi, I have configured keystone to use SSL and also updated the endpoints in the service catalog. Keystone operations like endpoint/tenant list are working fine. I also updated the glance-api.conf and glance-registry.conf files with the SSL-enabled keystone details, but glance is still unable to list images. It fails with the following: [root at openstack-centos65 glance(keystone_admin)]# glance --insecure image-list Request returned failure status. Invalid OpenStack Identity credentials. Please see attached keystone.conf, glance-api.conf and glance-registry.conf and debug output of glance image-list and endpoint list. 
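(For context, the pieces that have to agree between an SSL-enabled Keystone and the glance services are roughly the [keystone_authtoken] options sketched below. Option names are from the auth_token middleware of this era; auth_host reuses the host name from this report, while the ports, CA path and the choice between cafile and insecure are illustrative — the real values are in the attached files.)

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol https
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host openstack-centos65
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri https://openstack-centos65:5000/v2.0
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken cafile /path/to/keystone-ca.pem
# or, only while testing with a self-signed certificate:
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken insecure true

# repeat the same settings in /etc/glance/glance-registry.conf, then:
service openstack-glance-api restart
service openstack-glance-registry restart
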
Regards, Devendra -------------- next part -------------- +----------------------------------+-----------+---------------------------------------------------------+---------------------------------------------------------+--------------------------------------------------------+----------------------------------+ | id | region | publicurl | internalurl | adminurl | service_id | +----------------------------------+-----------+---------------------------------------------------------+---------------------------------------------------------+--------------------------------------------------------+----------------------------------+ | 2ba1fc5b5fa040cba1fa99f3a0f16b31 | RegionOne | http://openstack-centos65:8773/services/Cloud | http://openstack-centos65:8773/services/Cloud | http://openstack-centos65:8773/services/Admin | 07acc02f8da44aabb6d74f8bfeb73110 | | 34c308699eed49498dbb572624a89d78 | RegionOne | http://openstack-centos65:8776/v1/%(tenant_id)s | http://openstack-centos65:8776/v1/%(tenant_id)s | http://openstack-centos65:8776/v1/%(tenant_id)s | 58ddbeb7bfa241b8abf5d4b00fa60796 | | 35383c51b83146cd9d5920e3b598812c | RegionOne | http://openstack-centos65:8774/v2/%(tenant_id)s | http://openstack-centos65:8774/v2/%(tenant_id)s | http://openstack-centos65:8774/v2/%(tenant_id)s | 5e3c505c2b684b80afe1d1f62963f48b | | 4364a747f16549bb90f0820288ca62ea | RegionOne | http://openstack-centos65:8776/v2/%(tenant_id)s | http://openstack-centos65:8776/v2/%(tenant_id)s | http://openstack-centos65:8776/v2/%(tenant_id)s | ca4e340f3ac84871b47a7bf32f88ec47 | | 7beb09e4b38a4a0cb115e2b28cff20d7 | RegionOne | http://openstack-centos65:9292 | http://openstack-centos65:9292 | http://openstack-centos65:9292 | 8046b0a30eb5478b82d9f34560ab2848 | | 8b3680803d034ccc9bd8994c214e5652 | RegionOne | http://openstack-centos65:8777 | http://openstack-centos65:8777 | http://openstack-centos65:8777 | 2861404c9ff4467cadf617f3fa281256 | | b0013f4bf78b4c31a078c48edc847025 | RegionOne | http://openstack-centos65:8080/v1/AUTH_%(tenant_id)s | http://openstack-centos65:8080/v1/AUTH_%(tenant_id)s | http://openstack-centos65:8080/ | 4f9d0f3af6e64e1d9f7d6e18cc9d843c | | c17ff619f0dd49eda15704dd137dce57 | RegionOne | http://openstack-centos65:9696/ | http://openstack-centos65:9696/ | http://openstack-centos65:9696/ | 0a6a913e64364ea0888380d4011dace7 | | d15b58c95b344d01bbaa4537618571f2 | RegionOne | https://openstack-centos65:$(public_port)s/v2.0 | https://openstack-centos65:$(public_port)s/v2.0 | https://openstack-centos65:$(admin_port)s/v2.0 | 9ab7d84f23094cb58a1614f2c99b38f2 | | de343660051145b8996459691eabe64e | RegionOne | http://openstack-centos65:8080 | http://openstack-centos65:8080 | http://openstack-centos65:8080 | c71f1a7cb5264938be8e2631622f7168 | +----------------------------------+-----------+---------------------------------------------------------+---------------------------------------------------------+--------------------------------------------------------+----------------------------------+ -------------- next part -------------- [root at openstack-centos65 glance(keystone_admin)]# glance --insecure --debug image-list curl -i -X GET -H 'X-Auth-Token: 
MIIPwQYJKoZIhvcNAQcCoIIPsjCCD64CAQExCTAHBgUrDgMCGjCCDo0GCSqGSIb3DQEHAaCCDn4Egg56eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wNC0xM1QxODo1MTozNi41NjMyNTkiLCAiZXhwaXJlcyI6ICIyMDE0LTA0LTE0VDE4OjUxOjM2WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogImFkbWluIHRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogImIzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgIm5hbWUiOiAiYWRtaW4ifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzQvdjIvYjMyNzMxM2ViZTVkNGJhMjlhMjZiZTQ2M2UxM2E0ZWMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc0L3YyL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgImlkIjogIjY3YjJhZWIxY2FiMjQ5Yjg5YzM2ZThkNjhjN2JhYTY3IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc0L3YyL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImNvbXB1dGUiLCAibmFtZSI6ICJub3ZhIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTY5Ni8iLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo5Njk2LyIsICJpZCI6ICIwNWE2MWM2NDM2NGM0MmNmODhhYjJkYjJkYmIxY2RlNSIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTY5Ni8ifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAibmV0d29yayIsICJuYW1lIjogIm5ldXRyb24ifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc2L3YyL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6ODc3Ni92Mi9iMzI3MzEzZWJlNWQ0YmEyOWEyNmJlNDYzZTEzYTRlYyIsICJpZCI6ICI3Y2FkYzE1ZjlhMGQ0YjYyYWQyMDRiNmIwNTE3ZjMyMCIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6ODc3Ni92Mi9iMzI3MzEzZWJlNWQ0YmEyOWEyNmJlNDYzZTEzYTRlYyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJ2b2x1bWV2MiIsICJuYW1lIjogImNpbmRlcl92MiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjgwODAiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4MDgwIiwgImlkIjogIjExOGJiYWQ3MTlhYjQyYWE4ZTYwZDg0NTMyYWFjZTA2IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4MDgwIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInMzIiwgIm5hbWUiOiAic3dpZnRfczMifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo5MjkyIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTI5MiIsICJpZCI6ICI0MTUzZDZlOGQ0NDE0MzM5YjU2Njk3MGRkN2U2YzI0YyIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTI5MiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpbWFnZSIsICJuYW1lIjogImdsYW5jZSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzciLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc3IiwgImlkIjogIjRlNGJjZGJmNzZiOTQ1ZWY4MGRmMjIzMmZjZGFmNzFlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc3In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogIm1ldGVyaW5nIiwgIm5hbWUiOiAiY2VpbG9tZXRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzYvdjEvYjMyNzMxM2ViZTVkNGJhMjlhMjZiZTQ2M2UxM2E0ZWMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc2L3YxL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgImlkIjogIjRhZGQzMWIzM2M2MzRhMWU4ZDIwYTA3MzhjNmQ1ZjExIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc2L3YxL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZSIsICJuYW1lIjogImNpbmRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzMvc2VydmljZXMvQWRtaW4iLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTA
uMTI5LjU2LjIzMDo4NzczL3NlcnZpY2VzL0Nsb3VkIiwgImlkIjogIjc3YWE0NWM5MWY3MDQzYmNhOWFiYWE0NmM4MzYzODJjIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4NzczL3NlcnZpY2VzL0Nsb3VkIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImVjMiIsICJuYW1lIjogIm5vdmFfZWMyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6ODA4MC8iLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4MDgwL3YxL0FVVEhfYjMyNzMxM2ViZTVkNGJhMjlhMjZiZTQ2M2UxM2E0ZWMiLCAiaWQiOiAiMjc5NWNjMTQzMTFjNGZmZThhZjE3OTI1OTY1Nzk1ZDYiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjgwODAvdjEvQVVUSF9iMzI3MzEzZWJlNWQ0YmEyOWEyNmJlNDYzZTEzYTRlYyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJvYmplY3Qtc3RvcmUiLCAibmFtZSI6ICJzd2lmdCJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHBzOi8vb3BlbnN0YWNrLWNlbnRvczY1LmJtYy5jb206MzUzNTcvdjIuMCIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHBzOi8vb3BlbnN0YWNrLWNlbnRvczY1LmJtYy5jb206NTAwMC92Mi4wIiwgImlkIjogIjU0YmMzYjRjNzM2ODQ3NDk4M2IxZTcyYTc0ZDIwYWIzIiwgInB1YmxpY1VSTCI6ICJodHRwczovL29wZW5zdGFjay1jZW50b3M2NS5ibWMuY29tOjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAiYWRtaW4iLCAicm9sZXNfbGlua3MiOiBbXSwgImlkIjogIjVlZGYyNWY1YTI5YTQ3NGViZTE2ODBiMmFiOGY4MTk1IiwgInJvbGVzIjogW3sibmFtZSI6ICJhZG1pbiJ9XSwgIm5hbWUiOiAiYWRtaW4ifSwgIm1ldGFkYXRhIjogeyJpc19hZG1pbiI6IDAsICJyb2xlcyI6IFsiYzM4ZWE0NDdlMmM4NDFiMmEzZDk4ZjI5NDU5YzQ5NzYiXX19fTGCAQswggEHAgEBMGcwYjELMAkGA1UEBhMCVVMxDjAMBgNVBAgMBVVuc2V0MQ4wDAYDVQQHDAVVbnNldDEOMAwGA1UECgwFVW5zZXQxIzAhBgNVBAMMGm9wZW5zdGFjay1jZW50b3M2NS5ibWMuY29tAgEBMAcGBSsOAwIaMA0GCSqGSIb3DQEBAQUABIGAf62TPWxtBj3BDkliPtWmz-r+Z7TPDKSbT9GUAFO27xwa68WxtgbQbVC8xpNcvOg8gNQhWvgV20L-oDDEUHhxcHCP-qqO8LdD+5YbzOwn8rlS0CAaUFoElA-ZDW1EVMpaXWII7YFFm+6VlSMKmVh0rEr7RT70EVHUeoAD+aVwtrA=' -H 'Content-Type: application/json' -H 'User-Agent: python-glanceclient' http://openstack-centos65:9292/v1/images/detail?sort_key=name&sort_dir=asc&limit=20 HTTP/1.1 401 Unauthorized date: Sun, 13 Apr 2014 18:51:40 GMT content-length: 253 content-type: text/plain; charset=UTF-8 401 Unauthorized This server could not verify that you are authorized to access the document you requested. Either you supplied the wrong credentials (e.g., bad password), or your browser does not understand how to supply the credentials required. Request returned failure status. Invalid OpenStack Identity credentials. [root at openstack-centos65 glance(keystone_admin)]# glance --insecure image-list Request returned failure status. Invalid OpenStack Identity credentials. -------------- next part -------------- [DEFAULT] # Show more verbose log output (sets INFO log level output) #verbose=True verbose=True # Show debugging output in logs (sets DEBUG log level output) #debug=False debug=False # Which backend scheme should Glance use by default is not specified # in a request to add a new image to Glance? Known schemes are determined # by the known_stores option below. # Default: 'file' default_store = file # List of which store classes and store class locations are # currently known to glance at startup. #known_stores = glance.store.filesystem.Store, # glance.store.http.Store, # glance.store.rbd.Store, # glance.store.s3.Store, # glance.store.swift.Store, # glance.store.sheepdog.Store, # glance.store.cinder.Store, # Maximum image size (in bytes) that may be uploaded through the # Glance API server. Defaults to 1 TB. 
# WARNING: this value should only be increased after careful consideration # and must be set to a value under 8 EB (9223372036854775808). #image_size_cap = 1099511627776 # Address to bind the API server bind_host = 0.0.0.0 # Port the bind the API server to bind_port = 9292 # Log to this file. Make sure you do not set the same log # file for both the API and registry servers! #log_file=/var/log/glance/api.log log_file=/var/log/glance/api.log # Backlog requests when creating socket backlog = 4096 # TCP_KEEPIDLE value in seconds when creating socket. # Not supported on OS X. #tcp_keepidle = 600 # API to use for accessing data. Default value points to sqlalchemy # package, it is also possible to use: glance.db.registry.api # data_api = glance.db.sqlalchemy.api # SQLAlchemy connection string for the reference implementation # registry server. Any valid SQLAlchemy connection string is fine. # See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine #sql_connection=mysql://glance:glance at localhost/glance sql_connection=mysql://glance:5266553a114e4208 at openstack-centos65/glance # Period in seconds after which SQLAlchemy should reestablish its connection # to the database. # # MySQL uses a default `wait_timeout` of 8 hours, after which it will drop # idle connections. This can result in 'MySQL Gone Away' exceptions. If you # notice this, you can lower this value to ensure that SQLAlchemy reconnects # before MySQL can drop the connection. sql_idle_timeout = 3600 # Number of Glance API worker processes to start. # On machines with more than one CPU increasing this value # may improve performance (especially if using SSL with # compression turned on). It is typically recommended to set # this value to the number of CPUs present on your machine. workers = 4 # Role used to identify an authenticated user as administrator #admin_role = admin # Allow unauthenticated users to access the API with read-only # privileges. This only applies when using ContextMiddleware. #allow_anonymous_access = False # Allow access to version 1 of glance api #enable_v1_api = True # Allow access to version 2 of glance api #enable_v2_api = True # Return the URL that references where the data is stored on # the backend storage system. For example, if using the # file system store a URL of 'file:///path/to/image' will # be returned to the user in the 'direct_url' meta-data field. # The default value is false. #show_image_direct_url = False # Send headers containing user and tenant information when making requests to # the v1 glance registry. This allows the registry to function as if a user is # authenticated without the need to authenticate a user itself using the # auth_token middleware. # The default value is false. #send_identity_headers = False # Supported values for the 'container_format' image attribute #container_formats=ami,ari,aki,bare,ovf # Supported values for the 'disk_format' image attribute #disk_formats=ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso # Directory to use for lock files. Default to a temp directory # (string value). This setting needs to be the same for both # glance-scrubber and glance-api. #lock_path= # # Property Protections config file # This file contains the rules for property protections and the roles # associated with it. # If this config value is not specified, by default, property protections # won't be enforced. # If a value is specified and the file is not found, then an # HTTPInternalServerError will be thrown. 
#property_protection_file = # Set a system wide quota for every user. This value is the total number # of bytes that a user can use across all storage systems. A value of # 0 means unlimited. #user_storage_quota = 0 # ================= Syslog Options ============================ # Send logs to syslog (/dev/log) instead of to file specified # by `log_file` #use_syslog = False use_syslog = False # Facility to use. If unset defaults to LOG_USER. #syslog_log_facility = LOG_LOCAL0 # ================= SSL Options =============================== # Certificate file to use when starting API server securely #cert_file = /path/to/certfile # Private key file to use when starting API server securely #key_file = /path/to/keyfile # CA certificate file to use to verify connecting clients #ca_file = /path/to/cafile # ================= Security Options ========================== # AES key for encrypting store 'location' metadata, including # -- if used -- Swift or S3 credentials # Should be set to a random string of length 16, 24 or 32 bytes #metadata_encryption_key = <16, 24 or 32 char registry metadata key> # ============ Registry Options =============================== # Address to find the registry server registry_host = 0.0.0.0 # Port the registry server is listening on registry_port = 9191 # What protocol to use when connecting to the registry server? # Set to https for secure HTTP communication registry_client_protocol = http # The path to the key file to use in SSL connections to the # registry server, if any. Alternately, you may set the # GLANCE_CLIENT_KEY_FILE environ variable to a filepath of the key file #registry_client_key_file = /path/to/key/file # The path to the cert file to use in SSL connections to the # registry server, if any. Alternately, you may set the # GLANCE_CLIENT_CERT_FILE environ variable to a filepath of the cert file #registry_client_cert_file = /path/to/cert/file # The path to the certifying authority cert file to use in SSL connections # to the registry server, if any. Alternately, you may set the # GLANCE_CLIENT_CA_FILE environ variable to a filepath of the CA cert file #registry_client_ca_file = /path/to/ca/file # When using SSL in connections to the registry server, do not require # validation via a certifying authority. This is the registry's equivalent of # specifying --insecure on the command line using glanceclient for the API # Default: False #registry_client_insecure = False # The period of time, in seconds, that the API server will wait for a registry # request to complete. A value of '0' implies no timeout. # Default: 600 #registry_client_timeout = 600 # Whether to automatically create the database tables. # Default: False #db_auto_create = False # Enable DEBUG log messages from sqlalchemy which prints every database # query and response. # Default: False #sqlalchemy_debug = True # ============ Notification System Options ===================== # Notifications can be sent when images are create, updated or deleted. 
# There are three methods of sending notifications, logging (via the # log_file directive), rabbit (via a rabbitmq queue), qpid (via a Qpid # message queue), or noop (no notifications sent, the default) #notifier_strategy=qpid notifier_strategy=qpid # Configuration options if sending notifications via rabbitmq (these are # the defaults) rabbit_host = localhost rabbit_port = 5672 rabbit_use_ssl = false rabbit_userid = guest rabbit_password = guest rabbit_virtual_host = / rabbit_notification_exchange = glance rabbit_notification_topic = notifications rabbit_durable_queues = False # Configuration options if sending notifications via Qpid (these are # the defaults) qpid_notification_exchange = glance qpid_notification_topic = notifications qpid_hostname = openstack-centos65 qpid_port = 5672 qpid_username =guest qpid_password =guest qpid_sasl_mechanisms = qpid_reconnect_timeout = 0 qpid_reconnect_limit = 0 qpid_reconnect_interval_min = 0 qpid_reconnect_interval_max = 0 qpid_reconnect_interval = 0 #qpid_heartbeat=60 # Set to 'ssl' to enable SSL qpid_protocol = tcp qpid_tcp_nodelay = True # ============ Filesystem Store Options ======================== # Directory that the Filesystem backend store # writes image data to #filesystem_store_datadir=/var/lib/glance/images/ filesystem_store_datadir=/var/lib/glance/images/ # A path to a JSON file that contains metadata describing the storage # system. When show_multiple_locations is True the information in this # file will be returned with any location that is contained in this # store. #filesystem_store_metadata_file = None # ============ Swift Store Options ============================= # Version of the authentication service to use # Valid versions are '2' for keystone and '1' for swauth and rackspace swift_store_auth_version = 2 # Address where the Swift authentication service lives # Valid schemes are 'http://' and 'https://' # If no scheme specified, default to 'https://' # For swauth, use something like '127.0.0.1:8080/v1.0/' swift_store_auth_address = 127.0.0.1:5000/v2.0/ # User to authenticate against the Swift authentication service # If you use Swift authentication service, set it to 'account':'user' # where 'account' is a Swift storage account and 'user' # is a user in that account swift_store_user = jdoe:jdoe # Auth key for the user authenticating against the # Swift authentication service swift_store_key = a86850deb2742ec3cb41518e26aa2d89 # Container within the account that the account should use # for storing images in Swift swift_store_container = glance # Do we create the container if it does not exist? swift_store_create_container_on_put = False # What size, in MB, should Glance start chunking image files # and do a large object manifest in Swift? By default, this is # the maximum object size in Swift, which is 5GB swift_store_large_object_size = 5120 # When doing a large object manifest, what size, in MB, should # Glance write chunks to Swift? This amount of data is written # to a temporary disk buffer during the process of chunking # the image file, and the default is 200MB swift_store_large_object_chunk_size = 200 # Whether to use ServiceNET to communicate with the Swift storage servers. # (If you aren't RACKSPACE, leave this False!) # # To use ServiceNET for authentication, prefix hostname of # `swift_store_auth_address` with 'snet-'. # Ex. 
https://example.com/v1.0/ -> https://snet-example.com/v1.0/ swift_enable_snet = False # If set to True enables multi-tenant storage mode which causes Glance images # to be stored in tenant specific Swift accounts. #swift_store_multi_tenant = False # A list of swift ACL strings that will be applied as both read and # write ACLs to the containers created by Glance in multi-tenant # mode. This grants the specified tenants/users read and write access # to all newly created image objects. The standard swift ACL string # formats are allowed, including: # : # : # *: # Multiple ACLs can be combined using a comma separated list, for # example: swift_store_admin_tenants = service:glance,*:admin #swift_store_admin_tenants = # The region of the swift endpoint to be used for single tenant. This setting # is only necessary if the tenant has multiple swift endpoints. #swift_store_region = # If set to False, disables SSL layer compression of https swift requests. # Setting to 'False' may improve performance for images which are already # in a compressed format, eg qcow2. If set to True, enables SSL layer # compression (provided it is supported by the target swift proxy). #swift_store_ssl_compression = True # ============ S3 Store Options ============================= # Address where the S3 authentication service lives # Valid schemes are 'http://' and 'https://' # If no scheme specified, default to 'http://' s3_store_host = 127.0.0.1:8080/v1.0/ # User to authenticate against the S3 authentication service s3_store_access_key = <20-char AWS access key> # Auth key for the user authenticating against the # S3 authentication service s3_store_secret_key = <40-char AWS secret key> # Container within the account that the account should use # for storing images in S3. Note that S3 has a flat namespace, # so you need a unique bucket name for your glance images. An # easy way to do this is append your AWS access key to "glance". # S3 buckets in AWS *must* be lowercased, so remember to lowercase # your AWS access key if you use it in your bucket name below! s3_store_bucket = glance # Do we create the bucket if it does not exist? s3_store_create_bucket_on_put = False # When sending images to S3, the data will first be written to a # temporary buffer on disk. By default the platform's temporary directory # will be used. If required, an alternative directory can be specified here. #s3_store_object_buffer_dir = /path/to/dir # When forming a bucket url, boto will either set the bucket name as the # subdomain or as the first token of the path. Amazon's S3 service will # accept it as the subdomain, but Swift's S3 middleware requires it be # in the path. Set this to 'path' or 'subdomain' - defaults to 'subdomain'. #s3_store_bucket_url_format = subdomain # ============ RBD Store Options ============================= # Ceph configuration file path # If using cephx authentication, this file should # include a reference to the right keyring # in a client. section rbd_store_ceph_conf = /etc/ceph/ceph.conf # RADOS user to authenticate as (only applicable if using cephx) rbd_store_user = glance # RADOS pool in which images are stored rbd_store_pool = images # Images will be chunked into objects of this size (in megabytes). # For best performance, this should be a power of two rbd_store_chunk_size = 8 # ============ Sheepdog Store Options ============================= sheepdog_store_address = localhost sheepdog_store_port = 7000 # Images will be chunked into objects of this size (in megabytes). 
# For best performance, this should be a power of two sheepdog_store_chunk_size = 64 # ============ Cinder Store Options =============================== # Info to match when looking for cinder in the service catalog # Format is : separated values of the form: # :: (string value) #cinder_catalog_info = volume:cinder:publicURL # Override service catalog lookup with template for cinder endpoint # e.g. http://localhost:8776/v1/%(project_id)s (string value) #cinder_endpoint_template = # Region name of this node (string value) #os_region_name = # Location of ca certicates file to use for cinder client requests # (string value) #cinder_ca_certificates_file = # Number of cinderclient retries on failed http calls (integer value) #cinder_http_retries = 3 # Allow to perform insecure SSL requests to cinder (boolean value) #cinder_api_insecure = False # ============ Delayed Delete Options ============================= # Turn on/off delayed delete delayed_delete = False # Delayed delete time in seconds scrub_time = 43200 # Directory that the scrubber will use to remind itself of what to delete # Make sure this is also set in glance-scrubber.conf #scrubber_datadir=/var/lib/glance/scrubber # =============== Image Cache Options ============================= # Base directory that the Image Cache uses #image_cache_dir=/var/lib/glance/image-cache/ [keystone_authtoken] #auth_host=127.0.0.1 #auth_host=openstack-centos65 auth_host=openstack-centos65 #auth_port=35357 auth_port=35357 #auth_protocol=http auth_protocol=https #admin_tenant_name=%SERVICE_TENANT_NAME% admin_tenant_name=services #admin_user=%SERVICE_USER% admin_user=glance #admin_password=%SERVICE_PASSWORD% admin_password=9910fdcb05c4439a #auth_uri=http://openstack-centos65:5000/ auth_uri=https://openstack-centos65:5000/ [paste_deploy] # Name of the paste configuration file that defines the available pipelines #config_file=/usr/share/glance/glance-api-dist-paste.ini # Partial name of a pipeline in your paste configuration file with the # service name removed. For example, if your paste section name is # [pipeline:glance-api-keystone], you would configure the flavor below # as 'keystone'. #flavor= flavor=keystone -------------- next part -------------- [DEFAULT] # Show more verbose log output (sets INFO log level output) #verbose=True verbose=True # Show debugging output in logs (sets DEBUG log level output) #debug=False debug=False # Address to bind the registry server bind_host = 0.0.0.0 # Port the bind the registry server to bind_port = 9191 # Log to this file. Make sure you do not set the same log # file for both the API and registry servers! #log_file=/var/log/glance/registry.log # Backlog requests when creating socket backlog = 4096 # TCP_KEEPIDLE value in seconds when creating socket. # Not supported on OS X. #tcp_keepidle = 600 # API to use for accessing data. Default value points to sqlalchemy # package. # data_api = glance.db.sqlalchemy.api # SQLAlchemy connection string for the reference implementation # registry server. Any valid SQLAlchemy connection string is fine. # See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine #sql_connection=mysql://glance:glance at localhost/glance sql_connection=mysql://glance:5266553a114e4208 at openstack-centos65/glance # Period in seconds after which SQLAlchemy should reestablish its connection # to the database. # # MySQL uses a default `wait_timeout` of 8 hours, after which it will drop # idle connections. This can result in 'MySQL Gone Away' exceptions. 
If you # notice this, you can lower this value to ensure that SQLAlchemy reconnects # before MySQL can drop the connection. sql_idle_timeout = 3600 # Limit the api to return `param_limit_max` items in a call to a container. If # a larger `limit` query param is provided, it will be reduced to this value. api_limit_max = 1000 # If a `limit` query param is not provided in an api request, it will # default to `limit_param_default` limit_param_default = 25 # Role used to identify an authenticated user as administrator #admin_role = admin # Whether to automatically create the database tables. # Default: False #db_auto_create = False # Enable DEBUG log messages from sqlalchemy which prints every database # query and response. # Default: False #sqlalchemy_debug = True # ================= Syslog Options ============================ # Send logs to syslog (/dev/log) instead of to file specified # by `log_file` #use_syslog = False use_syslog = False # Facility to use. If unset defaults to LOG_USER. #syslog_log_facility = LOG_LOCAL1 # ================= SSL Options =============================== # Certificate file to use when starting registry server securely #cert_file = /path/to/certfile # Private key file to use when starting registry server securely #key_file = /path/to/keyfile # CA certificate file to use to verify connecting clients #ca_file = /path/to/cafile [keystone_authtoken] #auth_host=127.0.0.1 auth_host=openstack-centos65 #auth_port=35357 auth_port=35357 #auth_protocol=http auth_protocol=https #admin_tenant_name=%SERVICE_TENANT_NAME% admin_tenant_name=services #admin_user=%SERVICE_USER% admin_user=glance #admin_password=%SERVICE_PASSWORD% admin_password=9910fdcb05c4439a auth_uri=https://openstack-centos65:5000/ [paste_deploy] # Name of the paste configuration file that defines the available pipelines #config_file=/usr/share/glance/glance-registry-dist-paste.ini # Partial name of a pipeline in your paste configuration file with the # service name removed. For example, if your paste section name is # [pipeline:glance-registry-keystone], you would configure the flavor below # as 'keystone'. #flavor= flavor=keystone -------------- next part -------------- f[DEFAULT] # A "shared secret" between keystone and other openstack services # admin_token = ADMIN admin_token = 768b64d7641f49b3b3f98fb0d60dc1bc # The IP address of the network interface to listen on # bind_host = 0.0.0.0 bind_host = 0.0.0.0 # The port number which the public service listens on # public_port = 5000 public_port = 5000 # The port number which the public admin listens on # admin_port = 35357 admin_port = 35357 # The base endpoint URLs for keystone that are advertised to clients # (NOTE: this does NOT affect how keystone listens for connections) # public_endpoint = http://localhost:%(public_port)s/ # admin_endpoint = http://localhost:%(admin_port)s/ # The port number which the OpenStack Compute service listens on # compute_port = 8774 compute_port = 8774 # Path to your policy definition containing identity actions # policy_file = policy.json # Rule to check if no matching policy definition is found # FIXME(dolph): This should really be defined as [policy] default_rule # policy_default_rule = admin_required # Role for migrating membership relationships # During a SQL upgrade, the following values will be used to create a new role # that will replace records in the user_tenant_membership table with explicit # role grants. 
After migration, the member_role_id will be used in the API # add_user_to_project, and member_role_name will be ignored. # member_role_id = 9fe2ff9ee4384b1894a90878d3e92bab # member_role_name = _member_ # enforced by optional sizelimit middleware (keystone.middleware:RequestBodySizeLimiter) # max_request_body_size = 114688 # limit the sizes of user & tenant ID/names # max_param_size = 64 # similar to max_param_size, but provides an exception for token values # max_token_size = 8192 # === Logging Options === # Print debugging output # (includes plaintext request logging, potentially including passwords) # debug = False debug = False # Print more verbose output # verbose = False verbose = True # Name of log file to output to. If not set, logging will go to stdout. # log_file = /var/log/keystone/keystone.log # The directory to keep log files in (will be prepended to --logfile) # log_dir = /var/log/keystone log_dir = /var/log/keystone # Use syslog for logging. # use_syslog = False use_syslog = False # syslog facility to receive log lines # syslog_log_facility = LOG_USER # If this option is specified, the logging configuration file specified is # used and overrides any other logging options specified. Please see the # Python logging module documentation for details on logging configuration # files. # log_config = logging.conf # A logging.Formatter log message format string which may use any of the # available logging.LogRecord attributes. # log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s # Format string for %(asctime)s in log records. # log_date_format = %Y-%m-%d %H:%M:%S # onready allows you to send a notification when the process is ready to serve # For example, to have it notify using systemd, one could set shell command: # onready = systemd-notify --ready # or a module with notify() method: # onready = keystone.common.systemd # === Notification Options === # Notifications can be sent when users or projects are created, updated or # deleted. There are three methods of sending notifications: logging (via the # log_file directive), rpc (via a message queue) and no_op (no notifications # sent, the default) # notification_driver can be defined multiple times # Do nothing driver (the default) # notification_driver = keystone.openstack.common.notifier.no_op_notifier # Logging driver example (not enabled by default) # notification_driver = keystone.openstack.common.notifier.log_notifier # RPC driver example (not enabled by default) # notification_driver = keystone.openstack.common.notifier.rpc_notifier # Default notification level for outgoing notifications # default_notification_level = INFO # Default publisher_id for outgoing notifications; included in the payload. # default_publisher_id = # AMQP topics to publish to when using the RPC notification driver. # Multiple values can be specified by separating with commas. # The actual topic names will be %s.%(default_notification_level)s # notification_topics = notifications # === RPC Options === # For Keystone, these options apply only when the RPC notification driver is # used. # The messaging module to use, defaults to kombu. # rpc_backend = keystone.openstack.common.rpc.impl_kombu # Size of RPC thread pool # rpc_thread_pool_size = 64 # Size of RPC connection pool # rpc_conn_pool_size = 30 # Seconds to wait for a response from call or multicall # rpc_response_timeout = 60 # Seconds to wait before a cast expires (TTL). Only supported by impl_zmq. 
# rpc_cast_timeout = 30 # Modules of exceptions that are permitted to be recreated upon receiving # exception data from an rpc call. # allowed_rpc_exception_modules = keystone.openstack.common.exception,nova.exception,cinder.exception,exceptions # If True, use a fake RabbitMQ provider # fake_rabbit = False # AMQP exchange to connect to if using RabbitMQ or Qpid # control_exchange = openstack [sql] # The SQLAlchemy connection string used to connect to the database # connection = mysql://keystone:keystone at localhost/keystone connection = mysql://keystone_admin:1817719f79c54395 at openstack-centos65/keystone # the timeout before idle sql connections are reaped # idle_timeout = 200 idle_timeout = 200 [identity] # driver = keystone.identity.backends.sql.Identity # This references the domain to use for all Identity API v2 requests (which are # not aware of domains). A domain with this ID will be created for you by # keystone-manage db_sync in migration 008. The domain referenced by this ID # cannot be deleted on the v3 API, to prevent accidentally breaking the v2 API. # There is nothing special about this domain, other than the fact that it must # exist to order to maintain support for your v2 clients. # default_domain_id = default # # A subset (or all) of domains can have their own identity driver, each with # their own partial configuration file in a domain configuration directory. # Only values specific to the domain need to be placed in the domain specific # configuration file. This feature is disabled by default; set # domain_specific_drivers_enabled to True to enable. # domain_specific_drivers_enabled = False # domain_config_dir = /etc/keystone/domains # Maximum supported length for user passwords; decrease to improve performance. # max_password_length = 4096 [credential] # driver = keystone.credential.backends.sql.Credential [trust] # driver = keystone.trust.backends.sql.Trust # delegation and impersonation features can be optionally disabled # enabled = True [os_inherit] # role-assignment inheritance to projects from owning domain can be # optionally enabled # enabled = False [catalog] # dynamic, sql-based backend (supports API/CLI-based management commands) # driver = keystone.catalog.backends.sql.Catalog driver = keystone.catalog.backends.sql.Catalog # static, file-based backend (does *NOT* support any management commands) # driver = keystone.catalog.backends.templated.TemplatedCatalog # template_file = /etc/keystone/default_catalog.templates [endpoint_filter] # extension for creating associations between project and endpoints in order to # provide a tailored catalog for project-scoped token requests. # driver = keystone.contrib.endpoint_filter.backends.sql.EndpointFilter # return_all_endpoints_if_no_filter = True [token] # Provides token persistence. # driver = keystone.token.backends.sql.Token driver = keystone.token.backends.sql.Token # Controls the token construction, validation, and revocation operations. # Core providers are keystone.token.providers.[pki|uuid].Provider # provider = provider =keystone.token.providers.pki.Provider # Amount of time a token should remain valid (in seconds) # expiration = 86400 expiration = 86400 # External auth mechanisms that should add bind information to token. # eg kerberos, x509 # bind = # Enforcement policy on tokens presented to keystone with bind information. # One of disabled, permissive, strict, required or a specifically required bind # mode e.g. kerberos or x509 to require binding to that authentication. 
# enforce_token_bind = permissive # Token specific caching toggle. This has no effect unless the global caching # option is set to True # caching = True # Token specific cache time-to-live (TTL) in seconds. # cache_time = # Revocation-List specific cache time-to-live (TTL) in seconds. # revocation_cache_time = 3600 [cache] # Global cache functionality toggle. # enabled = False # Prefix for building the configuration dictionary for the cache region. This # should not need to be changed unless there is another dogpile.cache region # with the same configuration name # config_prefix = cache.keystone # Default TTL, in seconds, for any cached item in the dogpile.cache region. # This applies to any cached method that doesn't have an explicit cache # expiration time defined for it. # expiration_time = 600 # Dogpile.cache backend module. It is recommended that Memcache # (dogpile.cache.memcache) or Redis (dogpile.cache.redis) be used in production # deployments. Small workloads (single process) like devstack can use the # dogpile.cache.memory backend. # backend = keystone.common.cache.noop # Arguments supplied to the backend module. Specify this option once per # argument to be passed to the dogpile.cache backend. # Example format: : # backend_argument = # Proxy Classes to import that will affect the way the dogpile.cache backend # functions. See the dogpile.cache documentation on changing-backend-behavior. # Comma delimited list e.g. my.dogpile.proxy.Class, my.dogpile.proxyClass2 # proxies = # Use a key-mangling function (sha1) to ensure fixed length cache-keys. This # is toggle-able for debugging purposes, it is highly recommended to always # leave this set to True. # use_key_mangler = True # Extra debugging from the cache backend (cache keys, get/set/delete/etc calls) # This is only really useful if you need to see the specific cache-backend # get/set/delete calls with the keys/values. Typically this should be left # set to False. # debug_cache_backend = False [policy] # driver = keystone.policy.backends.sql.Policy [ec2] # driver = keystone.contrib.ec2.backends.sql.Ec2 [assignment] # driver = # Assignment specific caching toggle. This has no effect unless the global # caching option is set to True # caching = True # Assignment specific cache time-to-live (TTL) in seconds. # cache_time = [oauth1] # driver = keystone.contrib.oauth1.backends.sql.OAuth1 # The Identity service may include expire attributes. # If no such attribute is included, then the token lasts indefinitely. 
# Specify how quickly the request token will expire (in seconds) # request_token_duration = 28800 # Specify how quickly the access token will expire (in seconds) # access_token_duration = 86400 [ssl] enable = True certfile = /etc/keystone/ssl/certs/signing_cert.pem keyfile = /etc/keystone/ssl/private/signing_key.pem ca_certs = /etc/keystone/ssl/certs/ca.pem ca_key = /etc/keystone/ssl/certs/cakey.pem key_size = 1024 valid_days = 3650 cert_required = False cert_subject = /C=US/ST=Unset/L=Unset/O=Unset/CN=openstack-centos65 [signing] # Deprecated in favor of provider in the [token] section # Allowed values are PKI or UUID #token_format = PKI #certfile = /etc/keystone/pki/certs/signing_cert.pem #keyfile = /etc/keystone/pki/private/signing_key.pem #ca_certs = /etc/keystone/pki/certs/cacert.pem #ca_key = /etc/keystone/pki/private/cakey.pem #key_size = 2048 #valid_days = 3650 #cert_subject = /C=US/ST=Unset/L=Unset/O=Unset/CN=www.example.com [ldap] # url = ldap://localhost # user = dc=Manager,dc=example,dc=com # password = None # suffix = cn=example,cn=com # use_dumb_member = False # allow_subtree_delete = False # dumb_member = cn=dumb,dc=example,dc=com # Maximum results per page; a value of zero ('0') disables paging (default) # page_size = 0 # The LDAP dereferencing option for queries. This can be either 'never', # 'searching', 'always', 'finding' or 'default'. The 'default' option falls # back to using default dereferencing configured by your ldap.conf. # alias_dereferencing = default # The LDAP scope for queries, this can be either 'one' # (onelevel/singleLevel) or 'sub' (subtree/wholeSubtree) # query_scope = one # user_tree_dn = ou=Users,dc=example,dc=com # user_filter = # user_objectclass = inetOrgPerson # user_id_attribute = cn # user_name_attribute = sn # user_mail_attribute = email # user_pass_attribute = userPassword # user_enabled_attribute = enabled # user_enabled_mask = 0 # user_enabled_default = True # user_attribute_ignore = default_project_id,tenants # user_default_project_id_attribute = # user_allow_create = True # user_allow_update = True # user_allow_delete = True # user_enabled_emulation = False # user_enabled_emulation_dn = # tenant_tree_dn = ou=Projects,dc=example,dc=com # tenant_filter = # tenant_objectclass = groupOfNames # tenant_domain_id_attribute = businessCategory # tenant_id_attribute = cn # tenant_member_attribute = member # tenant_name_attribute = ou # tenant_desc_attribute = desc # tenant_enabled_attribute = enabled # tenant_attribute_ignore = # tenant_allow_create = True # tenant_allow_update = True # tenant_allow_delete = True # tenant_enabled_emulation = False # tenant_enabled_emulation_dn = # role_tree_dn = ou=Roles,dc=example,dc=com # role_filter = # role_objectclass = organizationalRole # role_id_attribute = cn # role_name_attribute = ou # role_member_attribute = roleOccupant # role_attribute_ignore = # role_allow_create = True # role_allow_update = True # role_allow_delete = True # group_tree_dn = # group_filter = # group_objectclass = groupOfNames # group_id_attribute = cn # group_name_attribute = ou # group_member_attribute = member # group_desc_attribute = desc # group_attribute_ignore = # group_allow_create = True # group_allow_update = True # group_allow_delete = True # ldap TLS options # if both tls_cacertfile and tls_cacertdir are set then # tls_cacertfile will be used and tls_cacertdir is ignored # valid options for tls_req_cert are demand, never, and allow # use_tls = False # tls_cacertfile = # tls_cacertdir = # tls_req_cert = demand # Additional 
attribute mappings can be used to map ldap attributes to internal # keystone attributes. This allows keystone to fulfill ldap objectclass # requirements. An example to map the description and gecos attributes to a # user's name would be: # user_additional_attribute_mapping = description:name, gecos:name # # domain_additional_attribute_mapping = # group_additional_attribute_mapping = # role_additional_attribute_mapping = # project_additional_attribute_mapping = # user_additional_attribute_mapping = [auth] methods = external,password,token,oauth1 #external = keystone.auth.plugins.external.ExternalDefault password = keystone.auth.plugins.password.Password token = keystone.auth.plugins.token.Token oauth1 = keystone.auth.plugins.oauth1.OAuth [paste_deploy] # Name of the paste configuration file that defines the available pipelines # config_file = /usr/share/keystone/keystone-dist-paste.ini From gareth at openstacker.org Sun Apr 13 14:13:52 2014 From: gareth at openstacker.org (Kun Huang) Date: Sun, 13 Apr 2014 22:13:52 +0800 Subject: [Rdo-list] RDO on multiple nodes In-Reply-To: <5344E17D.3080300@redhat.com> References: <5344E17D.3080300@redhat.com> Message-ID: btw, what will happen if I run "packstack --install-hosts=IP1,IP2,IP3"? Docs says the first node will be controller node and others compute nodes. But what about network? On Wed, Apr 9, 2014 at 1:58 PM, Matthias Runge wrote: > On 04/09/2014 07:53 AM, Kun Huang wrote: > >> Hi, >> >> Could RDO deploy OpenStack on multiple nodes now? If not, what's the >> current precess? >> >> > Hey, > > yes, that's possible. You'll find docs about this on [1]. > > Best, > Matthias > > [1] http://openstack.redhat.com/Install > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gareth at openstacker.org Sun Apr 13 19:57:49 2014 From: gareth at openstacker.org (Kun Huang) Date: Mon, 14 Apr 2014 03:57:49 +0800 Subject: [Rdo-list] [rdo] mysql access error when running packstack Message-ID: My issue is similar with http://openstack.redhat.com/forum/discussion/57/mysql-error-during-packstack-allinone-install/p1 But I have tried re-install CentOS, re-run --allinone and re-run --answer-file but it failed every time -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgordon at redhat.com Sun Apr 13 23:58:42 2014 From: sgordon at redhat.com (Steve Gordon) Date: Sun, 13 Apr 2014 19:58:42 -0400 (EDT) Subject: [Rdo-list] RDO on multiple nodes In-Reply-To: References: <5344E17D.3080300@redhat.com> Message-ID: <1216052676.5973928.1397433522374.JavaMail.zimbra@redhat.com> ----- Original Message ----- > btw, what will happen if I run "packstack --install-hosts=IP1,IP2,IP3"? > > Docs says the first node will be controller node and others compute nodes. > But what about network? It doesn't create a separate network node, it lands the network control services on the controller node and the L2 agent(s) on the compute nodes. -Steve > On Wed, Apr 9, 2014 at 1:58 PM, Matthias Runge wrote: > > > On 04/09/2014 07:53 AM, Kun Huang wrote: > > > >> Hi, > >> > >> Could RDO deploy OpenStack on multiple nodes now? If not, what's the > >> current precess? > >> > >> > > Hey, > > > > yes, that's possible. You'll find docs about this on [1]. 
> > > > Best, > > Matthias > > > > [1] http://openstack.redhat.com/Install > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -- Steve Gordon, RHCE Product Manager, Red Hat Enterprise Linux OpenStack Platform Red Hat Canada (Toronto, Ontario) From gareth at openstacker.org Mon Apr 14 03:48:21 2014 From: gareth at openstacker.org (Kun Huang) Date: Mon, 14 Apr 2014 11:48:21 +0800 Subject: [Rdo-list] [rdo] mysql access error when running packstack In-Reply-To: References: Message-ID: the log said access denied for root with xxxx password. That password is set by pre-installation of packstack, so if I don't uninstall it I can't access mysql with wrong password? On Mon, Apr 14, 2014 at 7:35 AM, Arash Kaffamanesh wrote: > Did you uninstall your initial RDO deployment? > > You can use the hammer uninstall script here: > > http://openstack.redhat.com/Uninstalling_RDO > > https://github.com/marafa/openstack/blob/master/openstack-hammer-uninstall.sh > > HTH, > -Arash > > > On Sun, Apr 13, 2014 at 9:57 PM, Kun Huang wrote: > >> My issue is similar with >> http://openstack.redhat.com/forum/discussion/57/mysql-error-during-packstack-allinone-install/p1 >> >> But I have tried re-install CentOS, re-run --allinone and re-run >> --answer-file but it failed every time >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Mon Apr 14 06:31:49 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 14 Apr 2014 12:01:49 +0530 Subject: [Rdo-list] RDO Forum OpenID In-Reply-To: <926416416.33675734.1394635277996.JavaMail.zimbra@redhat.com> References: <897145236.33674511.1394635218978.JavaMail.zimbra@redhat.com> <926416416.33675734.1394635277996.JavaMail.zimbra@redhat.com> Message-ID: <20140414062902.GA10258@tesla.redhat.com> On Wed, Mar 12, 2014 at 10:41:17AM -0400, Steve Gordon wrote: > Hi all, > > I haven't been able to access the RDO forum using my Google ID for > quite some time, when I click the button on > http://openstack.redhat.com/forum/entry/signin?Target=http://openstack.redhat.com/Main_Page > I get this: > > Provider is required. UniqueID is required. The connection data > has not been verified. > > Is this a known issue or am I just special? I see this again, when I try with Google ID. Also, when I try with OpenID, I see this: "Bonk Something funky happened. Please bear with us while we iron out the kinks." Others see this too? -- /kashyap From rcritten at redhat.com Mon Apr 14 13:02:52 2014 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 14 Apr 2014 09:02:52 -0400 Subject: [Rdo-list] Glance Image list not working after Keystone SSL setup In-Reply-To: References: Message-ID: <534BDC7C.2090007@redhat.com> Devendra Gupta wrote: > Hi, > > I have configured keystone to SSL and also update the endpoint in > service catalog. Keystone operations like endpoint/tenant list working > fine. I also update glance-api.conf and glance-registry.conf files > with ssl enabled keystone details but still glance is unable to find > images. 
It fails with following: > > [root at openstack-centos65 glance(keystone_admin)]# glance --insecure image-list > Request returned failure status. > Invalid OpenStack Identity credentials. > > Please see attached keystone.conf, glance-api.conf and > glance-registry.conf and debug output of glance image-list and > endpoint list. The auth_uri in glance-api.conf is wrong. It should be https://openstack-centos65:5000/v2.0 If you set cafile in that section you should be able to do this without --insecure, assuming that openstack-centos65 is the CN value in the certificate subject of the keystone server. The admin_tenant_name is usually singular, service rather than services, but it can vary by how you installed things. rob From rbowen at redhat.com Mon Apr 14 13:03:20 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 14 Apr 2014 09:03:20 -0400 Subject: [Rdo-list] RDO Forum OpenID In-Reply-To: <20140414062902.GA10258@tesla.redhat.com> References: <897145236.33674511.1394635218978.JavaMail.zimbra@redhat.com> <926416416.33675734.1394635277996.JavaMail.zimbra@redhat.com> <20140414062902.GA10258@tesla.redhat.com> Message-ID: <534BDC98.9070302@redhat.com> On 04/14/2014 02:31 AM, Kashyap Chamarthy wrote: > On Wed, Mar 12, 2014 at 10:41:17AM -0400, Steve Gordon wrote: >> Hi all, >> >> I haven't been able to access the RDO forum using my Google ID for >> quite some time, when I click the button on >> http://openstack.redhat.com/forum/entry/signin?Target=http://openstack.redhat.com/Main_Page >> I get this: >> >> Provider is required. UniqueID is required. The connection data >> has not been verified. >> >> Is this a known issue or am I just special? > I see this again, when I try with Google ID. > > Also, when I try with OpenID, I see this: > > "Bonk > Something funky happened. Please bear with us while we iron out the > kinks." > > Others see this too? > I'll have another look today. This was resolved for a while, but perhaps some more-recent update broke it again. :( --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rbowen at redhat.com Mon Apr 14 13:55:15 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 14 Apr 2014 09:55:15 -0400 Subject: [Rdo-list] Trove survey for summit In-Reply-To: <6A24921C-9E82-41DB-B15F-3B61955CEA9F@parelastic.com> References: <6A24921C-9E82-41DB-B15F-3B61955CEA9F@parelastic.com> Message-ID: <534BE8C3.6040305@redhat.com> If you are interested in databases with OpenStack, or with Trove specifically, and have a few minutes to fill out a survey to help these folks in their research, they're giving away $100 AMEX gift cards to one in every 100 completed surveys. --Rich -------- Original Message -------- Subject: [OpenStack Marketing] Can you help with this Date: Mon, 14 Apr 2014 13:30:57 +0000 From: Lori Cohen To: marketing at lists.openstack.org Hi, I wanted to introduce my company, Tesora. We are activity working on the Trove project, leveraging our deep database experience to increase the functionality of database as a service on OpenStack. As part of this effort, we are trying to gather information on how developers are using databases with OpenStack. We developed a short survey that we would appreciate if you could help us to promote. We intend to release the results at the OpenStack Summit with an infographic and white paper. Here is a link you can use. 
http://svy.mk/1ehTsqE

Thanks so much,
Lori

Lori Cohen | Marketing
Tesora
Direct: 617.922.7577
Check out our blog >
Twitter | Facebook | Linkedin | Google+

-------------- next part --------------
_______________________________________________
Marketing mailing list
Marketing at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/marketing

From dev29aug at gmail.com Mon Apr 14 14:32:59 2014
From: dev29aug at gmail.com (Devendra Gupta)
Date: Mon, 14 Apr 2014 20:02:59 +0530
Subject: [Rdo-list] Glance Image list not working after Keystone SSL setup
In-Reply-To: <534BDC7C.2090007@redhat.com>
References: <534BDC7C.2090007@redhat.com>
Message-ID: 

Thank you Rob.

Now glance image-list is working fine after adding "insecure=True" to
glance-api.conf and glance-registry.conf under the keystone_authtoken
section. I'll also try the approach suggested by Rob of adding the cafile
path.

I also set "insecure=True" for nova and neutron. Nova is working fine with
the SSL-enabled Keystone, but Neutron is still having a weird issue. I have
been Googling around it and I see lots of bugs related to the issue, but it
is not clear whether it's a bug or a config issue; I am trying some
workarounds but nothing seems to work.

When I try to run "neutron net-list", I see the error "Authentication
required". /etc/neutron/server.log shows the following lines when the
net-list command is executed:

2014-04-15 03:50:34.947 24843 INFO urllib3.connectionpool [-] Starting new HTTPS connection (1): openstack-centos65
2014-04-15 03:50:35.045 24843 WARNING keystoneclient.middleware.auth_token [-] Verify error: Command 'openssl' returned non-zero exit status 4
2014-04-15 03:50:35.048 24843 WARNING keystoneclient.middleware.auth_token [-] Authorization failed for token 19ecd7820e37141d83f5ff7339da6656
2014-04-15 03:50:35.050 24843 INFO keystoneclient.middleware.auth_token [-] Invalid user token - rejecting request

Neutron net-list --verbose output is attached. Please let me know your
inputs.

Regards,
Devendra Gupta

On Mon, Apr 14, 2014 at 6:32 PM, Rob Crittenden wrote:
> Devendra Gupta wrote:
>>
>> Hi,
>>
>> I have configured keystone to SSL and also update the endpoint in
>> service catalog. Keystone operations like endpoint/tenant list working
>> fine. I also update glance-api.conf and glance-registry.conf files
>> with ssl enabled keystone details but still glance is unable to find
>> images. It fails with following:
>>
>> [root at openstack-centos65 glance(keystone_admin)]# glance --insecure
>> image-list
>> Request returned failure status.
>> Invalid OpenStack Identity credentials.
>>
>> Please see attached keystone.conf, glance-api.conf and
>> glance-registry.conf and debug output of glance image-list and
>> endpoint list.
>
>
> The auth_uri in glance-api.conf is wrong. It should be
> https://openstack-centos65:5000/v2.0
>
> If you set cafile in that section you should be able to do this without
> --insecure, assuming that openstack-centos65 is the CN value in the
> certificate subject of the keystone server.
>
> The admin_tenant_name is usually singular, service rather than services, but
> it can vary by how you installed things.
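A minimal sketch, not from the original thread, assuming the CA file is the
ca_certs path shown in the attached keystone.conf and that the Keystone
admin URL matches the service catalog above (both values are illustrative).
The "Verify error" in server.log is logged by the auth_token middleware
inside neutron-server, which, as far as I can tell, fetches Keystone's
signing certificate and CA and then verifies PKI tokens with openssl, so a
first check is whether the Neutron host can reach the SSL-enabled Keystone
endpoint and validate its certificate at all:

import requests

# Illustrative values: verify=False would mirror the insecure=True
# workaround instead of proper certificate validation.
KEYSTONE_ADMIN = 'https://openstack-centos65:35357/v2.0/'
CA_BUNDLE = '/etc/keystone/ssl/certs/ca.pem'

resp = requests.get(KEYSTONE_ADMIN, verify=CA_BUNDLE)
print(resp.status_code)                 # 200 if the TLS chain validates
print(resp.json()['version']['id'])     # should print 'v2.0'

If this check fails, the middleware cannot fetch or trust anything from
Keystone over HTTPS; if it succeeds, a stale copy of the signing certificate
cached under the middleware's signing_dir is a common reason for the openssl
verify failure.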
> > rob -------------- next part -------------- [root at openstack-centos65 neutron(keystone_admin)]# neutron --verbose net-list DEBUG: neutronclient.neutron.v2_0.network.ListNetwork get_data(Namespace(columns=[], fields=[], formatter='table', page_size=None, quote_mode='nonnumeric', request_format='json', show_details=False, sort_dir=[], sort_key=[])) DEBUG: neutronclient.client REQ: curl -i https://openstack-centos65.bmc.com:35357/v2.0/tokens -X POST -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "c04eb3689edb4a9a"}}}' DEBUG: neutronclient.client RESP:{'date': 'Mon, 14 Apr 2014 22:13:43 GMT', 'vary': 'X-Auth-Token', 'content-length': '9079', 'status': '200', 'content-type': 'application/json'} {"access": {"token": {"issued_at": "2014-04-14T22:13:43.779283", "expires": "2014-04-15T22:13:43Z", "id": "MIIPwQYJKoZIhvcNAQcCoIIPsjCCD64CAQExCTAHBgUrDgMCGjCCDo0GCSqGSIb3DQEHAaCCDn4Egg56eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wNC0xNFQyMjoxMzo0My43NzkyODMiLCAiZXhwaXJlcyI6ICIyMDE0LTA0LTE1VDIyOjEzOjQzWiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogImFkbWluIHRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogImIzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgIm5hbWUiOiAiYWRtaW4ifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzQvdjIvYjMyNzMxM2ViZTVkNGJhMjlhMjZiZTQ2M2UxM2E0ZWMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc0L3YyL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgImlkIjogIjY3YjJhZWIxY2FiMjQ5Yjg5YzM2ZThkNjhjN2JhYTY3IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc0L3YyL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImNvbXB1dGUiLCAibmFtZSI6ICJub3ZhIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTY5Ni8iLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo5Njk2LyIsICJpZCI6ICIwNWE2MWM2NDM2NGM0MmNmODhhYjJkYjJkYmIxY2RlNSIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTY5Ni8ifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAibmV0d29yayIsICJuYW1lIjogIm5ldXRyb24ifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc2L3YyL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6ODc3Ni92Mi9iMzI3MzEzZWJlNWQ0YmEyOWEyNmJlNDYzZTEzYTRlYyIsICJpZCI6ICI3Y2FkYzE1ZjlhMGQ0YjYyYWQyMDRiNmIwNTE3ZjMyMCIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6ODc3Ni92Mi9iMzI3MzEzZWJlNWQ0YmEyOWEyNmJlNDYzZTEzYTRlYyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJ2b2x1bWV2MiIsICJuYW1lIjogImNpbmRlcl92MiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjgwODAiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4MDgwIiwgImlkIjogIjExOGJiYWQ3MTlhYjQyYWE4ZTYwZDg0NTMyYWFjZTA2IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4MDgwIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInMzIiwgIm5hbWUiOiAic3dpZnRfczMifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo5MjkyIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTI5MiIsICJpZCI6ICI0MTUzZDZlOGQ0NDE0MzM5YjU2Njk3MGRkN2U2YzI0YyIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTI5MiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpbWFnZSIsICJuYW1lIjogImdsYW5jZSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3
NzciLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc3IiwgImlkIjogIjRlNGJjZGJmNzZiOTQ1ZWY4MGRmMjIzMmZjZGFmNzFlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc3In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogIm1ldGVyaW5nIiwgIm5hbWUiOiAiY2VpbG9tZXRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzYvdjEvYjMyNzMxM2ViZTVkNGJhMjlhMjZiZTQ2M2UxM2E0ZWMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc2L3YxL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgImlkIjogIjRhZGQzMWIzM2M2MzRhMWU4ZDIwYTA3MzhjNmQ1ZjExIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc2L3YxL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZSIsICJuYW1lIjogImNpbmRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzMvc2VydmljZXMvQWRtaW4iLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4NzczL3NlcnZpY2VzL0Nsb3VkIiwgImlkIjogIjc3YWE0NWM5MWY3MDQzYmNhOWFiYWE0NmM4MzYzODJjIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4NzczL3NlcnZpY2VzL0Nsb3VkIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImVjMiIsICJuYW1lIjogIm5vdmFfZWMyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6ODA4MC8iLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4MDgwL3YxL0FVVEhfYjMyNzMxM2ViZTVkNGJhMjlhMjZiZTQ2M2UxM2E0ZWMiLCAiaWQiOiAiMjc5NWNjMTQzMTFjNGZmZThhZjE3OTI1OTY1Nzk1ZDYiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjgwODAvdjEvQVVUSF9iMzI3MzEzZWJlNWQ0YmEyOWEyNmJlNDYzZTEzYTRlYyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJvYmplY3Qtc3RvcmUiLCAibmFtZSI6ICJzd2lmdCJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHBzOi8vb3BlbnN0YWNrLWNlbnRvczY1LmJtYy5jb206MzUzNTcvdjIuMCIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHBzOi8vb3BlbnN0YWNrLWNlbnRvczY1LmJtYy5jb206NTAwMC92Mi4wIiwgImlkIjogIjU0YmMzYjRjNzM2ODQ3NDk4M2IxZTcyYTc0ZDIwYWIzIiwgInB1YmxpY1VSTCI6ICJodHRwczovL29wZW5zdGFjay1jZW50b3M2NS5ibWMuY29tOjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAiYWRtaW4iLCAicm9sZXNfbGlua3MiOiBbXSwgImlkIjogIjVlZGYyNWY1YTI5YTQ3NGViZTE2ODBiMmFiOGY4MTk1IiwgInJvbGVzIjogW3sibmFtZSI6ICJhZG1pbiJ9XSwgIm5hbWUiOiAiYWRtaW4ifSwgIm1ldGFkYXRhIjogeyJpc19hZG1pbiI6IDAsICJyb2xlcyI6IFsiYzM4ZWE0NDdlMmM4NDFiMmEzZDk4ZjI5NDU5YzQ5NzYiXX19fTGCAQswggEHAgEBMGcwYjELMAkGA1UEBhMCVVMxDjAMBgNVBAgMBVVuc2V0MQ4wDAYDVQQHDAVVbnNldDEOMAwGA1UECgwFVW5zZXQxIzAhBgNVBAMMGm9wZW5zdGFjay1jZW50b3M2NS5ibWMuY29tAgEBMAcGBSsOAwIaMA0GCSqGSIb3DQEBAQUABIGAgk1fLxQPn+8wON3K5BJ4h+DJSORWEyClP627UGeMENsirLKzzJXODtdIOPK7eAtxcVgtXi9n4MCVUGY+ucDlM7vqPjj97U04AUssudDXDxtIhvv-E63TXlJefoeMWWSU7QBwPhJ3Z4SErXVpsGn4-xCNHkkJfeZ18Sdm2+o+Ml8=", "tenant": {"description": "admin tenant", "enabled": true, "id": "b327313ebe5d4ba29a26be463e13a4ec", "name": "admin"}}, "serviceCatalog": [{"endpoints": [{"adminURL": "http://10.129.56.230:8774/v2/b327313ebe5d4ba29a26be463e13a4ec", "region": "RegionOne", "internalURL": "http://10.129.56.230:8774/v2/b327313ebe5d4ba29a26be463e13a4ec", "id": "67b2aeb1cab249b89c36e8d68c7baa67", "publicURL": "http://10.129.56.230:8774/v2/b327313ebe5d4ba29a26be463e13a4ec"}], "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": "http://10.129.56.230:9696/", "region": "RegionOne", "internalURL": "http://10.129.56.230:9696/", "id": "05a61c64364c42cf88ab2db2dbb1cde5", "publicURL": "http://10.129.56.230:9696/"}], "endpoints_links": [], "type": 
"network", "name": "neutron"}, {"endpoints": [{"adminURL": "http://10.129.56.230:8776/v2/b327313ebe5d4ba29a26be463e13a4ec", "region": "RegionOne", "internalURL": "http://10.129.56.230:8776/v2/b327313ebe5d4ba29a26be463e13a4ec", "id": "7cadc15f9a0d4b62ad204b6b0517f320", "publicURL": "http://10.129.56.230:8776/v2/b327313ebe5d4ba29a26be463e13a4ec"}], "endpoints_links": [], "type": "volumev2", "name": "cinder_v2"}, {"endpoints": [{"adminURL": "http://10.129.56.230:8080", "region": "RegionOne", "internalURL": "http://10.129.56.230:8080", "id": "118bbad719ab42aa8e60d84532aace06", "publicURL": "http://10.129.56.230:8080"}], "endpoints_links": [], "type": "s3", "name": "swift_s3"}, {"endpoints": [{"adminURL": "http://10.129.56.230:9292", "region": "RegionOne", "internalURL": "http://10.129.56.230:9292", "id": "4153d6e8d4414339b566970dd7e6c24c", "publicURL": "http://10.129.56.230:9292"}], "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": "http://10.129.56.230:8777", "region": "RegionOne", "internalURL": "http://10.129.56.230:8777", "id": "4e4bcdbf76b945ef80df2232fcdaf71e", "publicURL": "http://10.129.56.230:8777"}], "endpoints_links": [], "type": "metering", "name": "ceilometer"}, {"endpoints": [{"adminURL": "http://10.129.56.230:8776/v1/b327313ebe5d4ba29a26be463e13a4ec", "region": "RegionOne", "internalURL": "http://10.129.56.230:8776/v1/b327313ebe5d4ba29a26be463e13a4ec", "id": "4add31b33c634a1e8d20a0738c6d5f11", "publicURL": "http://10.129.56.230:8776/v1/b327313ebe5d4ba29a26be463e13a4ec"}], "endpoints_links": [], "type": "volume", "name": "cinder"}, {"endpoints": [{"adminURL": "http://10.129.56.230:8773/services/Admin", "region": "RegionOne", "internalURL": "http://10.129.56.230:8773/services/Cloud", "id": "77aa45c91f7043bca9abaa46c836382c", "publicURL": "http://10.129.56.230:8773/services/Cloud"}], "endpoints_links": [], "type": "ec2", "name": "nova_ec2"}, {"endpoints": [{"adminURL": "http://10.129.56.230:8080/", "region": "RegionOne", "internalURL": "http://10.129.56.230:8080/v1/AUTH_b327313ebe5d4ba29a26be463e13a4ec", "id": "2795cc14311c4ffe8af17925965795d6", "publicURL": "http://10.129.56.230:8080/v1/AUTH_b327313ebe5d4ba29a26be463e13a4ec"}], "endpoints_links": [], "type": "object-store", "name": "swift"}, {"endpoints": [{"adminURL": "https://openstack-centos65.bmc.com:35357/v2.0", "region": "RegionOne", "internalURL": "https://openstack-centos65.bmc.com:5000/v2.0", "id": "54bc3b4c7368474983b1e72a74d20ab3", "publicURL": "https://openstack-centos65.bmc.com:5000/v2.0"}], "endpoints_links": [], "type": "identity", "name": "keystone"}], "user": {"username": "admin", "roles_links": [], "id": "5edf25f5a29a474ebe1680b2ab8f8195", "roles": [{"name": "admin"}], "name": "admin"}, "metadata": {"is_admin": 0, "roles": ["c38ea447e2c841b2a3d98f29459c4976"]}}} DEBUG: neutronclient.client REQ: curl -i http://10.129.56.230:9696/v2.0/networks.json -X GET -H "X-Auth-Token: 
MIIPwQYJKoZIhvcNAQcCoIIPsjCCD64CAQExCTAHBgUrDgMCGjCCDo0GCSqGSIb3DQEHAaCCDn4Egg56eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wNC0xNFQyMjoxMzo0My43NzkyODMiLCAiZXhwaXJlcyI6ICIyMDE0LTA0LTE1VDIyOjEzOjQzWiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogImFkbWluIHRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogImIzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgIm5hbWUiOiAiYWRtaW4ifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzQvdjIvYjMyNzMxM2ViZTVkNGJhMjlhMjZiZTQ2M2UxM2E0ZWMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc0L3YyL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgImlkIjogIjY3YjJhZWIxY2FiMjQ5Yjg5YzM2ZThkNjhjN2JhYTY3IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc0L3YyL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImNvbXB1dGUiLCAibmFtZSI6ICJub3ZhIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTY5Ni8iLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo5Njk2LyIsICJpZCI6ICIwNWE2MWM2NDM2NGM0MmNmODhhYjJkYjJkYmIxY2RlNSIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTY5Ni8ifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAibmV0d29yayIsICJuYW1lIjogIm5ldXRyb24ifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc2L3YyL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6ODc3Ni92Mi9iMzI3MzEzZWJlNWQ0YmEyOWEyNmJlNDYzZTEzYTRlYyIsICJpZCI6ICI3Y2FkYzE1ZjlhMGQ0YjYyYWQyMDRiNmIwNTE3ZjMyMCIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6ODc3Ni92Mi9iMzI3MzEzZWJlNWQ0YmEyOWEyNmJlNDYzZTEzYTRlYyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJ2b2x1bWV2MiIsICJuYW1lIjogImNpbmRlcl92MiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjgwODAiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4MDgwIiwgImlkIjogIjExOGJiYWQ3MTlhYjQyYWE4ZTYwZDg0NTMyYWFjZTA2IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4MDgwIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInMzIiwgIm5hbWUiOiAic3dpZnRfczMifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo5MjkyIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTI5MiIsICJpZCI6ICI0MTUzZDZlOGQ0NDE0MzM5YjU2Njk3MGRkN2U2YzI0YyIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTI5MiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpbWFnZSIsICJuYW1lIjogImdsYW5jZSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzciLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc3IiwgImlkIjogIjRlNGJjZGJmNzZiOTQ1ZWY4MGRmMjIzMmZjZGFmNzFlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc3In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogIm1ldGVyaW5nIiwgIm5hbWUiOiAiY2VpbG9tZXRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzYvdjEvYjMyNzMxM2ViZTVkNGJhMjlhMjZiZTQ2M2UxM2E0ZWMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc2L3YxL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgImlkIjogIjRhZGQzMWIzM2M2MzRhMWU4ZDIwYTA3MzhjNmQ1ZjExIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc2L3YxL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZSIsICJuYW1lIjogImNpbmRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzMvc2VydmljZXMvQWRtaW4iLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTA
uMTI5LjU2LjIzMDo4NzczL3NlcnZpY2VzL0Nsb3VkIiwgImlkIjogIjc3YWE0NWM5MWY3MDQzYmNhOWFiYWE0NmM4MzYzODJjIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4NzczL3NlcnZpY2VzL0Nsb3VkIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImVjMiIsICJuYW1lIjogIm5vdmFfZWMyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6ODA4MC8iLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4MDgwL3YxL0FVVEhfYjMyNzMxM2ViZTVkNGJhMjlhMjZiZTQ2M2UxM2E0ZWMiLCAiaWQiOiAiMjc5NWNjMTQzMTFjNGZmZThhZjE3OTI1OTY1Nzk1ZDYiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjgwODAvdjEvQVVUSF9iMzI3MzEzZWJlNWQ0YmEyOWEyNmJlNDYzZTEzYTRlYyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJvYmplY3Qtc3RvcmUiLCAibmFtZSI6ICJzd2lmdCJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHBzOi8vb3BlbnN0YWNrLWNlbnRvczY1LmJtYy5jb206MzUzNTcvdjIuMCIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHBzOi8vb3BlbnN0YWNrLWNlbnRvczY1LmJtYy5jb206NTAwMC92Mi4wIiwgImlkIjogIjU0YmMzYjRjNzM2ODQ3NDk4M2IxZTcyYTc0ZDIwYWIzIiwgInB1YmxpY1VSTCI6ICJodHRwczovL29wZW5zdGFjay1jZW50b3M2NS5ibWMuY29tOjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAiYWRtaW4iLCAicm9sZXNfbGlua3MiOiBbXSwgImlkIjogIjVlZGYyNWY1YTI5YTQ3NGViZTE2ODBiMmFiOGY4MTk1IiwgInJvbGVzIjogW3sibmFtZSI6ICJhZG1pbiJ9XSwgIm5hbWUiOiAiYWRtaW4ifSwgIm1ldGFkYXRhIjogeyJpc19hZG1pbiI6IDAsICJyb2xlcyI6IFsiYzM4ZWE0NDdlMmM4NDFiMmEzZDk4ZjI5NDU5YzQ5NzYiXX19fTGCAQswggEHAgEBMGcwYjELMAkGA1UEBhMCVVMxDjAMBgNVBAgMBVVuc2V0MQ4wDAYDVQQHDAVVbnNldDEOMAwGA1UECgwFVW5zZXQxIzAhBgNVBAMMGm9wZW5zdGFjay1jZW50b3M2NS5ibWMuY29tAgEBMAcGBSsOAwIaMA0GCSqGSIb3DQEBAQUABIGAgk1fLxQPn+8wON3K5BJ4h+DJSORWEyClP627UGeMENsirLKzzJXODtdIOPK7eAtxcVgtXi9n4MCVUGY+ucDlM7vqPjj97U04AUssudDXDxtIhvv-E63TXlJefoeMWWSU7QBwPhJ3Z4SErXVpsGn4-xCNHkkJfeZ18Sdm2+o+Ml8=" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" DEBUG: neutronclient.client RESP:{'date': 'Mon, 14 Apr 2014 22:13:44 GMT', 'status': '401', 'content-length': '23', 'content-type': 'text/plain', 'www-authenticate': "Keystone uri='https://openstack-centos65.bmc.com:5000/'"} Authentication required DEBUG: neutronclient.client REQ: curl -i https://openstack-centos65.bmc.com:35357/v2.0/tokens -X POST -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "REDACTED"}}}' DEBUG: neutronclient.client RESP:{'date': 'Mon, 14 Apr 2014 22:13:44 GMT', 'vary': 'X-Auth-Token', 'content-length': '9079', 'status': '200', 'content-type': 'application/json'} {"access": {"token": {"issued_at": "2014-04-14T22:13:44.527629", "expires": "2014-04-15T22:13:44Z", "id": 
"MIIPwQYJKoZIhvcNAQcCoIIPsjCCD64CAQExCTAHBgUrDgMCGjCCDo0GCSqGSIb3DQEHAaCCDn4Egg56eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wNC0xNFQyMjoxMzo0NC41Mjc2MjkiLCAiZXhwaXJlcyI6ICIyMDE0LTA0LTE1VDIyOjEzOjQ0WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogImFkbWluIHRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogImIzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgIm5hbWUiOiAiYWRtaW4ifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzQvdjIvYjMyNzMxM2ViZTVkNGJhMjlhMjZiZTQ2M2UxM2E0ZWMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc0L3YyL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgImlkIjogIjY3YjJhZWIxY2FiMjQ5Yjg5YzM2ZThkNjhjN2JhYTY3IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc0L3YyL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImNvbXB1dGUiLCAibmFtZSI6ICJub3ZhIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTY5Ni8iLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo5Njk2LyIsICJpZCI6ICIwNWE2MWM2NDM2NGM0MmNmODhhYjJkYjJkYmIxY2RlNSIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTY5Ni8ifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAibmV0d29yayIsICJuYW1lIjogIm5ldXRyb24ifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc2L3YyL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6ODc3Ni92Mi9iMzI3MzEzZWJlNWQ0YmEyOWEyNmJlNDYzZTEzYTRlYyIsICJpZCI6ICI3Y2FkYzE1ZjlhMGQ0YjYyYWQyMDRiNmIwNTE3ZjMyMCIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6ODc3Ni92Mi9iMzI3MzEzZWJlNWQ0YmEyOWEyNmJlNDYzZTEzYTRlYyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJ2b2x1bWV2MiIsICJuYW1lIjogImNpbmRlcl92MiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjgwODAiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4MDgwIiwgImlkIjogIjExOGJiYWQ3MTlhYjQyYWE4ZTYwZDg0NTMyYWFjZTA2IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4MDgwIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInMzIiwgIm5hbWUiOiAic3dpZnRfczMifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo5MjkyIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTI5MiIsICJpZCI6ICI0MTUzZDZlOGQ0NDE0MzM5YjU2Njk3MGRkN2U2YzI0YyIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTI5MiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpbWFnZSIsICJuYW1lIjogImdsYW5jZSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzciLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc3IiwgImlkIjogIjRlNGJjZGJmNzZiOTQ1ZWY4MGRmMjIzMmZjZGFmNzFlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc3In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogIm1ldGVyaW5nIiwgIm5hbWUiOiAiY2VpbG9tZXRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzYvdjEvYjMyNzMxM2ViZTVkNGJhMjlhMjZiZTQ2M2UxM2E0ZWMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc2L3YxL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgImlkIjogIjRhZGQzMWIzM2M2MzRhMWU4ZDIwYTA3MzhjNmQ1ZjExIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc2L3YxL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZSIsICJuYW1lIjogImNpbmRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzMvc2VydmljZXMvQWRtaW4iLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMT
AuMTI5LjU2LjIzMDo4NzczL3NlcnZpY2VzL0Nsb3VkIiwgImlkIjogIjc3YWE0NWM5MWY3MDQzYmNhOWFiYWE0NmM4MzYzODJjIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4NzczL3NlcnZpY2VzL0Nsb3VkIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImVjMiIsICJuYW1lIjogIm5vdmFfZWMyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6ODA4MC8iLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4MDgwL3YxL0FVVEhfYjMyNzMxM2ViZTVkNGJhMjlhMjZiZTQ2M2UxM2E0ZWMiLCAiaWQiOiAiMjc5NWNjMTQzMTFjNGZmZThhZjE3OTI1OTY1Nzk1ZDYiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjgwODAvdjEvQVVUSF9iMzI3MzEzZWJlNWQ0YmEyOWEyNmJlNDYzZTEzYTRlYyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJvYmplY3Qtc3RvcmUiLCAibmFtZSI6ICJzd2lmdCJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHBzOi8vb3BlbnN0YWNrLWNlbnRvczY1LmJtYy5jb206MzUzNTcvdjIuMCIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHBzOi8vb3BlbnN0YWNrLWNlbnRvczY1LmJtYy5jb206NTAwMC92Mi4wIiwgImlkIjogIjU0YmMzYjRjNzM2ODQ3NDk4M2IxZTcyYTc0ZDIwYWIzIiwgInB1YmxpY1VSTCI6ICJodHRwczovL29wZW5zdGFjay1jZW50b3M2NS5ibWMuY29tOjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAiYWRtaW4iLCAicm9sZXNfbGlua3MiOiBbXSwgImlkIjogIjVlZGYyNWY1YTI5YTQ3NGViZTE2ODBiMmFiOGY4MTk1IiwgInJvbGVzIjogW3sibmFtZSI6ICJhZG1pbiJ9XSwgIm5hbWUiOiAiYWRtaW4ifSwgIm1ldGFkYXRhIjogeyJpc19hZG1pbiI6IDAsICJyb2xlcyI6IFsiYzM4ZWE0NDdlMmM4NDFiMmEzZDk4ZjI5NDU5YzQ5NzYiXX19fTGCAQswggEHAgEBMGcwYjELMAkGA1UEBhMCVVMxDjAMBgNVBAgMBVVuc2V0MQ4wDAYDVQQHDAVVbnNldDEOMAwGA1UECgwFVW5zZXQxIzAhBgNVBAMMGm9wZW5zdGFjay1jZW50b3M2NS5ibWMuY29tAgEBMAcGBSsOAwIaMA0GCSqGSIb3DQEBAQUABIGAhA5Fgx95H-D-XTObWB8+h9XdcXCzwjtVaDWJOXsxaKhEm+3kmGllUI50wjkWu-UJIDbEYANz5LKLzcQ9bMCFQEurKn5laac1b1NOGPx4IOlzE4W5QK8i+ziKh3u3b2nK2DwRIto7vQT++3ZEGMBNxWirCPChRcu1807rMM7MDdY=", "tenant": {"description": "admin tenant", "enabled": true, "id": "b327313ebe5d4ba29a26be463e13a4ec", "name": "admin"}}, "serviceCatalog": [{"endpoints": [{"adminURL": "http://10.129.56.230:8774/v2/b327313ebe5d4ba29a26be463e13a4ec", "region": "RegionOne", "internalURL": "http://10.129.56.230:8774/v2/b327313ebe5d4ba29a26be463e13a4ec", "id": "67b2aeb1cab249b89c36e8d68c7baa67", "publicURL": "http://10.129.56.230:8774/v2/b327313ebe5d4ba29a26be463e13a4ec"}], "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": "http://10.129.56.230:9696/", "region": "RegionOne", "internalURL": "http://10.129.56.230:9696/", "id": "05a61c64364c42cf88ab2db2dbb1cde5", "publicURL": "http://10.129.56.230:9696/"}], "endpoints_links": [], "type": "network", "name": "neutron"}, {"endpoints": [{"adminURL": "http://10.129.56.230:8776/v2/b327313ebe5d4ba29a26be463e13a4ec", "region": "RegionOne", "internalURL": "http://10.129.56.230:8776/v2/b327313ebe5d4ba29a26be463e13a4ec", "id": "7cadc15f9a0d4b62ad204b6b0517f320", "publicURL": "http://10.129.56.230:8776/v2/b327313ebe5d4ba29a26be463e13a4ec"}], "endpoints_links": [], "type": "volumev2", "name": "cinder_v2"}, {"endpoints": [{"adminURL": "http://10.129.56.230:8080", "region": "RegionOne", "internalURL": "http://10.129.56.230:8080", "id": "118bbad719ab42aa8e60d84532aace06", "publicURL": "http://10.129.56.230:8080"}], "endpoints_links": [], "type": "s3", "name": "swift_s3"}, {"endpoints": [{"adminURL": "http://10.129.56.230:9292", "region": "RegionOne", "internalURL": "http://10.129.56.230:9292", "id": "4153d6e8d4414339b566970dd7e6c24c", "publicURL": "http://10.129.56.230:9292"}], "endpoints_links": [], "type": "image", "name": "glance"}, 
{"endpoints": [{"adminURL": "http://10.129.56.230:8777", "region": "RegionOne", "internalURL": "http://10.129.56.230:8777", "id": "4e4bcdbf76b945ef80df2232fcdaf71e", "publicURL": "http://10.129.56.230:8777"}], "endpoints_links": [], "type": "metering", "name": "ceilometer"}, {"endpoints": [{"adminURL": "http://10.129.56.230:8776/v1/b327313ebe5d4ba29a26be463e13a4ec", "region": "RegionOne", "internalURL": "http://10.129.56.230:8776/v1/b327313ebe5d4ba29a26be463e13a4ec", "id": "4add31b33c634a1e8d20a0738c6d5f11", "publicURL": "http://10.129.56.230:8776/v1/b327313ebe5d4ba29a26be463e13a4ec"}], "endpoints_links": [], "type": "volume", "name": "cinder"}, {"endpoints": [{"adminURL": "http://10.129.56.230:8773/services/Admin", "region": "RegionOne", "internalURL": "http://10.129.56.230:8773/services/Cloud", "id": "77aa45c91f7043bca9abaa46c836382c", "publicURL": "http://10.129.56.230:8773/services/Cloud"}], "endpoints_links": [], "type": "ec2", "name": "nova_ec2"}, {"endpoints": [{"adminURL": "http://10.129.56.230:8080/", "region": "RegionOne", "internalURL": "http://10.129.56.230:8080/v1/AUTH_b327313ebe5d4ba29a26be463e13a4ec", "id": "2795cc14311c4ffe8af17925965795d6", "publicURL": "http://10.129.56.230:8080/v1/AUTH_b327313ebe5d4ba29a26be463e13a4ec"}], "endpoints_links": [], "type": "object-store", "name": "swift"}, {"endpoints": [{"adminURL": "https://openstack-centos65.bmc.com:35357/v2.0", "region": "RegionOne", "internalURL": "https://openstack-centos65.bmc.com:5000/v2.0", "id": "54bc3b4c7368474983b1e72a74d20ab3", "publicURL": "https://openstack-centos65.bmc.com:5000/v2.0"}], "endpoints_links": [], "type": "identity", "name": "keystone"}], "user": {"username": "admin", "roles_links": [], "id": "5edf25f5a29a474ebe1680b2ab8f8195", "roles": [{"name": "admin"}], "name": "admin"}, "metadata": {"is_admin": 0, "roles": ["c38ea447e2c841b2a3d98f29459c4976"]}}} DEBUG: neutronclient.client REQ: curl -i http://10.129.56.230:9696/v2.0/networks.json -X GET -H "X-Auth-Token: 
MIIPwQYJKoZIhvcNAQcCoIIPsjCCD64CAQExCTAHBgUrDgMCGjCCDo0GCSqGSIb3DQEHAaCCDn4Egg56eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wNC0xNFQyMjoxMzo0NC41Mjc2MjkiLCAiZXhwaXJlcyI6ICIyMDE0LTA0LTE1VDIyOjEzOjQ0WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogImFkbWluIHRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogImIzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgIm5hbWUiOiAiYWRtaW4ifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzQvdjIvYjMyNzMxM2ViZTVkNGJhMjlhMjZiZTQ2M2UxM2E0ZWMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc0L3YyL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgImlkIjogIjY3YjJhZWIxY2FiMjQ5Yjg5YzM2ZThkNjhjN2JhYTY3IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc0L3YyL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImNvbXB1dGUiLCAibmFtZSI6ICJub3ZhIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTY5Ni8iLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo5Njk2LyIsICJpZCI6ICIwNWE2MWM2NDM2NGM0MmNmODhhYjJkYjJkYmIxY2RlNSIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTY5Ni8ifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAibmV0d29yayIsICJuYW1lIjogIm5ldXRyb24ifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc2L3YyL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6ODc3Ni92Mi9iMzI3MzEzZWJlNWQ0YmEyOWEyNmJlNDYzZTEzYTRlYyIsICJpZCI6ICI3Y2FkYzE1ZjlhMGQ0YjYyYWQyMDRiNmIwNTE3ZjMyMCIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6ODc3Ni92Mi9iMzI3MzEzZWJlNWQ0YmEyOWEyNmJlNDYzZTEzYTRlYyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJ2b2x1bWV2MiIsICJuYW1lIjogImNpbmRlcl92MiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjgwODAiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4MDgwIiwgImlkIjogIjExOGJiYWQ3MTlhYjQyYWE4ZTYwZDg0NTMyYWFjZTA2IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4MDgwIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInMzIiwgIm5hbWUiOiAic3dpZnRfczMifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo5MjkyIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTI5MiIsICJpZCI6ICI0MTUzZDZlOGQ0NDE0MzM5YjU2Njk3MGRkN2U2YzI0YyIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6OTI5MiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpbWFnZSIsICJuYW1lIjogImdsYW5jZSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzciLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc3IiwgImlkIjogIjRlNGJjZGJmNzZiOTQ1ZWY4MGRmMjIzMmZjZGFmNzFlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc3In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogIm1ldGVyaW5nIiwgIm5hbWUiOiAiY2VpbG9tZXRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzYvdjEvYjMyNzMxM2ViZTVkNGJhMjlhMjZiZTQ2M2UxM2E0ZWMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc2L3YxL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIiwgImlkIjogIjRhZGQzMWIzM2M2MzRhMWU4ZDIwYTA3MzhjNmQ1ZjExIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4Nzc2L3YxL2IzMjczMTNlYmU1ZDRiYTI5YTI2YmU0NjNlMTNhNGVjIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZSIsICJuYW1lIjogImNpbmRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjg3NzMvc2VydmljZXMvQWRtaW4iLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTA
uMTI5LjU2LjIzMDo4NzczL3NlcnZpY2VzL0Nsb3VkIiwgImlkIjogIjc3YWE0NWM5MWY3MDQzYmNhOWFiYWE0NmM4MzYzODJjIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4NzczL3NlcnZpY2VzL0Nsb3VkIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImVjMiIsICJuYW1lIjogIm5vdmFfZWMyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEwLjEyOS41Ni4yMzA6ODA4MC8iLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMTI5LjU2LjIzMDo4MDgwL3YxL0FVVEhfYjMyNzMxM2ViZTVkNGJhMjlhMjZiZTQ2M2UxM2E0ZWMiLCAiaWQiOiAiMjc5NWNjMTQzMTFjNGZmZThhZjE3OTI1OTY1Nzk1ZDYiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMC4xMjkuNTYuMjMwOjgwODAvdjEvQVVUSF9iMzI3MzEzZWJlNWQ0YmEyOWEyNmJlNDYzZTEzYTRlYyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJvYmplY3Qtc3RvcmUiLCAibmFtZSI6ICJzd2lmdCJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHBzOi8vb3BlbnN0YWNrLWNlbnRvczY1LmJtYy5jb206MzUzNTcvdjIuMCIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHBzOi8vb3BlbnN0YWNrLWNlbnRvczY1LmJtYy5jb206NTAwMC92Mi4wIiwgImlkIjogIjU0YmMzYjRjNzM2ODQ3NDk4M2IxZTcyYTc0ZDIwYWIzIiwgInB1YmxpY1VSTCI6ICJodHRwczovL29wZW5zdGFjay1jZW50b3M2NS5ibWMuY29tOjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAiYWRtaW4iLCAicm9sZXNfbGlua3MiOiBbXSwgImlkIjogIjVlZGYyNWY1YTI5YTQ3NGViZTE2ODBiMmFiOGY4MTk1IiwgInJvbGVzIjogW3sibmFtZSI6ICJhZG1pbiJ9XSwgIm5hbWUiOiAiYWRtaW4ifSwgIm1ldGFkYXRhIjogeyJpc19hZG1pbiI6IDAsICJyb2xlcyI6IFsiYzM4ZWE0NDdlMmM4NDFiMmEzZDk4ZjI5NDU5YzQ5NzYiXX19fTGCAQswggEHAgEBMGcwYjELMAkGA1UEBhMCVVMxDjAMBgNVBAgMBVVuc2V0MQ4wDAYDVQQHDAVVbnNldDEOMAwGA1UECgwFVW5zZXQxIzAhBgNVBAMMGm9wZW5zdGFjay1jZW50b3M2NS5ibWMuY29tAgEBMAcGBSsOAwIaMA0GCSqGSIb3DQEBAQUABIGAhA5Fgx95H-D-XTObWB8+h9XdcXCzwjtVaDWJOXsxaKhEm+3kmGllUI50wjkWu-UJIDbEYANz5LKLzcQ9bMCFQEurKn5laac1b1NOGPx4IOlzE4W5QK8i+ziKh3u3b2nK2DwRIto7vQT++3ZEGMBNxWirCPChRcu1807rMM7MDdY=" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" DEBUG: neutronclient.client RESP:{'date': 'Mon, 14 Apr 2014 22:13:44 GMT', 'status': '401', 'content-length': '23', 'content-type': 'text/plain', 'www-authenticate': "Keystone uri='https://openstack-centos65.bmc.com:5000/'"} Authentication required ERROR: neutronclient.shell Authentication required DEBUG: neutronclient.shell clean_up ListNetwork DEBUG: neutronclient.shell got an error: Authentication required From kchamart at redhat.com Tue Apr 15 05:21:24 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 15 Apr 2014 10:51:24 +0530 Subject: [Rdo-list] RDO Forum OpenID In-Reply-To: <534BDC98.9070302@redhat.com> References: <897145236.33674511.1394635218978.JavaMail.zimbra@redhat.com> <926416416.33675734.1394635277996.JavaMail.zimbra@redhat.com> <20140414062902.GA10258@tesla.redhat.com> <534BDC98.9070302@redhat.com> Message-ID: <20140415052124.GD10258@tesla.redhat.com> On Mon, Apr 14, 2014 at 09:03:20AM -0400, Rich Bowen wrote: > > On 04/14/2014 02:31 AM, Kashyap Chamarthy wrote: > >On Wed, Mar 12, 2014 at 10:41:17AM -0400, Steve Gordon wrote: > >>Hi all, > >> > >>I haven't been able to access the RDO forum using my Google ID for > >>quite some time, when I click the button on > >>http://openstack.redhat.com/forum/entry/signin?Target=http://openstack.redhat.com/Main_Page > >>I get this: > >> > >> Provider is required. UniqueID is required. The connection data > >> has not been verified. > >> > >>Is this a known issue or am I just special? > >I see this again, when I try with Google ID. > > > >Also, when I try with OpenID, I see this: > > > > "Bonk > > Something funky happened. 
Please bear with us while we iron out the > > kinks." > > > >Others see this too? > > > > I'll have another look today. This was resolved for a while, but > perhaps some more-recent update broke it again. :( Michael Scherer (CC'ed) tried to debug a little bit yesterday. Maybe you might want to coordinate with you if you haven't already. -- /kashyap From jamesgreenbdfedfegd at gmail.com Tue Apr 15 13:09:16 2014 From: jamesgreenbdfedfegd at gmail.com (james green) Date: Tue, 15 Apr 2014 15:09:16 +0200 Subject: [Rdo-list] From G. James , Details Attached Message-ID: Inquiry From G. James , Details Attached ENERGY & MINERAL RESOURCES JOHANNESBURG,SOUTH AFRICA. Yours Faithfully, Mr. James Green -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: INQUIRY FROM G .JAMES, Details Attached.doc Type: application/msword Size: 24576 bytes Desc: not available URL: From kchamart at redhat.com Tue Apr 15 14:07:25 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 15 Apr 2014 19:37:25 +0530 Subject: [Rdo-list] RDO Forum OpenID In-Reply-To: <20140415052124.GD10258@tesla.redhat.com> References: <897145236.33674511.1394635218978.JavaMail.zimbra@redhat.com> <926416416.33675734.1394635277996.JavaMail.zimbra@redhat.com> <20140414062902.GA10258@tesla.redhat.com> <534BDC98.9070302@redhat.com> <20140415052124.GD10258@tesla.redhat.com> Message-ID: <20140415140725.GF10258@tesla.redhat.com> On Tue, Apr 15, 2014 at 10:51:24AM +0530, Kashyap Chamarthy wrote: > On Mon, Apr 14, 2014 at 09:03:20AM -0400, Rich Bowen wrote: > > > > On 04/14/2014 02:31 AM, Kashyap Chamarthy wrote: > > >On Wed, Mar 12, 2014 at 10:41:17AM -0400, Steve Gordon wrote: > > >>Hi all, > > >> > > >>I haven't been able to access the RDO forum using my Google ID for > > >>quite some time, when I click the button on > > >>http://openstack.redhat.com/forum/entry/signin?Target=http://openstack.redhat.com/Main_Page > > >>I get this: > > >> > > >> Provider is required. UniqueID is required. The connection data > > >> has not been verified. > > >> > > >>Is this a known issue or am I just special? > > >I see this again, when I try with Google ID. > > > > > >Also, when I try with OpenID, I see this: > > > > > > "Bonk > > > Something funky happened. Please bear with us while we iron out the > > > kinks." > > > > > >Others see this too? > > > > > > > I'll have another look today. This was resolved for a while, but > > perhaps some more-recent update broke it again. :( > > Michael Scherer (CC'ed) tried to debug a little bit yesterday. Maybe you > might want to coordinate with you if you haven't already. OpenID works now, thanks dnear and mscherer. -- /kashyap From rbowen at redhat.com Tue Apr 15 14:38:04 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 15 Apr 2014 10:38:04 -0400 (EDT) Subject: [Rdo-list] RDO Forum OpenID In-Reply-To: <20140415140725.GF10258@tesla.redhat.com> References: <897145236.33674511.1394635218978.JavaMail.zimbra@redhat.com> <926416416.33675734.1394635277996.JavaMail.zimbra@redhat.com> <20140414062902.GA10258@tesla.redhat.com> <534BDC98.9070302@redhat.com> <20140415052124.GD10258@tesla.redhat.com> <20140415140725.GF10258@tesla.redhat.com> Message-ID: <64883969.3721698.1397572684164.JavaMail.zimbra@redhat.com> > > > > > >Also, when I try with OpenID, I see this: > > > > > > "Bonk > > > Something funky happened. Please bear with us while we iron out the > > > kinks." > > > > > >Others see this too? 
> > > > > > > I'll have another look today. This was resolved for a while, but > > perhaps some more-recent update broke it again. :( > > Michael Scherer (CC'ed) tried to debug a little bit yesterday. Maybe you > might want to coordinate with you if you haven't already. OpenID works now, thanks dnear and mscherer. So, what did the solution end up being? From dneary at redhat.com Tue Apr 15 16:13:11 2014 From: dneary at redhat.com (Dave Neary) Date: Tue, 15 Apr 2014 18:13:11 +0200 Subject: [Rdo-list] RDO Forum OpenID In-Reply-To: <64883969.3721698.1397572684164.JavaMail.zimbra@redhat.com> References: <897145236.33674511.1394635218978.JavaMail.zimbra@redhat.com> <926416416.33675734.1394635277996.JavaMail.zimbra@redhat.com> <20140414062902.GA10258@tesla.redhat.com> <534BDC98.9070302@redhat.com> <20140415052124.GD10258@tesla.redhat.com> <20140415140725.GF10258@tesla.redhat.com> <64883969.3721698.1397572684164.JavaMail.zimbra@redhat.com> Message-ID: <534D5A97.3040700@redhat.com> Hi, On 04/15/2014 04:38 PM, Rich Bowen wrote: >> OpenID works now, thanks dnear and mscherer. > > > So, what did the solution end up being? Misc moved OpenID_BAK out of the vanilla/plugins directory, which removed OpenID altogether, and then I changed $Configuration['EnabledPlugins']['OpenID_BAK'] = TRUE; to $Configuration['EnabledPlugins']['OpenID'] = TRUE; in vanilla/conf/config.php However, Google ID log-in still isn't working. Cheers, Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From rbowen at redhat.com Tue Apr 15 17:17:06 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 15 Apr 2014 13:17:06 -0400 (EDT) Subject: [Rdo-list] RDO Forum OpenID In-Reply-To: <534D5A97.3040700@redhat.com> References: <897145236.33674511.1394635218978.JavaMail.zimbra@redhat.com> <926416416.33675734.1394635277996.JavaMail.zimbra@redhat.com> <20140414062902.GA10258@tesla.redhat.com> <534BDC98.9070302@redhat.com> <20140415052124.GD10258@tesla.redhat.com> <20140415140725.GF10258@tesla.redhat.com> <64883969.3721698.1397572684164.JavaMail.zimbra@redhat.com> <534D5A97.3040700@redhat.com> Message-ID: <1973729907.3774550.1397582226417.JavaMail.zimbra@redhat.com> It's working fine for me. What's happening when you try? --Rich ----- Original Message ----- From: "Dave Neary" To: rdo-list at redhat.com Sent: Tuesday, April 15, 2014 12:13:11 PM Subject: Re: [Rdo-list] RDO Forum OpenID Hi, On 04/15/2014 04:38 PM, Rich Bowen wrote: >> OpenID works now, thanks dnear and mscherer. > > > So, what did the solution end up being? Misc moved OpenID_BAK out of the vanilla/plugins directory, which removed OpenID altogether, and then I changed $Configuration['EnabledPlugins']['OpenID_BAK'] = TRUE; to $Configuration['EnabledPlugins']['OpenID'] = TRUE; in vanilla/conf/config.php However, Google ID log-in still isn't working. Cheers, Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list From rbowen at redhat.com Tue Apr 15 18:10:28 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 15 Apr 2014 14:10:28 -0400 (EDT) Subject: [Rdo-list] Happy birthday RDO! 
In-Reply-To: <912724070.3791458.1397585405005.JavaMail.zimbra@redhat.com> Message-ID: <594921423.3791493.1397585428230.JavaMail.zimbra@redhat.com> (From http://community.redhat.com/blog/2014/04/happy-birthday-to-rdo/ ) A year ago today -- April 15, 2013 -- we announced the RDO effort. We started the project in an effort to make it less painful to deploy an OpenStack cloud on CentOS, Fedora, or Red Hat Enterprise Linux. A lot has changed over the past year, but our mission remains the same. We've grown from a handful of people to more than 2,000 people registered on the forum and wiki. We've also got a following on Twitter and Google+, and an active group of experts fielding questions on the ask.openstack.org Q&A website. In that time, Red Hat participation in the OpenStack project has grown, too, with Red Hat being the top contributor to the OpenStack code in the Folsom, Grizzly, Havana and (so far) Icehouse releases. In the coming year, we're looking forward to Juno and whatever the K release ends up being called, and the innovation around cloud computing. We're hoping for -- and working toward -- big strides in ease of deployment and configuration, better metering, and all sorts of other improvements. Want to get the latest on RDO? Follow us on Twitter @RDOCommunity. Come join us as we help create the future of cloud computing. From dneary at redhat.com Wed Apr 16 08:38:32 2014 From: dneary at redhat.com (Dave Neary) Date: Wed, 16 Apr 2014 10:38:32 +0200 Subject: [Rdo-list] RDO Forum OpenID In-Reply-To: <1973729907.3774550.1397582226417.JavaMail.zimbra@redhat.com> References: <897145236.33674511.1394635218978.JavaMail.zimbra@redhat.com> <926416416.33675734.1394635277996.JavaMail.zimbra@redhat.com> <20140414062902.GA10258@tesla.redhat.com> <534BDC98.9070302@redhat.com> <20140415052124.GD10258@tesla.redhat.com> <20140415140725.GF10258@tesla.redhat.com> <64883969.3721698.1397572684164.JavaMail.zimbra@redhat.com> <534D5A97.3040700@redhat.com> <1973729907.3774550.1397582226417.JavaMail.zimbra@redhat.com> Message-ID: <534E4188.7090401@redhat.com> Hi, I get taken to a page where I am asked to input an OpenID identifier URL. Dave. On 04/15/2014 07:17 PM, Rich Bowen wrote: > It's working fine for me. What's happening when you try? > > --Rich > > > > ----- Original Message ----- > From: "Dave Neary" > To: rdo-list at redhat.com > Sent: Tuesday, April 15, 2014 12:13:11 PM > Subject: Re: [Rdo-list] RDO Forum OpenID > > Hi, > > On 04/15/2014 04:38 PM, Rich Bowen wrote: >>> OpenID works now, thanks dnear and mscherer. >> >> >> So, what did the solution end up being? > > Misc moved OpenID_BAK out of the vanilla/plugins directory, which > removed OpenID altogether, and then I changed > > $Configuration['EnabledPlugins']['OpenID_BAK'] = TRUE; > > to > > $Configuration['EnabledPlugins']['OpenID'] = TRUE; > > in vanilla/conf/config.php > > However, Google ID log-in still isn't working. > > Cheers, > Dave.
> -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From pbrady at redhat.com Wed Apr 16 10:13:14 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Wed, 16 Apr 2014 11:13:14 +0100 Subject: [Rdo-list] [package announce] Stable Havana 2013.2.3 update Message-ID: <534E57BA.7010809@redhat.com> The RDO Havana repositories were updated with the latest stable 2013.2.3 update Details of the changes can be drilled down to from: https://launchpad.net/nova/havana/2013.2.3 https://launchpad.net/glance/havana/2013.2.3 https://launchpad.net/horizon/havana/2013.2.3 https://launchpad.net/keystone/havana/2013.2.3 https://launchpad.net/cinder/havana/2013.2.3 https://launchpad.net/neutron/havana/2013.2.3 https://launchpad.net/ceilometer/havana/2013.2.3 https://launchpad.net/heat/havana/2013.2.3 In addition there have been these client rebases: python-keystoneclient from 0.4.1 to 0.7.1 python-neutronclient from 2.3.1 to 2.3.4 thanks, P?draig. From ALONMA at il.ibm.com Wed Apr 16 19:04:25 2014 From: ALONMA at il.ibm.com (Alon Marx) Date: Wed, 16 Apr 2014 22:04:25 +0300 Subject: [Rdo-list] AUTO: Alon Marx is out of the office (returning 04/20/2014) Message-ID: I am out of the office until 04/20/2014. I will be out of the office August 9 to August 12. I will have no email connectivity and no phone connectivity. For any urgent matters please contact Ohad Atia. Note: This is an automated response to your message "Rdo-list Digest, Vol 13, Issue 13" sent on 16/04/2014 19:00:02. This is the only notification you will receive while this person is away. From rdo-info at redhat.com Wed Apr 16 19:38:27 2014 From: rdo-info at redhat.com (RDO Forum) Date: Wed, 16 Apr 2014 19:38:27 +0000 Subject: [Rdo-list] [RDO] Stable Havana 2013.2.3 update Message-ID: <000001456c0c29ad-cdf671f9-82db-4198-beae-a43de5b6ecce-000000@email.amazonses.com> rbowen started a discussion. Stable Havana 2013.2.3 update --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/972/stable-havana-2013-2-3-update Have a great day! From rdo-info at redhat.com Thu Apr 17 03:46:20 2014 From: rdo-info at redhat.com (RDO Forum) Date: Thu, 17 Apr 2014 03:46:20 +0000 Subject: [Rdo-list] [RDO] The Giants are also hanging tight, although it's getting late for them to be more than a game or Message-ID: <000001456dcad588-b08f61db-ecf8-4223-985e-327509912884-000000@email.amazonses.com> FMarrero started a discussion. The Giants are also hanging tight, although it's getting late for them to be more than a game or --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/973/the-giants-are-also-hanging-tight-although-its-getting-late-for-them-to-be-more-than-a-game-or Have a great day! From rbowen at rcbowen.com Thu Apr 17 13:19:11 2014 From: rbowen at rcbowen.com (Rich Bowen) Date: Thu, 17 Apr 2014 09:19:11 -0400 Subject: [Rdo-list] Icehouse update webcast Message-ID: <534FD4CF.1000309@rcbowen.com> As you no doubt know, Icehouse has been released today. 
There's going to be a webcast later today covering what's new in Icehouse, and talking about the user survey results: https://www.brighttalk.com/webcast/499/107965 --RIch -- Rich Bowen - rbowen at rcbowen.com - @rbowen http://apachecon.com/ - @apachecon From kchamart at redhat.com Mon Apr 21 11:30:15 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 21 Apr 2014 17:00:15 +0530 Subject: [Rdo-list] Notes for setting up HA of OpenStack services with RDO Message-ID: <20140421113015.GB29524@tesla.redhat.com> Heya, I'm trying to write some docs for setting up High Availability (HA) with RDO. I should note that Fabio Dinitto did most of the work related to HA and he has some notes here[1][2]. I'm trying to make them useful for RDO users. Deriving from Fabio's work, I've posted Nova, Neutron, Cinder, Glance, Keystone docs here[3][4][5][6][7]. AIUI, these configurations are primarily based on Havana-based setup, and may need some further tweaks if these are being used for IceHouse-based setups. Fabio and other folks who have more HA knowledge, please comment and correct me if I said something wrong. [1] https://github.com/fabbione/rhos-ha-deploy.git [2] http://openstack.redhat.com/HA_Architecture [3] http://openstack.redhat.com/Setting-up-HA-of-Nova [4] http://openstack.redhat.com/Setting-up-HA-of-Neutron [5] http://openstack.redhat.com/Setting-up-HA-of-Cinder [6] http://openstack.redhat.com/Setting-up-HA-of-Glance [7] http://openstack.redhat.com/Setting-up-HA-of-Keystone -- /kashyap From rohara at redhat.com Mon Apr 21 13:37:47 2014 From: rohara at redhat.com (Ryan O'Hara) Date: Mon, 21 Apr 2014 08:37:47 -0500 Subject: [Rdo-list] Notes for setting up HA of OpenStack services with RDO In-Reply-To: <20140421113015.GB29524@tesla.redhat.com> References: <20140421113015.GB29524@tesla.redhat.com> Message-ID: <20140421133747.GA32137@redhat.com> On Mon, Apr 21, 2014 at 05:00:15PM +0530, Kashyap Chamarthy wrote: > Heya, > > I'm trying to write some docs for setting up High Availability (HA) with > RDO. I should note that Fabio Dinitto did most of the work related to > HA and he has some notes here[1][2]. > > I'm trying to make them useful for RDO users. Deriving from Fabio's > work, I've posted Nova, Neutron, Cinder, Glance, Keystone docs > here[3][4][5][6][7]. AIUI, these configurations are primarily based on > Havana-based setup, and may need some further tweaks if these are being > used for IceHouse-based setups. > > Fabio and other folks who have more HA knowledge, please comment and > correct me if I said something wrong. Are you intentionally leaving out HAProxy? This was a core component of the HA architecture we came up with. 
Ryan > [1] https://github.com/fabbione/rhos-ha-deploy.git > [2] http://openstack.redhat.com/HA_Architecture > [3] http://openstack.redhat.com/Setting-up-HA-of-Nova > [4] http://openstack.redhat.com/Setting-up-HA-of-Neutron > [5] http://openstack.redhat.com/Setting-up-HA-of-Cinder > [6] http://openstack.redhat.com/Setting-up-HA-of-Glance > [7] http://openstack.redhat.com/Setting-up-HA-of-Keystone > > > -- > /kashyap > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From kchamart at redhat.com Mon Apr 21 13:49:46 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 21 Apr 2014 19:19:46 +0530 Subject: [Rdo-list] Notes for setting up HA of OpenStack services with RDO In-Reply-To: <20140421133747.GA32137@redhat.com> References: <20140421113015.GB29524@tesla.redhat.com> <20140421133747.GA32137@redhat.com> Message-ID: <20140421134946.GB14219@tesla.redhat.com> On Mon, Apr 21, 2014 at 08:37:47AM -0500, Ryan O'Hara wrote: > On Mon, Apr 21, 2014 at 05:00:15PM +0530, Kashyap Chamarthy wrote: > > Heya, > > > > I'm trying to write some docs for setting up High Availability (HA) with > > RDO. I should note that Fabio Dinitto did most of the work related to > > HA and he has some notes here[1][2]. > > > > I'm trying to make them useful for RDO users. Deriving from Fabio's > > work, I've posted Nova, Neutron, Cinder, Glance, Keystone docs > > here[3][4][5][6][7]. AIUI, these configurations are primarily based on > > Havana-based setup, and may need some further tweaks if these are being > > used for IceHouse-based setups. > > > > Fabio and other folks who have more HA knowledge, please comment and > > correct me if I said something wrong. > > Are you intentionally leaving out HAProxy? No, I didn't. It's just an in-progress work, so it'll be added too. By HAProxy I presume you're referring to this[1]. It'll also be added to the wikis. I'll try to make a high-level document which will include references to all the wiki pages. /kashyap > This was a core component > of the HA architecture we came up with. [1] https://github.com/fabbione/rhos-ha-deploy/blob/master/rhos4/mrgcloud-setup/RHOS-RHEL-HA-how-to-mrgcloud-lb-latest.txt -- /kashyap From kchamart at redhat.com Mon Apr 21 13:51:28 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 21 Apr 2014 19:21:28 +0530 Subject: [Rdo-list] Notes for setting up HA of OpenStack services with RDO In-Reply-To: <53551BB3.5000703@redhat.com> References: <20140421113015.GB29524@tesla.redhat.com> <53551BB3.5000703@redhat.com> Message-ID: <20140421135128.GC14219@tesla.redhat.com> On Mon, Apr 21, 2014 at 03:22:59PM +0200, Fabio M. Di Nitto wrote: > On 04/21/2014 01:30 PM, Kashyap Chamarthy wrote: > > Heya, > > > > I'm trying to write some docs for setting up High Availability (HA) with > > RDO. I should note that Fabio Dinitto did most of the work related to > > HA and he has some notes here[1][2]. > > > > I'm trying to make them useful for RDO users. Deriving from Fabio's > > work, I've posted Nova, Neutron, Cinder, Glance, Keystone docs > > here[3][4][5][6][7]. AIUI, these configurations are primarily based on > > Havana-based setup, and may need some further tweaks if these are being > > used for IceHouse-based setups. [. . .] > Icehouse will come later on on specific how-tos. Thanks for the info. 
-- /kashyap From rdo-info at redhat.com Mon Apr 21 15:16:50 2014 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 21 Apr 2014 15:16:50 +0000 Subject: [Rdo-list] [RDO] Just want to say Hi! Message-ID: <0000014584dc70ed-dde1c58b-e7ee-4c3a-a3de-8b4118b00f51-000000@email.amazonses.com> JRichie started a discussion. Just want to say Hi! --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/974/just-want-to-say-hi Have a great day! From kchamart at redhat.com Tue Apr 22 06:26:49 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 22 Apr 2014 11:56:49 +0530 Subject: [Rdo-list] Notes for setting up HA of OpenStack services with RDO In-Reply-To: <20140421134946.GB14219@tesla.redhat.com> References: <20140421113015.GB29524@tesla.redhat.com> <20140421133747.GA32137@redhat.com> <20140421134946.GB14219@tesla.redhat.com> Message-ID: <20140422062649.GE14219@tesla.redhat.com> On Mon, Apr 21, 2014 at 07:19:46PM +0530, Kashyap Chamarthy wrote: > On Mon, Apr 21, 2014 at 08:37:47AM -0500, Ryan O'Hara wrote: > > On Mon, Apr 21, 2014 at 05:00:15PM +0530, Kashyap Chamarthy wrote: [. . .] > > Are you intentionally leaving out HAProxy? > > No, I didn't. It's just an in-progress work, so it'll be added too. Here we go -- http://openstack.redhat.com/Setting-up-HAProxy-Load-Balancer Please feel free to correct/comment if you see something off the mark. > I'll try to make a high-level document which will include references to > all the wiki pages. http://openstack.redhat.com/Setting-up-High-Availability -- /kashyap From kchamart at redhat.com Tue Apr 22 07:26:22 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 22 Apr 2014 12:56:22 +0530 Subject: [Rdo-list] RDO bug statistics Message-ID: <20140422072622.GF14219@tesla.redhat.com> Heya, As part of RDO bug scrub, here's the current status of affairs: -------------------------------------------------- - NEW, ASSIGNED, or ON_DEV, total : 180 bugs - Since 20-MAR-2014 : 45 bugs - MODIFIED, POST, or ON_QA, total : 104 bugs - Since 20-MAR-2014 : 12 bugs - VERIFIED : 5 bugs -------------------------------------------------- Here's[1] the current list of bugs, plain text formatted. Last report to the list[2]. [1] http://kashyapc.fedorapeople.org/virt/openstack/bugzilla/rdo-bug-status/all-rdo-bugs-22-04-2014.txt [2] https://www.redhat.com/archives/rdo-list/2014-March/msg00082.html -- /kashyap From brad at redhat.com Tue Apr 22 13:20:11 2014 From: brad at redhat.com (Brad P. Crochet) Date: Tue, 22 Apr 2014 09:20:11 -0400 Subject: [Rdo-list] Puppet 3.5.1 breaks RDO Foreman Installs Message-ID: <20140422132011.GC16391@redhat.com> Puppet Labs released into their repo (http://yum.puppetlabs.com) a 3.5.1 version sometime last week. This version renders the Foreman install inoperable. Thanks to a catch by Crag, it was discovered. I have tested both 3.2.4 and 3.4.3 (using yum-plugin-versionlock), and it works with those versions. We currently have in openstack-foreman-installer: Requires: puppet >= 2.7 It seems we have a number of options to fix this: 1) Make the current Astapor codebase compatible with 3.5.1, hopefully without breaking current compatibility. 2) Require a version <= 3.4.3 3) Remove the puppetlabs repos from rdo-release, and rely on the puppet from EPEL/Fedora. I would say these options are not necessarily mutually exclusive. This affects both Havana and Icehouse. Brad -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 473 bytes Desc: not available URL: From mmosesohn at mirantis.com Tue Apr 22 13:26:52 2014 From: mmosesohn at mirantis.com (Matthew Mosesohn) Date: Tue, 22 Apr 2014 17:26:52 +0400 Subject: [Rdo-list] Puppet 3.5.1 breaks RDO Foreman Installs In-Reply-To: <20140422132011.GC16391@redhat.com> References: <20140422132011.GC16391@redhat.com> Message-ID: Hi Brad, You're best off limiting to puppetversion < 3.5 just to be safe. There are a large number of bugs fixed in scoping (that were features before) in Puppet 3.5. You probably need functional deployment now, so it's best to just cut off the flow of 3.5 bugs until you've had time to refactor for the new puppet release. Best Regards, Matthew Mosesohn On Tue, Apr 22, 2014 at 5:20 PM, Brad P. Crochet wrote: > Pupppet Labs released into their repo (http://yum.puppetlabs.com) a > 3.5.1 version sometime last week. > This version renders the Foreman install inoperable. Thanks to a catch > by Crag, it was discovered. > I have tested both 3.2.4 and 3.4.3 (using yum-plugin-versionlock), and > it works with those versions. > > We currently have in openstack-foreman-installer: > > Requires: puppet >= 2.7 > > It seems we have a number of options to fix this: > > 1) Make the current Astapor codebase compatible with 3.5.1, hopefully > without breaking current compatibility. > 2) Require a version <= 3.4.3 > 3) Remove the puppetlabs repos from rdo-release, and rely on the > puppet from EPEL/Fedora. > > I would say these options are not necessarily mutually exclusive. > > This affects both Havana and Icehouse. > > Brad > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rohara at redhat.com Tue Apr 22 13:34:58 2014 From: rohara at redhat.com (Ryan O'Hara) Date: Tue, 22 Apr 2014 08:34:58 -0500 Subject: [Rdo-list] Notes for setting up HA of OpenStack services with RDO In-Reply-To: <20140422062649.GE14219@tesla.redhat.com> References: <20140421113015.GB29524@tesla.redhat.com> <20140421133747.GA32137@redhat.com> <20140421134946.GB14219@tesla.redhat.com> <20140422062649.GE14219@tesla.redhat.com> Message-ID: <20140422133458.GC4960@redhat.com> On Tue, Apr 22, 2014 at 11:56:49AM +0530, Kashyap Chamarthy wrote: > On Mon, Apr 21, 2014 at 07:19:46PM +0530, Kashyap Chamarthy wrote: > > On Mon, Apr 21, 2014 at 08:37:47AM -0500, Ryan O'Hara wrote: > > > On Mon, Apr 21, 2014 at 05:00:15PM +0530, Kashyap Chamarthy wrote: > > [. . .] > > > > Are you intentionally leaving out HAProxy? > > > > No, I didn't. It's just an in-progress work, so it'll be added too. > > Here we go -- http://openstack.redhat.com/Setting-up-HAProxy-Load-Balancer > > Please feel free to correct/comment if you see something off the mark. Need to set client/server timeouts for the qpid proxy. This should be no less that 60, but depends on your qpid heartbeat. 120s is probably a good starting point. I think we need to collapse some of the wiki pages. There are now 2-3 docs describing how to use haproxy with OpenStack. > > I'll try to make a high-level document which will include references to > > all the wiki pages. > > http://openstack.redhat.com/Setting-up-High-Availability OK. There is also an old(er) doc about how to do this which is probably obsolete. 
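As a rough sketch (not a tested configuration) of where those qpid client/server timeouts would go in haproxy.cfg -- the vip-qpid/qpid-vms proxy names, the bind address and the backend node address below are only placeholders, and 120s is simply the starting value suggested above:

    frontend vip-qpid
        mode tcp
        bind 192.168.16.201:5672
        # client-side idle timeout; keep it at or above the qpid heartbeat
        timeout client 120s
        default_backend qpid-vms

    backend qpid-vms
        mode tcp
        # server-side idle timeout, same reasoning as the client timeout
        timeout server 120s
        server qpid-node1 192.168.16.211:5672 check

These per-proxy values override whatever shorter client/server timeouts are inherited from the defaults section.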
Ryan From rohara at redhat.com Tue Apr 22 15:29:59 2014 From: rohara at redhat.com (Ryan O'Hara) Date: Tue, 22 Apr 2014 10:29:59 -0500 Subject: [Rdo-list] Notes for setting up HA of OpenStack services with RDO In-Reply-To: <20140422133458.GC4960@redhat.com> References: <20140421113015.GB29524@tesla.redhat.com> <20140421133747.GA32137@redhat.com> <20140421134946.GB14219@tesla.redhat.com> <20140422062649.GE14219@tesla.redhat.com> <20140422133458.GC4960@redhat.com> Message-ID: <20140422152958.GH4960@redhat.com> On Tue, Apr 22, 2014 at 08:34:58AM -0500, Ryan O'Hara wrote: > On Tue, Apr 22, 2014 at 11:56:49AM +0530, Kashyap Chamarthy wrote: > > On Mon, Apr 21, 2014 at 07:19:46PM +0530, Kashyap Chamarthy wrote: > > > On Mon, Apr 21, 2014 at 08:37:47AM -0500, Ryan O'Hara wrote: > > > > On Mon, Apr 21, 2014 at 05:00:15PM +0530, Kashyap Chamarthy wrote: > > > > [. . .] > > > > > > Are you intentionally leaving out HAProxy? > > > > > > No, I didn't. It's just an in-progress work, so it'll be added too. > > > > Here we go -- http://openstack.redhat.com/Setting-up-HAProxy-Load-Balancer > > > > Please feel free to correct/comment if you see something off the mark. One additional comment. The horizon proxy should use http mode and insert a cookie for persistence. The load balancer host group in astapor currently does this. Ryan > Need to set client/server timeouts for the qpid proxy. This should be > no less that 60, but depends on your qpid heartbeat. 120s is probably > a good starting point. > > I think we need to collapse some of the wiki pages. There are now 2-3 > docs describing how to use haproxy with OpenStack. > > > > I'll try to make a high-level document which will include references to > > > all the wiki pages. > > > > http://openstack.redhat.com/Setting-up-High-Availability > > OK. There is also an old(er) doc about how to do this which is > probably obsolete. > > Ryan > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From kchamart at redhat.com Tue Apr 22 17:41:29 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 22 Apr 2014 23:11:29 +0530 Subject: [Rdo-list] Notes for setting up HA of OpenStack services with RDO In-Reply-To: <20140422133458.GC4960@redhat.com> References: <20140421113015.GB29524@tesla.redhat.com> <20140421133747.GA32137@redhat.com> <20140421134946.GB14219@tesla.redhat.com> <20140422062649.GE14219@tesla.redhat.com> <20140422133458.GC4960@redhat.com> Message-ID: <20140422174129.GA2813@tesla.redhat.com> On Tue, Apr 22, 2014 at 08:34:58AM -0500, Ryan O'Hara wrote: > On Tue, Apr 22, 2014 at 11:56:49AM +0530, Kashyap Chamarthy wrote: [. . .] > > Please feel free to correct/comment if you see something off the mark. > > Need to set client/server timeouts for the qpid proxy. This should be > no less that 60, but depends on your qpid heartbeat. 120s is probably > a good starting point. /me hasn't configured haproxy before, just wondering, I assume this is section in haproxy.conf you're referring to where I set the qpid timeout: [. . .] frontend vip-qpid bind 192.168.16.201:5672 default_backend qpid-vms [. . .] > > I think we need to collapse some of the wiki pages. There are now 2-3 > docs describing how to use haproxy with OpenStack. Can you please post them here if you have them handy? > > > I'll try to make a high-level document which will include > > > references to all the wiki pages. > > > > http://openstack.redhat.com/Setting-up-High-Availability > > OK. 
There is also an old(er) doc about how to do this which is > probably obsolete. Yeah, if you find them, it'd be useful to mark them as obsolete, just to avoid confusion. Thanks for the comments, Ryan. -- /kashyap From rohara at redhat.com Tue Apr 22 17:55:23 2014 From: rohara at redhat.com (Ryan O'Hara) Date: Tue, 22 Apr 2014 12:55:23 -0500 Subject: [Rdo-list] Notes for setting up HA of OpenStack services with RDO In-Reply-To: <20140422174129.GA2813@tesla.redhat.com> References: <20140421113015.GB29524@tesla.redhat.com> <20140421133747.GA32137@redhat.com> <20140421134946.GB14219@tesla.redhat.com> <20140422062649.GE14219@tesla.redhat.com> <20140422133458.GC4960@redhat.com> <20140422174129.GA2813@tesla.redhat.com> Message-ID: <20140422175523.GK4960@redhat.com> On Tue, Apr 22, 2014 at 11:11:29PM +0530, Kashyap Chamarthy wrote: > On Tue, Apr 22, 2014 at 08:34:58AM -0500, Ryan O'Hara wrote: > > On Tue, Apr 22, 2014 at 11:56:49AM +0530, Kashyap Chamarthy wrote: > > [. . .] > > > > Please feel free to correct/comment if you see something off the mark. > > > > Need to set client/server timeouts for the qpid proxy. This should be > > no less that 60, but depends on your qpid heartbeat. 120s is probably > > a good starting point. > > /me hasn't configured haproxy before, just wondering, I assume this is > section in haproxy.conf you're referring to where I set the qpid > timeout: > > [. . .] > frontend vip-qpid > bind 192.168.16.201:5672 > default_backend qpid-vms > [. . .] Yes. That is half of it, at least. The default client/server timeout is 10s, set in the "defaults" section. You can override that per proxy. You want the client timeout in the frontend and the server timeout in the backend. Actually, the mysql proxy in your wiki page does this, setting the timeouts to 90s. > > I think we need to collapse some of the wiki pages. There are now 2-3 > > docs describing how to use haproxy with OpenStack. > > Can you please post them here if you have them handy? This is the one I wrote a while back. It is desperately in need of an update. http://openstack.redhat.com/Load_Balance_OpenStack_API > > > > I'll try to make a high-level document which will include > > > > references to all the wiki pages. > > > > > > http://openstack.redhat.com/Setting-up-High-Availability > > > > OK. There is also an old(er) doc about how to do this which is > > probably obsolete. > > Yeah, if you find them, it'd be useful to mark them as obsolete, just to > avoid confusion. This is what I was thinking of: http://openstack.redhat.com/RDO_HighlyAvailable_and_LoadBalanced_Control_Services > Thanks for the comments, Ryan. Sure thing. Ryan From pbrady at redhat.com Tue Apr 22 21:08:50 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Tue, 22 Apr 2014 22:08:50 +0100 Subject: [Rdo-list] [rhos-dev] Puppet 3.5.1 breaks RDO Foreman Installs In-Reply-To: <20140422132011.GC16391@redhat.com> References: <20140422132011.GC16391@redhat.com> Message-ID: <5356DA62.8060107@redhat.com> On 04/22/2014 02:20 PM, Brad P. Crochet wrote: > Pupppet Labs released into their repo (http://yum.puppetlabs.com) a > 3.5.1 version sometime last week. > This version renders the Foreman install inoperable. Thanks to a catch > by Crag, it was discovered. > I have tested both 3.2.4 and 3.4.3 (using yum-plugin-versionlock), and > it works with those versions. 
> > We currently have in openstack-foreman-installer: > > Requires: puppet >= 2.7 > > It seems we have a number of options to fix this: > > 1) Make the current Astapor codebase compatible with 3.5.1, hopefully > without breaking current compatibility. > 2) Require a version <= 3.4.3 > 3) Remove the puppetlabs repos from rdo-release, and rely on the > puppet from EPEL/Fedora. > > I would say these options are not necessarily mutually exclusive. > > This affects both Havana and Icehouse. We could take different approaches in Havana and Icehouse. Havana could add the cap on puppet < 3.5. This would be best done in the openstack-puppet-modules package to cater for both foreman and packstack. Icehouse could update to using foreman 1.5 which is compat with the new puppet. foreman 1.5 is available in the standard locations and due for official release soon. Note that would involve pulling in ruby193 software collection on el6. thanks, P?draig. From ALLAN.L.ST.GEORGE at leidos.com Wed Apr 23 13:04:59 2014 From: ALLAN.L.ST.GEORGE at leidos.com (St. George, Allan L.) Date: Wed, 23 Apr 2014 13:04:59 +0000 Subject: [Rdo-list] Updating puppet? Message-ID: I'm noticing the chatter regarding the new version of puppet and I checked my servers... My multi-node stack (deployed via foreman) is currently running puppet version 3.2.4; How can I get my stack upgraded to the newer puppet version 3.4.3? Again all nodes are being managed via foreman, so I assume that I would need to upgrade my foreman server to deploy the changes out. But when I attempt a "yum update", it is noted that there is nothing available to update. The server was registered and pooled as directed by the installation instructions. [root at foreman ~]# yum repolist Loaded plugins: priorities, product-id, subscription-manager This system is receiving updates from Red Hat Subscription Management. rhel-6-server-cf-tools-1-rpms | 2.8 kB 00:00 rhel-6-server-openstack-4.0-rpms | 2.8 kB 00:00 rhel-6-server-rhev-agent-rpms | 3.1 kB 00:00 rhel-6-server-rpms | 3.7 kB 00:00 rhel-ha-for-rhel-6-server-rpms | 3.7 kB 00:00 rhel-lb-for-rhel-6-server-rpms | 3.7 kB 00:00 repo id repo name status rhel-6-server-cf-tools-1-rpms Red Hat CloudForms Tools for RHEL 6 (RPMs) 31 rhel-6-server-openstack-4.0-rpms Red Hat OpenStack 4.0 (RPMs) 653 rhel-6-server-rhev-agent-rpms Red Hat Enterprise Virtualization Agents for RHEL 6 Server (RPMs) 44 rhel-6-server-rpms Red Hat Enterprise Linux 6 Server (RPMs) 12,461 rhel-ha-for-rhel-6-server-rpms Red Hat Enterprise Linux High Availability (for RHEL 6 Server) (RPMs) 365 rhel-lb-for-rhel-6-server-rpms Red Hat Enterprise Linux Load Balancer (for RHEL 6 Server) (RPMs) 15 [root at foreman ~]# yum update Loaded plugins: priorities, product-id, subscription-manager This system is receiving updates from Red Hat Subscription Management. rhel-6-server-cf-tools-1-rpms | 2.8 kB 00:00 rhel-6-server-openstack-4.0-rpms | 2.8 kB 00:00 rhel-6-server-rhev-agent-rpms | 3.1 kB 00:00 rhel-6-server-rpms | 3.7 kB 00:00 rhel-ha-for-rhel-6-server-rpms | 3.7 kB 00:00 rhel-lb-for-rhel-6-server-rpms | 3.7 kB 00:00 Setting up Update Process No Packages marked for Update [root at foreman ~]# puppet --version 3.2.4 Any assistance would be appreciated. Thanx! V/R, Allan St. George -------------- next part -------------- An HTML attachment was scrubbed... URL: From pbrady at redhat.com Wed Apr 23 13:47:14 2014 From: pbrady at redhat.com (=?windows-1252?Q?P=E1draig_Brady?=) Date: Wed, 23 Apr 2014 14:47:14 +0100 Subject: [Rdo-list] Updating puppet? 
In-Reply-To: References: Message-ID: <5357C462.1030809@redhat.com> On 04/23/2014 02:04 PM, St. George, Allan L. wrote: > I?m noticing the chatter regarding the new version of puppet and I checked my servers? The chatter was about RDO not RHOSP. > My multi-node stack (deployed via foreman) is currently running puppet version 3.2.4; How can I get my stack upgraded to the newer puppet version 3.4.3? Currently RHOS4 includes/supports puppet 3.2.4 If you have a specific need you could enable the puppetlabs repos yourself and update, though that would then no longer be supported. The best approach would be to file a support request detailing the need for the update, and it can be considered. thanks, P?draig. From rdo-info at redhat.com Wed Apr 23 13:52:41 2014 From: rdo-info at redhat.com (RDO Forum) Date: Wed, 23 Apr 2014 13:52:41 +0000 Subject: [Rdo-list] [RDO] What's new in the OpenStack Heat Icehouse release - Hangout Message-ID: <000001458edc1d8b-b61c1012-7528-4b96-9502-4027662677f6-000000@email.amazonses.com> rbowen started a discussion. What's new in the OpenStack Heat Icehouse release - Hangout --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/975/whats-new-in-the-openstack-heat-icehouse-release-hangout Have a great day! From pbrady at redhat.com Wed Apr 23 16:34:38 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Wed, 23 Apr 2014 17:34:38 +0100 Subject: [Rdo-list] [package announce] openstack-keystone security update Message-ID: <5357EB9E.9020405@redhat.com> Havana RDO openstack-keystone packages have been updated to openstack-keystone-2013.2.3-3 - Fix denial of service via V3 API authentication chaining https://access.redhat.com/security/cve/CVE-2014-2828 From rbowen at redhat.com Wed Apr 23 17:15:37 2014 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 23 Apr 2014 13:15:37 -0400 Subject: [Rdo-list] OpenStack Israel call for papers closes one week from today Message-ID: <5357F539.8050508@redhat.com> The OpenStack Israel (June 2, 2014) call for papers closes on Wednesday, April 30th. Get your talks in soon! http://www.openstack-israel.org/#!copy-of-call-for-papers/cu3y OpenStack Israel has grown from a small gathering to over 300 last year, and it's definitely the place to be - I wish I could be there. If you're nearby you should definitely make an effort to attend. --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From weiler at soe.ucsc.edu Wed Apr 23 23:40:15 2014 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Wed, 23 Apr 2014 16:40:15 -0700 Subject: [Rdo-list] Glance problems... Message-ID: <53584F5F.6000004@soe.ucsc.edu> Hi Y'all, I was able to set up RDO Openstack just fine with Icehouse RC1, and then I wiped it out and am trying again with the official stable release (2014.1) and am having weird problems. It seems there were many changes between this and RC1 unless I'm mistaken. The main issue I'm having now is that I can't seem to create the glance database properly, and I was able to do this before no problem. I do: $ mysql -u root -p mysql> CREATE DATABASE glance; mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ IDENTIFIED BY 'GLANCE_DBPASS'; mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ IDENTIFIED BY 'GLANCE_DBPASS'; (Obviously 'GLANCE_DBPASS' is replaced with the real password). Then: su -s /bin/sh -c "glance-manage db_sync" glance And it creates the 'glance' database and only one table, "migrate_version". 
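(A quick way to double-check that, just listing the tables directly with the same glance credentials as above:

    mysql -u glance -p glance -e 'SHOW TABLES;'

and the only row that comes back is migrate_version.)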
I can't get it to create the rest of the tables it needs. I've tried also: openstack-db --init --service glance --password GLANCE_DBPASS And that returned success but in reality nothing happened... Any idea what's going on? In the api.conf and registry.conf the correct database credentials are listed, and I can connect to the database as the mysql glance user on the command line just fine using those credentials. When I run any glance commands I get this in the registry log: ProgrammingError: (ProgrammingError) (1146, "Table 'glance.images' doesn't exist") 'SELECT anon_1.anon_2_images_created_at AS anon_1_anon_2_images_created_at, anon_1.anon_2_images_updated_at AS anon_1_anon_2_images_updated_at, anon_1.anon_2_images_deleted_at AS anon_1_anon_2_images_deleted_at, anon_1.anon_2_images_deleted AS anon_1_anon_2_images_deleted, anon_1.anon_2_images_id AS anon_1_anon_2_images_id, anon_1.anon_2_images_name AS anon_1_anon_2_images_name, anon_1.anon_2_images_disk_format AS anon_1_anon_2_images_disk_format, anon_1.anon_2_images_container_format AS anon_1_anon_2_images_container_format, anon_1.anon_2_images_size AS anon_1_anon_2_images_size, anon_1.anon_2_images_virtual_size AS anon_1_anon_2_images_virtual_size, anon_1.anon_2_images_status AS anon_1_anon_2_images_status, anon_1.anon_2_images_is_public AS anon_1_anon_2_images_is_public, anon_1.anon_2_images_checksum AS anon_1_anon_2_images_checksum, anon_1.anon_2_images_min_disk AS anon_1_anon_2_images_min_disk, anon_1.anon_2_images_min_ram AS anon_1_anon_2_images_min_ram, anon_1.anon_2_images_owner AS anon_1_anon_2_images_owner, anon_1.anon_2_images_protected AS anon_1_anon_2_images_protected, image_properties_1.created_at AS image_properties_1_created_at, image_properties_1.updated_at AS image_properties_1_updated_at, image_properties_1.deleted_at AS image_properties_1_deleted_at, image_properties_1.deleted AS image_properties_1_deleted, image_properties_1.id AS image_properties_1_id, image_properties_1.image_id AS image_properties_1_image_id, image_properties_1.name AS image_properties_1_name, image_properties_1.value AS image_properties_1_value, image_locations_1.created_at AS image_locations_1_created_at, image_locations_1.updated_at AS image_locations_1_updated_at, image_locations_1.deleted_at AS image_locations_1_deleted_at, image_locations_1.deleted AS image_locations_1_deleted, image_locations_1.id AS image_locations_1_id, image_locations_1.image_id AS image_locations_1_image_id, image_locations_1.value AS image_locations_1_value, image_locations_1.meta_data AS image_locations_1_meta_data, image_locations_1.status AS image_locations_1_status \nFROM (SELECT anon_2.images_created_at AS anon_2_images_created_at, anon_2.images_updated_at AS anon_2_images_updated_at, anon_2.images_deleted_at AS anon_2_images_deleted_at, anon_2.images_deleted AS anon_2_images_deleted, anon_2.images_id AS anon_2_images_id, anon_2.images_name AS anon_2_images_name, anon_2.images_disk_format AS anon_2_images_disk_format, anon_2.images_container_format AS anon_2_images_container_format, anon_2.images_size AS anon_2_images_size, anon_2.images_virtual_size AS anon_2_images_virtual_size, anon_2.images_status AS anon_2_images_status, anon_2.images_is_public AS anon_2_images_is_public, anon_2.images_checksum AS anon_2_images_checksum, anon_2.images_min_disk AS anon_2_images_min_disk, anon_2.images_min_ram AS anon_2_images_min_ram, anon_2.images_owner AS anon_2_images_owner, anon_2.images_protected AS anon_2_images_protected \nFROM (SELECT images.created_at AS 
images_created_at, images.updated_at AS images_updated_at, images.deleted_at AS images_deleted_at, images.deleted AS images_deleted, images.id AS images_id, images.name AS images_name, images.disk_format AS images_disk_format, images.container_format AS images_container_format, images.size AS images_size, images.virtual_size AS images_virtual_size, images.status AS images_status, images.is_public AS images_is_public, images.checksum AS images_checksum, images.min_disk AS images_min_disk, images.min_ram AS images_min_ram, images.owner AS images_owner, images.protected AS images_protected \nFROM images \nWHERE images.deleted = %s AND images.status IN (%s, %s, %s, %s, %s) AND images.is_public = %s UNION SELECT images.created_at AS images_created_at, images.updated_at AS images_updated_at, images.deleted_at AS images_deleted_at, images.deleted AS images_deleted, images.id AS images_id, images.name AS images_name, images.disk_format AS images_disk_format, images.container_format AS images_container_format, images.size AS images_size, images.virtual_size AS images_virtual_size, images.status AS images_status, images.is_public AS images_is_public, images.checksum AS images_checksum, images.min_disk AS images_min_disk, images.min_ram AS images_min_ram, images.owner AS images_owner, images.protected AS images_protected \nFROM images \nWHERE images.owner = %s AND images.deleted = %s AND images.status IN (%s, %s, %s, %s, %s) UNION SELECT images.created_at AS images_created_at, images.updated_at AS images_updated_at, images.deleted_at AS images_deleted_at, images.deleted AS images_deleted, images.id AS images_id, images.name AS images_name, images.disk_format AS images_disk_format, images.container_format AS images_container_format, images.size AS images_size, images.virtual_size AS images_virtual_size, images.status AS images_status, images.is_public AS images_is_public, images.checksum AS images_checksum, images.min_disk AS images_min_disk, images.min_ram AS images_min_ram, images.owner AS images_owner, images.protected AS images_protected \nFROM images INNER JOIN image_members ON images.id = image_members.image_id \nWHERE images.deleted = %s AND images.status IN (%s, %s, %s, %s, %s) AND image_members.deleted = %s AND image_members.member = %s) AS anon_2 ORDER BY anon_2.images_name ASC, anon_2.images_created_at ASC, anon_2.images_id ASC \n LIMIT %s) AS anon_1 LEFT OUTER JOIN image_properties AS image_properties_1 ON anon_1.anon_2_images_id = image_properties_1.image_id LEFT OUTER JOIN image_locations AS image_locations_1 ON anon_1.anon_2_images_id = image_locations_1.image_id ORDER BY anon_1.anon_2_images_name ASC, anon_1.anon_2_images_created_at ASC, anon_1.anon_2_images_id ASC' (0, 'active', 'saving', 'queued', 'pending_delete', 'deleted', 1, '7c1980078e044cb08250f628cbe73d29', 0, 'active', 'saving', 'queued', 'pending_delete', 'deleted', 0, 'active', 'saving', 'queued', 'pending_delete', 'deleted', 0, '7c1980078e044cb08250f628cbe73d29', 20) Sure, enough, all the rest of the tables are missing from mysql so it complains. Also, I tried this: keystone user-create --name=glance --pass=GLANCE_PASS --tenant=service --email=glance at myco.com exceptions must be old-style classes or derived from BaseException, not NoneType (HTTP 400) Creating the glance user was easy last time, now it doesn't work... Any insight would be greatly appreciated!! 
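For reference, the database settings in glance-api.conf and glance-registry.conf are of the usual form (the hostname and password below are placeholders, not my real values):

    [database]
    connection = mysql://glance:GLANCE_DBPASS@controller/glance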
cheers, erich From weiler at soe.ucsc.edu Thu Apr 24 04:03:30 2014 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Wed, 23 Apr 2014 21:03:30 -0700 Subject: [Rdo-list] Glance problems... In-Reply-To: <53584F5F.6000004@soe.ucsc.edu> References: <53584F5F.6000004@soe.ucsc.edu> Message-ID: <53588D12.4090500@soe.ucsc.edu> Actually, maybe my problem is related to the following: https://bugzilla.redhat.com/show_bug.cgi?id=1089611 If so - is there any way to manually create the tables in the meantime until this bug is fixed? Thanks, erich On 4/23/14, 4:40 PM, Erich Weiler wrote: > Hi Y'all, > > I was able to set up RDO Openstack just fine with Icehouse RC1, and then > I wiped it out and am trying again with the official stable release > (2014.1) and am having weird problems. It seems there were many changes > between this and RC1 unless I'm mistaken. > > The main issue I'm having now is that I can't seem to create the glance > database properly, and I was able to do this before no problem. I do: > > $ mysql -u root -p > mysql> CREATE DATABASE glance; > mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ > IDENTIFIED BY 'GLANCE_DBPASS'; > mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ > IDENTIFIED BY 'GLANCE_DBPASS'; > > (Obviously 'GLANCE_DBPASS' is replaced with the real password). > > Then: > > su -s /bin/sh -c "glance-manage db_sync" glance > > And it creates the 'glance' database and only one table, > "migrate_version". I can't get it to create the rest of the tables it > needs. I've tried also: > > openstack-db --init --service glance --password GLANCE_DBPASS > > And that returned success but in reality nothing happened... Any idea > what's going on? > > In the api.conf and registry.conf the correct database credentials are > listed, and I can connect to the database as the mysql glance user on > the command line just fine using those credentials. 
> > When I run any glance commands I get this in the registry log: > > ProgrammingError: (ProgrammingError) (1146, "Table 'glance.images' > doesn't exist") 'SELECT anon_1.anon_2_images_created_at AS > anon_1_anon_2_images_created_at, anon_1.anon_2_images_updated_at AS > anon_1_anon_2_images_updated_at, anon_1.anon_2_images_deleted_at AS > anon_1_anon_2_images_deleted_at, anon_1.anon_2_images_deleted AS > anon_1_anon_2_images_deleted, anon_1.anon_2_images_id AS > anon_1_anon_2_images_id, anon_1.anon_2_images_name AS > anon_1_anon_2_images_name, anon_1.anon_2_images_disk_format AS > anon_1_anon_2_images_disk_format, anon_1.anon_2_images_container_format > AS anon_1_anon_2_images_container_format, anon_1.anon_2_images_size AS > anon_1_anon_2_images_size, anon_1.anon_2_images_virtual_size AS > anon_1_anon_2_images_virtual_size, anon_1.anon_2_images_status AS > anon_1_anon_2_images_status, anon_1.anon_2_images_is_public AS > anon_1_anon_2_images_is_public, anon_1.anon_2_images_checksum AS > anon_1_anon_2_images_checksum, anon_1.anon_2_images_min_disk AS > anon_1_anon_2_images_min_disk, anon_1.anon_2_images_min_ram AS > anon_1_anon_2_images_min_ram, anon_1.anon_2_images_owner AS > anon_1_anon_2_images_owner, anon_1.anon_2_images_protected AS > anon_1_anon_2_images_protected, image_properties_1.created_at AS > image_properties_1_created_at, image_properties_1.updated_at AS > image_properties_1_updated_at, image_properties_1.deleted_at AS > image_properties_1_deleted_at, image_properties_1.deleted AS > image_properties_1_deleted, image_properties_1.id AS > image_properties_1_id, image_properties_1.image_id AS > image_properties_1_image_id, image_properties_1.name AS > image_properties_1_name, image_properties_1.value AS > image_properties_1_value, image_locations_1.created_at AS > image_locations_1_created_at, image_locations_1.updated_at AS > image_locations_1_updated_at, image_locations_1.deleted_at AS > image_locations_1_deleted_at, image_locations_1.deleted AS > image_locations_1_deleted, image_locations_1.id AS image_locations_1_id, > image_locations_1.image_id AS image_locations_1_image_id, > image_locations_1.value AS image_locations_1_value, > image_locations_1.meta_data AS image_locations_1_meta_data, > image_locations_1.status AS image_locations_1_status \nFROM (SELECT > anon_2.images_created_at AS anon_2_images_created_at, > anon_2.images_updated_at AS anon_2_images_updated_at, > anon_2.images_deleted_at AS anon_2_images_deleted_at, > anon_2.images_deleted AS anon_2_images_deleted, anon_2.images_id AS > anon_2_images_id, anon_2.images_name AS anon_2_images_name, > anon_2.images_disk_format AS anon_2_images_disk_format, > anon_2.images_container_format AS anon_2_images_container_format, > anon_2.images_size AS anon_2_images_size, anon_2.images_virtual_size AS > anon_2_images_virtual_size, anon_2.images_status AS > anon_2_images_status, anon_2.images_is_public AS > anon_2_images_is_public, anon_2.images_checksum AS > anon_2_images_checksum, anon_2.images_min_disk AS > anon_2_images_min_disk, anon_2.images_min_ram AS anon_2_images_min_ram, > anon_2.images_owner AS anon_2_images_owner, anon_2.images_protected AS > anon_2_images_protected \nFROM (SELECT images.created_at AS > images_created_at, images.updated_at AS images_updated_at, > images.deleted_at AS images_deleted_at, images.deleted AS > images_deleted, images.id AS images_id, images.name AS images_name, > images.disk_format AS images_disk_format, images.container_format AS > images_container_format, images.size AS images_size, 
images.virtual_size > AS images_virtual_size, images.status AS images_status, images.is_public > AS images_is_public, images.checksum AS images_checksum, images.min_disk > AS images_min_disk, images.min_ram AS images_min_ram, images.owner AS > images_owner, images.protected AS images_protected \nFROM images \nWHERE > images.deleted = %s AND images.status IN (%s, %s, %s, %s, %s) AND > images.is_public = %s UNION SELECT images.created_at AS > images_created_at, images.updated_at AS images_updated_at, > images.deleted_at AS images_deleted_at, images.deleted AS > images_deleted, images.id AS images_id, images.name AS images_name, > images.disk_format AS images_disk_format, images.container_format AS > images_container_format, images.size AS images_size, images.virtual_size > AS images_virtual_size, images.status AS images_status, images.is_public > AS images_is_public, images.checksum AS images_checksum, images.min_disk > AS images_min_disk, images.min_ram AS images_min_ram, images.owner AS > images_owner, images.protected AS images_protected \nFROM images \nWHERE > images.owner = %s AND images.deleted = %s AND images.status IN (%s, %s, > %s, %s, %s) UNION SELECT images.created_at AS images_created_at, > images.updated_at AS images_updated_at, images.deleted_at AS > images_deleted_at, images.deleted AS images_deleted, images.id AS > images_id, images.name AS images_name, images.disk_format AS > images_disk_format, images.container_format AS images_container_format, > images.size AS images_size, images.virtual_size AS images_virtual_size, > images.status AS images_status, images.is_public AS images_is_public, > images.checksum AS images_checksum, images.min_disk AS images_min_disk, > images.min_ram AS images_min_ram, images.owner AS images_owner, > images.protected AS images_protected \nFROM images INNER JOIN > image_members ON images.id = image_members.image_id \nWHERE > images.deleted = %s AND images.status IN (%s, %s, %s, %s, %s) AND > image_members.deleted = %s AND image_members.member = %s) AS anon_2 > ORDER BY anon_2.images_name ASC, anon_2.images_created_at ASC, > anon_2.images_id ASC \n LIMIT %s) AS anon_1 LEFT OUTER JOIN > image_properties AS image_properties_1 ON anon_1.anon_2_images_id = > image_properties_1.image_id LEFT OUTER JOIN image_locations AS > image_locations_1 ON anon_1.anon_2_images_id = > image_locations_1.image_id ORDER BY anon_1.anon_2_images_name ASC, > anon_1.anon_2_images_created_at ASC, anon_1.anon_2_images_id ASC' (0, > 'active', 'saving', 'queued', 'pending_delete', 'deleted', 1, > '7c1980078e044cb08250f628cbe73d29', 0, 'active', 'saving', 'queued', > 'pending_delete', 'deleted', 0, 'active', 'saving', 'queued', > 'pending_delete', 'deleted', 0, '7c1980078e044cb08250f628cbe73d29', 20) > > Sure, enough, all the rest of the tables are missing from mysql so it > complains. > > > Also, I tried this: > > keystone user-create --name=glance --pass=GLANCE_PASS --tenant=service > --email=glance at myco.com > exceptions must be old-style classes or derived from BaseException, not > NoneType (HTTP 400) > > Creating the glance user was easy last time, now it doesn't work... Any > insight would be greatly appreciated!! > > cheers, > erich > > From ihrachys at redhat.com Thu Apr 24 07:58:57 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 24 Apr 2014 09:58:57 +0200 Subject: [Rdo-list] Glance problems... 
In-Reply-To: <53588D12.4090500@soe.ucsc.edu> References: <53584F5F.6000004@soe.ucsc.edu> <53588D12.4090500@soe.ucsc.edu> Message-ID: <5358C441.3010404@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 24/04/14 06:03, Erich Weiler wrote: > Actually, maybe my problem is related to the following: > > https://bugzilla.redhat.com/show_bug.cgi?id=1089611 > > If so - is there any way to manually create the tables in the > meantime until this bug is fixed? > I'm not a glance or db guy, so I can't help you technically. Hence general comments. The bug you refer to is closed with NOTABUG resolution by its reporter. Also the product of the referred bug is 'Re Hat Openstack' (commercially supported product from Red Hat also known as RHOSP, Red Hat Openstack Platform), not 'RDO' (community driven project), and we don't at the moment have RHOSP based on Icehouse released. So I don't think the bug attracts any attention from developers at the moment. I would recommend raising a bug for your issue against 'RDO' product. > Thanks, erich > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBCgAGBQJTWMRBAAoJEC5aWaUY1u57sVcH/R3sGEZ7hBJCia73iFLOEU/T Cchzjlvr/lYpvV1UqRtJAvctE6qsgZG3Hbx9prmTO5bBEs7cItRlwmdfpbSU2nas 2qhv5whR/+auJMfN/UasXVqlhbxjE6FxxFx2VvUPyd1O2CVAF4XxPdDiIW8etBDJ 3kp7f4lRCJmC/Z1WlkzSThapsgbeq3matF541y4PGkLKRpKV10KXFR5EA3B4i9EY ctQ7ZJy/Pui52LzmqzZB8LcUIHqOfynLde6uaE541rWVe0yVwWgaP4C9cvTg7jOc sPraidYXQ2vvtW9jUb37IIXrK1wPuGDCnZUSoRVLPmfHC8gTcgGTU4z0T3ahdE0= =7fox -----END PGP SIGNATURE----- From pbrady at redhat.com Thu Apr 24 09:20:31 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Thu, 24 Apr 2014 10:20:31 +0100 Subject: [Rdo-list] Glance problems... In-Reply-To: <53584F5F.6000004@soe.ucsc.edu> References: <53584F5F.6000004@soe.ucsc.edu> Message-ID: <5358D75F.6040904@redhat.com> On 04/24/2014 12:40 AM, Erich Weiler wrote: > Hi Y'all, > > I was able to set up RDO Openstack just fine with Icehouse RC1, and then I wiped it out and am trying again with the official stable release (2014.1) and am having weird problems. It seems there were many changes between this and RC1 unless I'm mistaken. > > The main issue I'm having now is that I can't seem to create the glance database properly, and I was able to do this before no problem. I do: > > $ mysql -u root -p > mysql> CREATE DATABASE glance; I think the issue is due to a change to enforce utf-8 encoding. Can you try again with the above line changed to: CREATE DATABASE glance DEFAULT CHARACTER SET utf8; > And it creates the 'glance' database and only one table, "migrate_version". I can't get it to create the rest of the tables it needs. I've tried also: > > openstack-db --init --service glance --password GLANCE_DBPASS openstack-db has not been updated yet to enforce the new encoding. That will be happening today at some stage. thanks, P?draig. From mmagr at redhat.com Thu Apr 24 09:23:53 2014 From: mmagr at redhat.com (Martin Magr) Date: Thu, 24 Apr 2014 11:23:53 +0200 Subject: [Rdo-list] [rhos-dev] Puppet 3.5.1 breaks RDO Foreman Installs In-Reply-To: <5356DA62.8060107@redhat.com> References: <20140422132011.GC16391@redhat.com> <5356DA62.8060107@redhat.com> Message-ID: <5358D829.9060008@redhat.com> On 04/22/2014 11:08 PM, P?draig Brady wrote: > On 04/22/2014 02:20 PM, Brad P. Crochet wrote: >> Pupppet Labs released into their repo (http://yum.puppetlabs.com) a >> 3.5.1 version sometime last week. 
>> This version renders the Foreman install inoperable. Thanks to a catch >> by Crag, it was discovered. >> I have tested both 3.2.4 and 3.4.3 (using yum-plugin-versionlock), and >> it works with those versions. >> >> We currently have in openstack-foreman-installer: >> >> Requires: puppet >= 2.7 >> >> It seems we have a number of options to fix this: >> >> 1) Make the current Astapor codebase compatible with 3.5.1, hopefully >> without breaking current compatibility. >> 2) Require a version <= 3.4.3 >> 3) Remove the puppetlabs repos from rdo-release, and rely on the >> puppet from EPEL/Fedora. >> >> I would say these options are not necessarily mutually exclusive. >> >> This affects both Havana and Icehouse. > We could take different approaches in Havana and Icehouse. > > Havana could add the cap on puppet < 3.5. > This would be best done in the openstack-puppet-modules package > to cater for both foreman and packstack. In case of packstack that will help only for all-in-one installations unfortunately. We will have to also change the puppet install command anyway. > > Icehouse could update to using foreman 1.5 which is compat with the new puppet. > foreman 1.5 is available in the standard locations and due for official release soon. > Note that would involve pulling in ruby193 software collection on el6. > > thanks, > P?draig. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list Regards, Martin From rbowen at redhat.com Thu Apr 24 12:09:41 2014 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 24 Apr 2014 08:09:41 -0400 Subject: [Rdo-list] RDO Icehouse ETA? Message-ID: <5358FF05.2010305@redhat.com> To move the conversation here ... Given the work that's been done this week so far, what's our ETA on an Icehouse RDO? --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From pbrady at redhat.com Thu Apr 24 13:51:24 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Thu, 24 Apr 2014 14:51:24 +0100 Subject: [Rdo-list] RDO Icehouse ETA? In-Reply-To: <5358FF05.2010305@redhat.com> References: <5358FF05.2010305@redhat.com> Message-ID: <535916DC.4020301@redhat.com> On 04/24/2014 01:09 PM, Rich Bowen wrote: > To move the conversation here ... > > Given the work that's been done this week so far, what's our ETA on an Icehouse RDO? Were currently collating packages and fixing final provisioning issues. Best case is EOB tomorrow, or if not Mon/Tue next week. thanks, P?draig. From weiler at soe.ucsc.edu Thu Apr 24 14:30:55 2014 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Thu, 24 Apr 2014 07:30:55 -0700 Subject: [Rdo-list] Glance problems... In-Reply-To: <5358D75F.6040904@redhat.com> References: <53584F5F.6000004@soe.ucsc.edu> <5358D75F.6040904@redhat.com> Message-ID: <5359201F.7010503@soe.ucsc.edu> > I think the issue is due to a change to enforce utf-8 encoding. > Can you try again with the above line changed to: > > CREATE DATABASE glance DEFAULT CHARACTER SET utf8; > >> And it creates the 'glance' database and only one table, "migrate_version". I can't get it to create the rest of the tables it needs. I've tried also: Awesome! Glad it's something easy. >> openstack-db --init --service glance --password GLANCE_DBPASS > > openstack-db has not been updated yet to enforce the new encoding. > That will be happening today at some stage. Can you send a note to the list when that's done? 
Also, is the code being updated here when it's released? http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-6/ Such that the fix will be pushed out with an rpm from the repository today? If so, I'll just resync my local repo when the fix is out. From weiler at soe.ucsc.edu Thu Apr 24 15:06:17 2014 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Thu, 24 Apr 2014 08:06:17 -0700 Subject: [Rdo-list] Glance problems... In-Reply-To: <5359201F.7010503@soe.ucsc.edu> References: <53584F5F.6000004@soe.ucsc.edu> <5358D75F.6040904@redhat.com> <5359201F.7010503@soe.ucsc.edu> Message-ID: <53592869.2020605@soe.ucsc.edu> Also - do you think this bugfix will make it into the yum repo today? https://bugzilla.redhat.com/show_bug.cgi?id=1061329 On 4/24/14, 7:30 AM, Erich Weiler wrote: >> I think the issue is due to a change to enforce utf-8 encoding. >> Can you try again with the above line changed to: >> >> CREATE DATABASE glance DEFAULT CHARACTER SET utf8; >> >>> And it creates the 'glance' database and only one table, >>> "migrate_version". I can't get it to create the rest of the tables >>> it needs. I've tried also: > > Awesome! Glad it's something easy. > >>> openstack-db --init --service glance --password GLANCE_DBPASS >> >> openstack-db has not been updated yet to enforce the new encoding. >> That will be happening today at some stage. > > Can you send a note to the list when that's done? > > Also, is the code being updated here when it's released? > > http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-6/ > > Such that the fix will be pushed out with an rpm from the repository > today? If so, I'll just resync my local repo when the fix is out. From rbowen at redhat.com Thu Apr 24 15:00:15 2014 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 24 Apr 2014 11:00:15 -0400 Subject: [Rdo-list] [OFI] Recap from last week In-Reply-To: <53305498.2080508@redhat.com> References: <53305498.2080508@redhat.com> Message-ID: <535926FF.5050208@redhat.com> On 03/24/2014 11:51 AM, Jason Rist wrote: > A bunch of us got together in Brno, CZ and cranked through some of the > core needs for the OpenStack Foreman Installer, AKA Staypuft! > > Some big progress: > ... > 3.) Big demo that we did on Friday showing our progress. > > If interested, I can post the DynFlow video (ogv) somewhere. > Hey, I don't know if I missed it somewhere, but did you end up posting that video? Better yet, do you have a more recent one? I'd love to see it. Thanks. --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rbowen at redhat.com Thu Apr 24 16:05:08 2014 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 24 Apr 2014 12:05:08 -0400 Subject: [Rdo-list] What's new in the OpenStack Heat Icehouse release - Hangout next week Message-ID: <53593634.2020300@redhat.com> Next week, Steve Baker of the Heat team will be doing a hangout titled "What's new in the OpenStack Heat Icehouse release". This will be at 5pm Eastern time on April 29th, which is Wednesday, April 30, 2014 at 9AM Steve's time and http://tm3.org/rdoheat in your timezone, and you can register to attend at http://goo.gl/Tzj85j In this hangout Steve Baker will walk through the key new features in the Icehouse release of OpenStack Heat. Please also note that the URL I posted for this earlier this week was incorrect - I had to recreate the Hangout because I did it wrong the first time. 
The correct URL is http://goo.gl/Tzj85j During the talk, we'll also be hanging out on #rdo-hangout for questions and commentary. See you then! --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From pbrady at redhat.com Thu Apr 24 16:54:16 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Thu, 24 Apr 2014 17:54:16 +0100 Subject: [Rdo-list] Glance problems... In-Reply-To: <53592869.2020605@soe.ucsc.edu> References: <53584F5F.6000004@soe.ucsc.edu> <5358D75F.6040904@redhat.com> <5359201F.7010503@soe.ucsc.edu> <53592869.2020605@soe.ucsc.edu> Message-ID: <535941B8.3010103@redhat.com> On 04/24/2014 04:06 PM, Erich Weiler wrote: > Also - do you think this bugfix will make it into the yum repo today? > > https://bugzilla.redhat.com/show_bug.cgi?id=1061329 A workaround is to: yum install eventlet, and restart the keystone service to pick up the new version. The repos will reference this new build automatically over the next day or so. > > On 4/24/14, 7:30 AM, Erich Weiler wrote: >>> I think the issue is due to a change to enforce utf-8 encoding. >>> Can you try again with the above line changed to: >>> >>> CREATE DATABASE glance DEFAULT CHARACTER SET utf8; >>> >>>> And it creates the 'glance' database and only one table, >>>> "migrate_version". I can't get it to create the rest of the tables >>>> it needs. I've tried also: >> >> Awesome! Glad it's something easy. >> >>>> openstack-db --init --service glance --password GLANCE_DBPASS >>> >>> openstack-db has not been updated yet to enforce the new encoding. >>> That will be happening today at some stage. >> >> Can you send a note to the list when that's done? You can now get the updated script with: yum install openstack-utils thanks, P?draig. From weiler at soe.ucsc.edu Thu Apr 24 20:25:08 2014 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Thu, 24 Apr 2014 13:25:08 -0700 Subject: [Rdo-list] Ceilometer DB? Message-ID: <53597324.50709@soe.ucsc.edu> Hi Y'all, I was going to try and set up Ceilometer using MySQL as a DB backend on my test cluster, but I had read somewhere that MySQL wasn't supported as a database backend with Ceilometer under RDO yet. I just wanted to confirm, is that true? Am I required to use MongoDB? Just checking before I go through a bunch of debugging just to learn that "it doesn't work anyway..." ;) cheers, erich From jrist at redhat.com Thu Apr 24 20:37:46 2014 From: jrist at redhat.com (Jason Rist) Date: Thu, 24 Apr 2014 14:37:46 -0600 Subject: [Rdo-list] Ceilometer DB? In-Reply-To: <53597324.50709@soe.ucsc.edu> References: <53597324.50709@soe.ucsc.edu> Message-ID: <5359761A.5020905@redhat.com> On Thu 24 Apr 2014 02:25:08 PM MDT, Erich Weiler wrote: > Hi Y'all, > > I was going to try and set up Ceilometer using MySQL as a DB backend > on my test cluster, but I had read somewhere that MySQL wasn't > supported as a database backend with Ceilometer under RDO yet. I just > wanted to confirm, is that true? Am I required to use MongoDB? > > Just checking before I go through a bunch of debugging just to learn > that "it doesn't work anyway..." ;) > > cheers, > erich > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list My understanding is that work is being done on SQLAlchemy as an alternative for MongoDB but at this time I don't think it is 'supported'. -J -- Jason E. Rist Senior Software Engineer OpenStack Management UI Red Hat, Inc. 
openuc: +1.972.707.6408 mobile: +1.720.256.3933 Freenode: jrist github/identi.ca: knowncitizen From weiler at soe.ucsc.edu Thu Apr 24 22:01:54 2014 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Thu, 24 Apr 2014 15:01:54 -0700 Subject: [Rdo-list] Ceilometer DB? In-Reply-To: <5359761A.5020905@redhat.com> References: <53597324.50709@soe.ucsc.edu> <5359761A.5020905@redhat.com> Message-ID: <535989D2.30202@soe.ucsc.edu> > My understanding is that work is being done on SQLAlchemy as an > alternative for MongoDB but at this time I don't think it is > 'supported'. More of a curiosity - but do you know why folks aren't working on integrating MySQL for ceilometer? I understand MongoDB is better at some tasks than MySQL, but given that many (most?) folks use MySQL for other OpenStack DBs, MongoDB is just "another thing to manage". You know? I'm sure there is a good reason for it, I'm just curious... From jrist at redhat.com Thu Apr 24 22:03:49 2014 From: jrist at redhat.com (Jason Rist) Date: Thu, 24 Apr 2014 16:03:49 -0600 Subject: [Rdo-list] Ceilometer DB? In-Reply-To: <535989D2.30202@soe.ucsc.edu> References: <53597324.50709@soe.ucsc.edu> <5359761A.5020905@redhat.com> <535989D2.30202@soe.ucsc.edu> Message-ID: <53598A45.3070309@redhat.com> On Thu 24 Apr 2014 04:01:54 PM MDT, Erich Weiler wrote: >> My understanding is that work is being done on SQLAlchemy as an >> alternative for MongoDB but at this time I don't think it is >> 'supported'. > > More of a curiosity - but do you know why folks aren't working on > integrating MySQL for ceilometer? I understand MongoDB is better at > some tasks than MySQL, but given that many (most?) folks use MySQL for > other OpenStack DBs, MongoDB is just "another thing to manage". You > know? > > I'm sure there is a good reason for it, I'm just curious... As SQLAlchemy is an Object Relational Mapper (ORM), it allows the integration of many DBs. -Jason -- Jason E. Rist Senior Software Engineer OpenStack Management UI Red Hat, Inc. openuc: +1.972.707.6408 mobile: +1.720.256.3933 Freenode: jrist github/identi.ca: knowncitizen From pmyers at redhat.com Thu Apr 24 22:07:52 2014 From: pmyers at redhat.com (Perry Myers) Date: Thu, 24 Apr 2014 18:07:52 -0400 Subject: [Rdo-list] Ceilometer DB? In-Reply-To: <53598A45.3070309@redhat.com> References: <53597324.50709@soe.ucsc.edu> <5359761A.5020905@redhat.com> <535989D2.30202@soe.ucsc.edu> <53598A45.3070309@redhat.com> Message-ID: <53598B38.10508@redhat.com> On 04/24/2014 06:03 PM, Jason Rist wrote: > On Thu 24 Apr 2014 04:01:54 PM MDT, Erich Weiler wrote: >>> My understanding is that work is being done on SQLAlchemy as an >>> alternative for MongoDB but at this time I don't think it is >>> 'supported'. >> >> More of a curiosity - but do you know why folks aren't working on >> integrating MySQL for ceilometer? I understand MongoDB is better at >> some tasks than MySQL, but given that many (most?) folks use MySQL for >> other OpenStack DBs, MongoDB is just "another thing to manage". You >> know? >> >> I'm sure there is a good reason for it, I'm just curious... > > As SQLAlchemy is an Object Relational Mapper (ORM), it allows the > integration of many DBs. I believe Eoghan may have some light to shed on this. 
I know he's told me in the past why the above is the case, I just can't recall from past conversations :) Perry From xzhao at bnl.gov Thu Apr 24 22:15:25 2014 From: xzhao at bnl.gov (Zhao, Xin) Date: Thu, 24 Apr 2014 18:15:25 -0400 Subject: [Rdo-list] Icehouse release Message-ID: <53598CFD.5030202@bnl.gov> Hello, Icehouse is released, I wonder when the RDO packages and documentation will be available for deploying Icehouse on RHEL system ? Thanks, Xin From pmyers at redhat.com Thu Apr 24 22:21:32 2014 From: pmyers at redhat.com (Perry Myers) Date: Thu, 24 Apr 2014 18:21:32 -0400 Subject: [Rdo-list] Icehouse release In-Reply-To: <53598CFD.5030202@bnl.gov> References: <53598CFD.5030202@bnl.gov> Message-ID: <53598E6C.8060708@redhat.com> On 04/24/2014 06:15 PM, Zhao, Xin wrote: > Hello, > > Icehouse is released, I wonder when the RDO packages and documentation > will be available for deploying Icehouse on RHEL system ? Sent earlier today: https://www.redhat.com/archives/rdo-list/2014-April/msg00080.html Cheers, Perry From Tim.Bell at cern.ch Fri Apr 25 05:25:57 2014 From: Tim.Bell at cern.ch (Tim Bell) Date: Fri, 25 Apr 2014 05:25:57 +0000 Subject: [Rdo-list] Ceilometer DB? In-Reply-To: <53598B38.10508@redhat.com> References: <53597324.50709@soe.ucsc.edu> <5359761A.5020905@redhat.com> <535989D2.30202@soe.ucsc.edu> <53598A45.3070309@redhat.com> <53598B38.10508@redhat.com> Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E5D9B5CF31@CERNXCHG41.cern.ch> According to http://docs.openstack.org/developer/ceilometer/install/dbreco.html, there are many database backend choices, mysql included. There is much discussion on the lists for the ceilometer schema. The MySQL implementation is tested in the gate but shows some performance issues there even at small scale. At CERN, we run with MongoDB (and have some scalability issues even then which are being worked on with the community) Tim -----Original Message----- From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Perry Myers Sent: 25 April 2014 00:08 To: jrist at redhat.com; Erich Weiler; Eoghan Glynn Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] Ceilometer DB? On 04/24/2014 06:03 PM, Jason Rist wrote: > On Thu 24 Apr 2014 04:01:54 PM MDT, Erich Weiler wrote: >>> My understanding is that work is being done on SQLAlchemy as an >>> alternative for MongoDB but at this time I don't think it is >>> 'supported'. >> >> More of a curiosity - but do you know why folks aren't working on >> integrating MySQL for ceilometer? I understand MongoDB is better at >> some tasks than MySQL, but given that many (most?) folks use MySQL >> for other OpenStack DBs, MongoDB is just "another thing to manage". >> You know? >> >> I'm sure there is a good reason for it, I'm just curious... > > As SQLAlchemy is an Object Relational Mapper (ORM), it allows the > integration of many DBs. I believe Eoghan may have some light to shed on this. I know he's told me in the past why the above is the case, I just can't recall from past conversations :) Perry _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list From weiler at soe.ucsc.edu Fri Apr 25 18:50:41 2014 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Fri, 25 Apr 2014 11:50:41 -0700 Subject: [Rdo-list] Open vSwitch issues.... 
Message-ID: <535AAE81.7060202@soe.ucsc.edu> Hi Y'all, I recently began rebuilding my OpenStack installation under the latest RDO icehouse release (as of two days ago at least), and everything is almost working, but I'm having issues with Open vSwitch, at least on the compute nodes. I'm use the ML2 plugin and VLAN tenant isolation. I have this in my compute node's /etc/neutron/plugin.ini file ---------- [ovs] bridge_mappings = physnet1:br-eth1 [ml2] type_drivers = vlan tenant_network_types = vlan mechanism_drivers = openvswitch [ml2_type_flat] [ml2_type_vlan] network_vlan_ranges = physnet1:200:209 ---------- My switchports that the nodes connect to are configured as trunks, allowing VLANs 200-209 to flow over them. My network that the VMs should be connecting to is: # neutron net-show cbse-net +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | True | | id | 23028b15-fb12-4a9f-9fba-02f165a52d44 | | name | cbse-net | | provider:network_type | vlan | | provider:physical_network | physnet1 | | provider:segmentation_id | 200 | | router:external | False | | shared | False | | status | ACTIVE | | subnets | dd25433a-b21d-475d-91e4-156b00f25047 | | tenant_id | 7c1980078e044cb08250f628cbe73d29 | +---------------------------+--------------------------------------+ # neutron subnet-show dd25433a-b21d-475d-91e4-156b00f25047 +------------------+--------------------------------------------------+ | Field | Value | +------------------+--------------------------------------------------+ | allocation_pools | {"start": "10.200.0.2", "end": "10.200.255.254"} | | cidr | 10.200.0.0/16 | | dns_nameservers | 121.43.52.1 | | enable_dhcp | True | | gateway_ip | 10.200.0.1 | | host_routes | | | id | dd25433a-b21d-475d-91e4-156b00f25047 | | ip_version | 4 | | name | | | network_id | 23028b15-fb12-4a9f-9fba-02f165a52d44 | | tenant_id | 7c1980078e044cb08250f628cbe73d29 | +------------------+--------------------------------------------------+ So those VMs on that network should send packets that would be tagged with VLAN 200. I launch an instance, then look at the compute node with the instance on it. It doesn't get a DHCP address, so it can't talk to the neutron node with the dnsmasq server running on it. I configure the VM's interface to be a static IP on VLAN200, 10.200.0.30, and netmask 255.255.0.0. I have another node set up on VLAN 200 on my switch to test with (10.200.0.50) that is a real bare-metal server. I can't ping my bare-metal server. I see the packets getting to eth1 on my compute node, but stopping there. Then I figure out that the packets are *not being tagged* for VLAN 200 as they leave the compute node!! So the switch is dropping them. As a test I configure the switchport with "native vlan 200", and voila, the ping works. So, Open vSwitch is not getting that it needs to tag the packets for VLAN 200. A little diagnostics on the compute node: ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=966.803s, table=0, n_packets=0, n_bytes=0, idle_age=966, priority=0 actions=NORMAL Shouldn't that show some VLAN tagging? 
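(If I understand the OVS agent right, a correctly wired-up node should also end up with flows on the physical bridge that rewrite the internal tag to the provider VLAN -- something roughly like this, where the port and internal VLAN numbers are made up:

    # ovs-ofctl dump-flows br-eth1
    ... priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:200,NORMAL

so the fact that br-int only has the default NORMAL rule makes me think the agent never programmed any of the VLAN flows.)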
and a tcpdump on eth1 on the compute node: # tcpdump -e -n -vv -i eth1 | grep -i arp tcpdump: WARNING: eth1: no IPv4 address assigned tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes 11:21:50.462447 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28 11:21:51.462968 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28 11:21:52.462330 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28 11:21:53.462311 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28 11:21:54.463169 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28 That tcpdump also confirms the ARP packets are not being tagged 200 as they leave the physical interface. This worked before when I was testing icehouse RC1, I don't know what changed with Open vSwitch... Anyone have any ideas? Thanks as always for the help!! This list has been very helpful. cheers, erich From weiler at soe.ucsc.edu Fri Apr 25 19:11:55 2014 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Fri, 25 Apr 2014 12:11:55 -0700 Subject: [Rdo-list] Open vSwitch issues.... In-Reply-To: <535AAE81.7060202@soe.ucsc.edu> References: <535AAE81.7060202@soe.ucsc.edu> Message-ID: <535AB37B.1000502@soe.ucsc.edu> Actually I appear to have : openstack-neutron-openvswitch-2014.1-10.el6.noarch but there appears to be a newer one out there: openstack-neutron-openvswitch-2014.1-11.el6.noarch.rpm Is there by chance a bug fix in that one? (assuming this is a bug...) On 04/25/14 11:50, Erich Weiler wrote: > Hi Y'all, > > I recently began rebuilding my OpenStack installation under the latest > RDO icehouse release (as of two days ago at least), and everything is > almost working, but I'm having issues with Open vSwitch, at least on the > compute nodes. > > I'm use the ML2 plugin and VLAN tenant isolation. I have this in my > compute node's /etc/neutron/plugin.ini file > > ---------- > [ovs] > bridge_mappings = physnet1:br-eth1 > > [ml2] > type_drivers = vlan > tenant_network_types = vlan > mechanism_drivers = openvswitch > > [ml2_type_flat] > > [ml2_type_vlan] > network_vlan_ranges = physnet1:200:209 > ---------- > > My switchports that the nodes connect to are configured as trunks, > allowing VLANs 200-209 to flow over them. 
> > My network that the VMs should be connecting to is: > > # neutron net-show cbse-net > +---------------------------+--------------------------------------+ > | Field | Value | > +---------------------------+--------------------------------------+ > | admin_state_up | True | > | id | 23028b15-fb12-4a9f-9fba-02f165a52d44 | > | name | cbse-net | > | provider:network_type | vlan | > | provider:physical_network | physnet1 | > | provider:segmentation_id | 200 | > | router:external | False | > | shared | False | > | status | ACTIVE | > | subnets | dd25433a-b21d-475d-91e4-156b00f25047 | > | tenant_id | 7c1980078e044cb08250f628cbe73d29 | > +---------------------------+--------------------------------------+ > > # neutron subnet-show dd25433a-b21d-475d-91e4-156b00f25047 > +------------------+--------------------------------------------------+ > | Field | Value | > +------------------+--------------------------------------------------+ > | allocation_pools | {"start": "10.200.0.2", "end": "10.200.255.254"} | > | cidr | 10.200.0.0/16 | > | dns_nameservers | 121.43.52.1 | > | enable_dhcp | True | > | gateway_ip | 10.200.0.1 | > | host_routes | | > | id | dd25433a-b21d-475d-91e4-156b00f25047 | > | ip_version | 4 | > | name | | > | network_id | 23028b15-fb12-4a9f-9fba-02f165a52d44 | > | tenant_id | 7c1980078e044cb08250f628cbe73d29 | > +------------------+--------------------------------------------------+ > > So those VMs on that network should send packets that would be tagged > with VLAN 200. > > I launch an instance, then look at the compute node with the instance on > it. It doesn't get a DHCP address, so it can't talk to the neutron node > with the dnsmasq server running on it. I configure the VM's interface > to be a static IP on VLAN200, 10.200.0.30, and netmask 255.255.0.0. I > have another node set up on VLAN 200 on my switch to test with > (10.200.0.50) that is a real bare-metal server. > > I can't ping my bare-metal server. I see the packets getting to eth1 on > my compute node, but stopping there. Then I figure out that the packets > are *not being tagged* for VLAN 200 as they leave the compute node!! So > the switch is dropping them. As a test I configure the switchport > with "native vlan 200", and voila, the ping works. > > So, Open vSwitch is not getting that it needs to tag the packets for > VLAN 200. A little diagnostics on the compute node: > > ovs-ofctl dump-flows br-int > NXST_FLOW reply (xid=0x4): > cookie=0x0, duration=966.803s, table=0, n_packets=0, n_bytes=0, > idle_age=966, priority=0 actions=NORMAL > > Shouldn't that show some VLAN tagging? 
> > and a tcpdump on eth1 on the compute node: > > # tcpdump -e -n -vv -i eth1 | grep -i arp > tcpdump: WARNING: eth1: no IPv4 address assigned > tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size > 65535 bytes > 11:21:50.462447 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), > length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 > tell 10.200.0.30, length 28 > 11:21:51.462968 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), > length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 > tell 10.200.0.30, length 28 > 11:21:52.462330 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), > length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 > tell 10.200.0.30, length 28 > 11:21:53.462311 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), > length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 > tell 10.200.0.30, length 28 > 11:21:54.463169 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), > length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 > tell 10.200.0.30, length 28 > > That tcpdump also confirms the ARP packets are not being tagged 200 as > they leave the physical interface. > > This worked before when I was testing icehouse RC1, I don't know what > changed with Open vSwitch... Anyone have any ideas? > > Thanks as always for the help!! This list has been very helpful. > > cheers, > erich From pbrady at redhat.com Fri Apr 25 19:52:52 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Fri, 25 Apr 2014 20:52:52 +0100 Subject: [Rdo-list] Open vSwitch issues.... In-Reply-To: <535AB37B.1000502@soe.ucsc.edu> References: <535AAE81.7060202@soe.ucsc.edu> <535AB37B.1000502@soe.ucsc.edu> Message-ID: <535ABD14.5080903@redhat.com> On 04/25/2014 08:11 PM, Erich Weiler wrote: > Actually I appear to have : > > openstack-neutron-openvswitch-2014.1-10.el6.noarch > > but there appears to be a newer one out there: > > openstack-neutron-openvswitch-2014.1-11.el6.noarch.rpm > > Is there by chance a bug fix in that one? (assuming this is a bug...) That was just a change to depend on a later version of novaclient. Since you're upgrading, it would be work running yum update to ensure you have the latest versions of all packages from the repo. I'll ask around here about ML2 changes. thanks, P?draig. From weiler at soe.ucsc.edu Sun Apr 27 19:27:07 2014 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Sun, 27 Apr 2014 12:27:07 -0700 Subject: [Rdo-list] Open vSwitch issues.... In-Reply-To: <535AB37B.1000502@soe.ucsc.edu> References: <535AAE81.7060202@soe.ucsc.edu> <535AB37B.1000502@soe.ucsc.edu> Message-ID: <535D5A0B.4000009@soe.ucsc.edu> I'm still trying to debug this but having issues.... 
:( When I start an instance on a compute node, I see this in /var/log/neutron/openvswitch-agent.log:

2014-04-27 12:03:02.009 1958 INFO neutron.agent.securitygroups_rpc [-] Preparing filters for devices set([u'61e2f303-89b2-4b52-bbc1-25d97bb29d76'])
2014-04-27 12:03:02.117 1958 ERROR neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Error while processing VIF ports
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1226, in rpc_loop
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent sync = self.process_network_ports(port_info)
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1069, in process_network_ports
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent port_info.get('updated', set()))
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py", line 247, in setup_port_filters
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent self.prepare_devices_filter(new_devices)
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py", line 164, in prepare_devices_filter
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent self.firewall.prepare_port_filter(device)
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent self.gen.next()
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.6/site-packages/neutron/agent/firewall.py", line 108, in defer_apply
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent self.filter_defer_apply_off()
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.6/site-packages/neutron/agent/linux/iptables_firewall.py", line 370, in filter_defer_apply_off
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent self.iptables.defer_apply_off()
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.6/site-packages/neutron/agent/linux/iptables_manager.py", line 353, in defer_apply_off
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent self._apply()
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.6/site-packages/neutron/agent/linux/iptables_manager.py", line 367, in _apply
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent with lockutils.lock(lock_name, utils.SYNCHRONIZED_PREFIX, True):
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent File "/usr/lib64/python2.6/contextlib.py", line 16, in __enter__
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent return self.gen.next()
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent File "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", line 183, in lock
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent raise cfg.RequiredOptError('lock_path')
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent RequiredOptError: value required for option: lock_path
2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent

I can't ping other hosts on the VLAN the VM is supposed to be on (when I configure the VM for a static IP), but I also don't see traffic on the OVS bridges at all. I'm using Icehouse RDO and ML2. I'm using these rpm versions:

rpm -qa | grep neutron
python-neutronclient-2.3.4-1.el6.noarch
openstack-neutron-2014.1-10.el6.noarch
openstack-neutron-ml2-2014.1-10.el6.noarch
python-neutron-2014.1-10.el6.noarch
openstack-neutron-openvswitch-2014.1-10.el6.noarch

Does this ring a bell for anyone? This used to work when I was using rc1 a while back, so I'm confused as to why it would change now...?

On 4/25/14, 12:11 PM, Erich Weiler wrote:
> Actually I appear to have :
>
> openstack-neutron-openvswitch-2014.1-10.el6.noarch
>
> but there appears to be a newer one out there:
>
> openstack-neutron-openvswitch-2014.1-11.el6.noarch.rpm
>
> Is there by chance a bug fix in that one? (assuming this is a bug...)
>
> On 04/25/14 11:50, Erich Weiler wrote:
>> Hi Y'all,
>>
>> I recently began rebuilding my OpenStack installation under the latest
>> RDO icehouse release (as of two days ago at least), and everything is
>> almost working, but I'm having issues with Open vSwitch, at least on the
>> compute nodes.
>>
>> I'm using the ML2 plugin and VLAN tenant isolation. I have this in my
>> compute node's /etc/neutron/plugin.ini file
>>
>> ----------
>> [ovs]
>> bridge_mappings = physnet1:br-eth1
>>
>> [ml2]
>> type_drivers = vlan
>> tenant_network_types = vlan
>> mechanism_drivers = openvswitch
>>
>> [ml2_type_flat]
>>
>> [ml2_type_vlan]
>> network_vlan_ranges = physnet1:200:209
>> ----------
>>
>> My switchports that the nodes connect to are configured as trunks,
>> allowing VLANs 200-209 to flow over them.
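One more sketch, again not taken from the original posts: for the bridge_mappings = physnet1:br-eth1 line in that plugin.ini to take effect, the br-eth1 OVS bridge has to exist on each compute node with the trunked NIC attached to it, and the agent restarted afterwards. Assuming eth1 is the trunk port, as in this thread, that amounts to roughly:

# create the provider bridge and attach the physical trunk interface
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1
# confirm the attachment, then restart the agent to pick up bridge_mappings
ovs-vsctl list-ports br-eth1
service neutron-openvswitch-agent restart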
>> >> My network that the VMs should be connecting to is: >> >> # neutron net-show cbse-net >> +---------------------------+--------------------------------------+ >> | Field | Value | >> +---------------------------+--------------------------------------+ >> | admin_state_up | True | >> | id | 23028b15-fb12-4a9f-9fba-02f165a52d44 | >> | name | cbse-net | >> | provider:network_type | vlan | >> | provider:physical_network | physnet1 | >> | provider:segmentation_id | 200 | >> | router:external | False | >> | shared | False | >> | status | ACTIVE | >> | subnets | dd25433a-b21d-475d-91e4-156b00f25047 | >> | tenant_id | 7c1980078e044cb08250f628cbe73d29 | >> +---------------------------+--------------------------------------+ >> >> # neutron subnet-show dd25433a-b21d-475d-91e4-156b00f25047 >> +------------------+--------------------------------------------------+ >> | Field | Value | >> +------------------+--------------------------------------------------+ >> | allocation_pools | {"start": "10.200.0.2", "end": "10.200.255.254"} | >> | cidr | 10.200.0.0/16 | >> | dns_nameservers | 121.43.52.1 | >> | enable_dhcp | True | >> | gateway_ip | 10.200.0.1 | >> | host_routes | | >> | id | dd25433a-b21d-475d-91e4-156b00f25047 | >> | ip_version | 4 | >> | name | | >> | network_id | 23028b15-fb12-4a9f-9fba-02f165a52d44 | >> | tenant_id | 7c1980078e044cb08250f628cbe73d29 | >> +------------------+--------------------------------------------------+ >> >> So those VMs on that network should send packets that would be tagged >> with VLAN 200. >> >> I launch an instance, then look at the compute node with the instance on >> it. It doesn't get a DHCP address, so it can't talk to the neutron node >> with the dnsmasq server running on it. I configure the VM's interface >> to be a static IP on VLAN200, 10.200.0.30, and netmask 255.255.0.0. I >> have another node set up on VLAN 200 on my switch to test with >> (10.200.0.50) that is a real bare-metal server. >> >> I can't ping my bare-metal server. I see the packets getting to eth1 on >> my compute node, but stopping there. Then I figure out that the packets >> are *not being tagged* for VLAN 200 as they leave the compute node!! So >> the switch is dropping them. As a test I configure the switchport >> with "native vlan 200", and voila, the ping works. >> >> So, Open vSwitch is not getting that it needs to tag the packets for >> VLAN 200. A little diagnostics on the compute node: >> >> ovs-ofctl dump-flows br-int >> NXST_FLOW reply (xid=0x4): >> cookie=0x0, duration=966.803s, table=0, n_packets=0, n_bytes=0, >> idle_age=966, priority=0 actions=NORMAL >> >> Shouldn't that show some VLAN tagging? 
>> >> and a tcpdump on eth1 on the compute node: >> >> # tcpdump -e -n -vv -i eth1 | grep -i arp >> tcpdump: WARNING: eth1: no IPv4 address assigned >> tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size >> 65535 bytes >> 11:21:50.462447 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), >> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 >> tell 10.200.0.30, length 28 >> 11:21:51.462968 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), >> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 >> tell 10.200.0.30, length 28 >> 11:21:52.462330 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), >> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 >> tell 10.200.0.30, length 28 >> 11:21:53.462311 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), >> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 >> tell 10.200.0.30, length 28 >> 11:21:54.463169 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), >> length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 >> tell 10.200.0.30, length 28 >> >> That tcpdump also confirms the ARP packets are not being tagged 200 as >> they leave the physical interface. >> >> This worked before when I was testing icehouse RC1, I don't know what >> changed with Open vSwitch... Anyone have any ideas? >> >> Thanks as always for the help!! This list has been very helpful. >> >> cheers, >> erich From pbrady at redhat.com Sun Apr 27 20:01:27 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Sun, 27 Apr 2014 21:01:27 +0100 Subject: [Rdo-list] Open vSwitch issues.... In-Reply-To: <535D5A0B.4000009@soe.ucsc.edu> References: <535AAE81.7060202@soe.ucsc.edu> <535AB37B.1000502@soe.ucsc.edu> <535D5A0B.4000009@soe.ucsc.edu> Message-ID: <535D6217.40303@redhat.com> On 04/27/2014 08:27 PM, Erich Weiler wrote: > I'm still trying to debug this but having issues.... :( > > When I start an instance on a compute node, I see this in /var/log/neutron/openvswitch-agent.log: > 2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent RequiredOptError: value required for option: lock_path Ouch, this looks like a recent regression. Can you please add this line to the [DEFAULT] section in /usr/share/neutron/neutron-dist.conf and restart the neutron services: lock_path = $state_path/lock We'll respin the neutron packages with that reinstated. thanks, P?draig. From pbrady at redhat.com Sun Apr 27 20:42:01 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Sun, 27 Apr 2014 21:42:01 +0100 Subject: [Rdo-list] Open vSwitch issues.... In-Reply-To: <535D6217.40303@redhat.com> References: <535AAE81.7060202@soe.ucsc.edu> <535AB37B.1000502@soe.ucsc.edu> <535D5A0B.4000009@soe.ucsc.edu> <535D6217.40303@redhat.com> Message-ID: <535D6B99.8070402@redhat.com> On 04/27/2014 09:01 PM, P?draig Brady wrote: > On 04/27/2014 08:27 PM, Erich Weiler wrote: >> I'm still trying to debug this but having issues.... :( >> >> When I start an instance on a compute node, I see this in /var/log/neutron/openvswitch-agent.log: > >> 2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent RequiredOptError: value required for option: lock_path > > Ouch, this looks like a recent regression. 
> Can you please add this line to the [DEFAULT] section > in /usr/share/neutron/neutron-dist.conf > and restart the neutron services: > > lock_path = $state_path/lock > > We'll respin the neutron packages with that reinstated. Note the /etc/neutron/neutron.conf file that ships with the latest neutron package you have, should have the above lock_path setting uncommented in the file. So I'm guessing that you had an existing /etc/neutron/neutron.conf file, than wasn't updated when you upgraded to the latest neutron package, and you need to merge in the changes from /etc/neutron/neutron.conf.rpmnew? I.E. The latest neutron package should be consistent on a new install, but could have this issue on upgrade. On a more general note, we should be aiming to provide only commented values in the default /etc/neutron/neutron.conf, with explicit values in /usr/share/neutron/neutron-dist.conf This will keep users from having this particular issue, and also be more consistent with other packages. Also on a slightly related note I see the default /etc/neutron/neutron.conf has signing_dir configured. Ihar wouldn't that trigger http://bugzilla.redhat.com/1050842 thanks, P?draig. From weiler at soe.ucsc.edu Sun Apr 27 21:06:27 2014 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Sun, 27 Apr 2014 14:06:27 -0700 Subject: [Rdo-list] Open vSwitch issues.... In-Reply-To: <535D6217.40303@redhat.com> References: <535AAE81.7060202@soe.ucsc.edu> <535AB37B.1000502@soe.ucsc.edu> <535D5A0B.4000009@soe.ucsc.edu> <535D6217.40303@redhat.com> Message-ID: <535D7153.9040206@soe.ucsc.edu> Ah yes, that mostly fixed it! The error went away. I still can't fully get connectivity, but I'm a lot closer. I'll debug a bit more. As a side note, when I do "ifconfig" on the compute node I see the following: # ifconfig br-eth1 Link encap:Ethernet HWaddr 00:E0:81:B4:A4:AD inet6 addr: fe80::c0db:61ff:fe57:81f5/64 Scope:Link UP BROADCAST RUNNING MTU:1500 Metric:1 RX packets:14 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1756 (1.7 KiB) TX bytes:468 (468.0 b) br-int Link encap:Ethernet HWaddr 26:77:6D:23:85:49 inet6 addr: fe80::5c45:c2ff:fe35:11e6/64 Scope:Link UP BROADCAST RUNNING MTU:1500 Metric:1 RX packets:18 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2064 (2.0 KiB) TX bytes:468 (468.0 b) eth0 Link encap:Ethernet HWaddr 00:E0:81:B4:A4:AC inet addr:10.1.1.143 Bcast:10.1.255.255 Mask:255.255.0.0 inet6 addr: fe80::2e0:81ff:feb4:a4ac/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:9450 errors:0 dropped:0 overruns:0 frame:0 TX packets:8896 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2023967 (1.9 MiB) TX bytes:2669456 (2.5 MiB) Interrupt:18 Memory:dc220000-dc240000 eth1 Link encap:Ethernet HWaddr 00:E0:81:B4:A4:AD inet6 addr: fe80::2e0:81ff:feb4:a4ad/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1 RX packets:10418 errors:0 dropped:0 overruns:0 frame:0 TX packets:71 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:707770 (691.1 KiB) TX bytes:4950 (4.8 KiB) Interrupt:19 Memory:dc260000-dc280000 eth1.1 Link encap:Ethernet HWaddr 00:E0:81:B4:A4:AD inet6 addr: fe80::2e0:81ff:feb4:a4ad/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 
(0.0 b) TX bytes:468 (468.0 b) eth1.200 Link encap:Ethernet HWaddr 00:E0:81:B4:A4:AD inet6 addr: fe80::2e0:81ff:feb4:a4ad/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1 RX packets:811 errors:0 dropped:0 overruns:0 frame:0 TX packets:55 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:40406 (39.4 KiB) TX bytes:3694 (3.6 KiB) int-br-eth1 Link encap:Ethernet HWaddr CA:51:D4:4D:E9:BF inet6 addr: fe80::c851:d4ff:fe4d:e9bf/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:42 errors:0 dropped:0 overruns:0 frame:0 TX packets:59 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2628 (2.5 KiB) TX bytes:4002 (3.9 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) phy-br-eth1 Link encap:Ethernet HWaddr D6:38:CF:DE:C0:EC inet6 addr: fe80::d438:cfff:fede:c0ec/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:59 errors:0 dropped:0 overruns:0 frame:0 TX packets:42 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4002 (3.9 KiB) TX bytes:2628 (2.5 KiB) qbr2ecb409c-00 Link encap:Ethernet HWaddr 1A:17:65:D7:70:8C inet6 addr: fe80::4027:8cff:fe1b:21e0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:15 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1644 (1.6 KiB) TX bytes:468 (468.0 b) qvb2ecb409c-00 Link encap:Ethernet HWaddr 1A:17:65:D7:70:8C inet6 addr: fe80::1817:65ff:fed7:708c/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:42 errors:0 dropped:0 overruns:0 frame:0 TX packets:55 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2628 (2.5 KiB) TX bytes:3702 (3.6 KiB) qvo2ecb409c-00 Link encap:Ethernet HWaddr 12:58:38:01:C4:FD inet6 addr: fe80::1058:38ff:fe01:c4fd/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:55 errors:0 dropped:0 overruns:0 frame:0 TX packets:42 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3702 (3.6 KiB) TX bytes:2628 (2.5 KiB) tap2ecb409c-00 Link encap:Ethernet HWaddr FE:16:3E:C2:24:6D inet6 addr: fe80::fc16:3eff:fec2:246d/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1213 errors:0 dropped:0 overruns:0 frame:0 TX packets:46 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:117410 (114.6 KiB) TX bytes:2916 (2.8 KiB) virbr0 Link encap:Ethernet HWaddr 52:54:00:8A:1C:9D inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) Now, before (with rc1), I don't remember seeing the VLAN interfaces (eth1.200 for example). Is that new? On 4/27/14, 1:01 PM, P?draig Brady wrote: > On 04/27/2014 08:27 PM, Erich Weiler wrote: >> I'm still trying to debug this but having issues.... 
:( >> >> When I start an instance on a compute node, I see this in /var/log/neutron/openvswitch-agent.log: > >> 2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent RequiredOptError: value required for option: lock_path > > Ouch, this looks like a recent regression. > Can you please add this line to the [DEFAULT] section > in /usr/share/neutron/neutron-dist.conf > and restart the neutron services: > > lock_path = $state_path/lock > > We'll respin the neutron packages with that reinstated. > > thanks, > P?draig. > From weiler at soe.ucsc.edu Sun Apr 27 22:33:18 2014 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Sun, 27 Apr 2014 15:33:18 -0700 Subject: [Rdo-list] Open vSwitch issues.... In-Reply-To: <535D6B99.8070402@redhat.com> References: <535AAE81.7060202@soe.ucsc.edu> <535AB37B.1000502@soe.ucsc.edu> <535D5A0B.4000009@soe.ucsc.edu> <535D6217.40303@redhat.com> <535D6B99.8070402@redhat.com> Message-ID: <59805D9E-445F-4266-AE5B-7C4833ADCEA1@soe.ucsc.edu> Yup - I didn't merge them, I'll go through and go that! > On Apr 27, 2014, at 1:42 PM, P?draig Brady wrote: > >> On 04/27/2014 09:01 PM, P?draig Brady wrote: >>> On 04/27/2014 08:27 PM, Erich Weiler wrote: >>> I'm still trying to debug this but having issues.... :( >>> >>> When I start an instance on a compute node, I see this in /var/log/neutron/openvswitch-agent.log: >> >>> 2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent RequiredOptError: value required for option: lock_path >> >> Ouch, this looks like a recent regression. >> Can you please add this line to the [DEFAULT] section >> in /usr/share/neutron/neutron-dist.conf >> and restart the neutron services: >> >> lock_path = $state_path/lock >> >> We'll respin the neutron packages with that reinstated. > > Note the /etc/neutron/neutron.conf file that ships with the latest > neutron package you have, should have the above lock_path setting > uncommented in the file. So I'm guessing that you had > an existing /etc/neutron/neutron.conf file, than wasn't > updated when you upgraded to the latest neutron package, > and you need to merge in the changes from /etc/neutron/neutron.conf.rpmnew? > I.E. The latest neutron package should be consistent on a new install, > but could have this issue on upgrade. > > On a more general note, we should be aiming to provide only commented > values in the default /etc/neutron/neutron.conf, with explicit values > in /usr/share/neutron/neutron-dist.conf > This will keep users from having this particular issue, > and also be more consistent with other packages. > > Also on a slightly related note I see the default /etc/neutron/neutron.conf > has signing_dir configured. Ihar wouldn't that trigger http://bugzilla.redhat.com/1050842 > > thanks, > P?draig. > From rbowen at redhat.com Mon Apr 28 13:33:48 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 28 Apr 2014 09:33:48 -0400 Subject: [Rdo-list] OhioLinuxFest CFP now open Message-ID: <535E58BC.2080600@redhat.com> Ohio LinuxFest call for papers is now open - http://ohiolinux.org/CFP - and they're specifically looking for OpenStack talks this year. If any of you are in the Columbus area, or within a few hours' drive, it's well worth going to. It's a really fun event with a passionate crowd. I don't think I'm going to be able to make it up there this year, but have driven up for about half of the events since they got started. Always a good time. 
--Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rbowen at redhat.com Mon Apr 28 14:06:25 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 28 Apr 2014 10:06:25 -0400 Subject: [Rdo-list] What's new in Heat - Hangout tomorrow Message-ID: <535E6061.2070204@redhat.com> Tomorrow Steve Baker will be presenting a Google Hangout on what's new in the Icehouse release of OpenStack Heat - https://plus.google.com/u/1/events/ckhqrki6iepg12vkqk5vnt7ijd0 . This will be at 5pm Eastern US time, 2pm Pacific time. If this time isn't convenient for you, note that the video of the event will remain available at that same URL after the event. Join us on the #rdo-hangout IRC channel during the event for questions and discussion. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From weiler at soe.ucsc.edu Mon Apr 28 20:05:58 2014 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Mon, 28 Apr 2014 13:05:58 -0700 Subject: [Rdo-list] Open vSwitch issues.... In-Reply-To: <59805D9E-445F-4266-AE5B-7C4833ADCEA1@soe.ucsc.edu> References: <535AAE81.7060202@soe.ucsc.edu> <535AB37B.1000502@soe.ucsc.edu> <535D5A0B.4000009@soe.ucsc.edu> <535D6217.40303@redhat.com> <535D6B99.8070402@redhat.com> <59805D9E-445F-4266-AE5B-7C4833ADCEA1@soe.ucsc.edu> Message-ID: <535EB4A6.3070005@soe.ucsc.edu> Neutron is all working now, thanks for the tip!! On 04/27/14 15:33, Erich Weiler wrote: > Yup - I didn't merge them, I'll go through and go that! > > > >> On Apr 27, 2014, at 1:42 PM, P?draig Brady wrote: >> >>> On 04/27/2014 09:01 PM, P?draig Brady wrote: >>>> On 04/27/2014 08:27 PM, Erich Weiler wrote: >>>> I'm still trying to debug this but having issues.... :( >>>> >>>> When I start an instance on a compute node, I see this in /var/log/neutron/openvswitch-agent.log: >>> >>>> 2014-04-27 12:03:02.117 1958 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent RequiredOptError: value required for option: lock_path >>> >>> Ouch, this looks like a recent regression. >>> Can you please add this line to the [DEFAULT] section >>> in /usr/share/neutron/neutron-dist.conf >>> and restart the neutron services: >>> >>> lock_path = $state_path/lock >>> >>> We'll respin the neutron packages with that reinstated. >> >> Note the /etc/neutron/neutron.conf file that ships with the latest >> neutron package you have, should have the above lock_path setting >> uncommented in the file. So I'm guessing that you had >> an existing /etc/neutron/neutron.conf file, than wasn't >> updated when you upgraded to the latest neutron package, >> and you need to merge in the changes from /etc/neutron/neutron.conf.rpmnew? >> I.E. The latest neutron package should be consistent on a new install, >> but could have this issue on upgrade. >> >> On a more general note, we should be aiming to provide only commented >> values in the default /etc/neutron/neutron.conf, with explicit values >> in /usr/share/neutron/neutron-dist.conf >> This will keep users from having this particular issue, >> and also be more consistent with other packages. >> >> Also on a slightly related note I see the default /etc/neutron/neutron.conf >> has signing_dir configured. Ihar wouldn't that trigger http://bugzilla.redhat.com/1050842 >> >> thanks, >> P?draig. 
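For anyone hitting the same lock_path regression on an upgraded node, the fix discussed above comes down to merging the packaged config update and restarting; a rough outline, assuming the default paths (these commands are not quoted from the thread):

# find config files the package upgrade left unmerged
find /etc/neutron -name '*.rpmnew'
diff -u /etc/neutron/neutron.conf /etc/neutron/neutron.conf.rpmnew
# after merging, neutron.conf should carry the uncommented setting
grep '^lock_path' /etc/neutron/neutron.conf   # expect: lock_path = $state_path/lock
# restart whichever neutron agents hit the RequiredOptError, e.g.
service neutron-openvswitch-agent restart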
>> From pbrady at redhat.com Tue Apr 29 12:56:33 2014 From: pbrady at redhat.com (=?UTF-8?B?UMOhZHJhaWcgQnJhZHk=?=) Date: Tue, 29 Apr 2014 13:56:33 +0100 Subject: [Rdo-list] [package announce] Icehouse GA Message-ID: <535FA181.3070306@redhat.com> The full Icehouse package set is now available in the RDO repos, for el6, el7 and Fedora 20 distros and derivatives. Instructions to get started with these repos are at: http://openstack.redhat.com/QuickStart In this release we have: openstack-ceilometer-2014.1 openstack-cinder-2014.1 openstack-glance-2014.1 openstack-heat-2014.1 openstack-keystone-2014.1 openstack-neutron-2014.1 openstack-nova-2014.1 openstack-sahara-2014.1 openstack-trove-2014.1 openstack-utils-2014.1 python-django-horizon-2014.1 python-django-sahara-2014.1 also this set of client packages: python-ceilometerclient-1.0.8-1 python-cinderclient-1.0.8-1 python-glanceclient-0.12.0-1 python-heatclient-0.2.9-1 python-keystoneclient-0.8.0-1 python-neutronclient-2.3.4-1 python-novaclient-2.17.0-1 python-openstackclient-0.3.1-1 python-saharaclient-0.7.0-1 python-swiftclient-2.0.3-1 python-troveclient-1.0.3-3 In the Fedora 20 repo (initially) we also have these newer incubated projects: openstack-tuskar-0.3.0 openstack-tuskar-ui-0.1.0 python-tuskarclient-0.1.4 openstack-tripleo-0.0.2 openstack-ironic-2014.1 python-ironicclient-0.1.2 thanks, P?draig. From rbowen at rcbowen.com Tue Apr 29 13:59:10 2014 From: rbowen at rcbowen.com (Rich Bowen) Date: Tue, 29 Apr 2014 09:59:10 -0400 Subject: [Rdo-list] Community sync meeting, April 29 Message-ID: <535FB02E.40003@rcbowen.com> We had a very quick community sync meeting this morning on the #RDO IRC channel on Freenode. I didn't do the minutes correctly, so here's a summary. (Full log at http://meetbot.fedoraproject.org/rdo/2014-04-29/rdo_community_irc_meeting.2014-04-29-13.07.log.html ) * Icehouse We have the RDO packages this morning, which is awesome: https://www.redhat.com/archives/rdo-list/2014-April/msg00105.html mburned mentioned that we would have live images by Summit, but we won't have any physical media to hand out for that. But we will have the bookmarks, which link to the QuickStart, and we can put the image there. I note that someone has already updated the QuickStart page to point to Icehouse - thanks for that. * Hangout We have a Heat hangout today. Steve Baker is doing one first thing in the morning tomorrow, which is 5pm my time today. https://plus.google.com/u/1/events/ckhqrki6iepg12vkqk5vnt7ijd0 And I talked with Hugh about doing one on TripleO and StayPuft next month, and we decided that we really want to focus on TripleO. So some time late May we'll do that. Date to be announced soon. * Newsletter The May newsletter is almost ready to go out. If you have anything you'd like to get in it, please contact me asap *OpenStack Summit Lots of us will be at the Summit. Hopefully I'll have RDO polos for everyone that needs one. If you're going to be there and haven't requested an RDO polo, please let me know as soon as you can, since that parcel is going out pretty soon. I'm hoping this time to do a better job of collecting user stories. I'll have my portable mic this time, and hopefully get a few recorded. If you hear any cool user stories at Summit, please send them my way so that I can get them recorded. I'd also like to do more of the engineer interviews I'm always intending to do. With so many of us there I should be able to get a few anyways. 
And then the next week, I'm going to be at LinuxCon Tokyo, and red_trela will be there too, I believe. * Bug Triage Been chugging along at it slowly. Current stats as of today: http://kashyapc.fedorapeople.org/virt/openstack/bugzilla/rdo-bug-status/all-rdo-bugs-29-04-2014.txt The above URL has bi-weekly stats. And, for Grizzly RDO bugs, we'd follow a Fedora EOL style approach, i.e. we'd request on Bugzilla to try w/ latest RDO IceHouse bits and re-open if the bug still persists. To borrow pixelb's wording: Once N+2 is released (Icehouse), we EOL N (Grizzly). Maybe we can organize community bug triage days after the next test day, or earlier depending on bug stream. * ask.openstack.org We're doing awesome with the ask.openstack site so far as keeping up with questions. Last night's count was 19, which is the lowest it's ever been. I've started looking at other keywords, too, and we're doing pretty well there, too. * CentOS Cloud Sig MBurns pinged on it a week or 2 ago, will follow up again this week. Didn't hear anything back. * Social Media Rikki Endsley has been pushing the @RDOCommunity Twitter recently, and we almost doubled our followers in the last two weeks. (431 followers this morning) There is also a Google Plus group that we have recently started paying more attention to, which is at http://tm3.org/rdogplus * End Meeting -- Rich Bowen - rbowen at rcbowen.com - @rbowen http://apachecon.com/ - @apachecon

From ramon at linux-labs.net Tue Apr 29 16:14:59 2014 From: ramon at linux-labs.net (Ramon Acedo) Date: Tue, 29 Apr 2014 17:14:59 +0100 Subject: [Rdo-list] Can't Deploy Foreman with openstack-foreman-installer for Bare Metal Provisioning (undefined method `[]' for nil:NilClass) Message-ID: <21851A5F-40F4-46BE-BF94-AA0F49A56793@linux-labs.net>
Hi all, I have been trying to test the OpenStack Foreman Installer with different combinations of Foreman versions and of the installer itself (and even different versions of Puppet) with no success so far. I know that Packstack alone works but I want to go all the way with multiple hosts and bare metal provisioning to eventually use it for large deployments and scale out Nova Compute and other services seamlessly. The error I get when running the foreman_server.sh script is always:
--------------
rake aborted!
undefined method `[]' for nil:NilClass
Tasks: TOP => db:seed
(See full trace by running task with --trace)
--------------
After that, if Foreman starts, there's nothing in the "Host groups" section which is supposed to be prepopulated by the foreman_server.sh script (as described in http://red.ht/1jdJ03q). The process I follow is very simple:

1. Install a clean RHEL 6.5 or CentOS 6.5

2. Enable EPEL

3. Enable the rdo-release repo:

a. rdo-release-havana-7: Foreman 1.3 and openstack-foreman-installer 1.0.6
b. rdo-release-havana-8: Foreman 1.5 and openstack-foreman-installer 1.0.6
c. rdo-release-icehouse-3: Foreman 1.5 and openstack-foreman-installer 2.0 (as a note here, the SCL repo needs to be enabled before the next step too).

4. Install openstack-foreman-installer

5. Create and export the needed variables:

export PROVISIONING_INTERFACE=eth0
export FOREMAN_GATEWAY=192.168.5.100
export FOREMAN_PROVISIONING=true

6. Run the script foreman_server.sh from /usr/share/openstack-foreman-installer/bin

For 3a and 3b I also tried with an older version of Puppet (3.2) with the same result.
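A rough sketch of what steps 5 and 6 above amount to on the Foreman host, using the variable values quoted in this message (the tee to a log file is only an optional convenience, not part of the documented procedure):

# run as root on the machine that will become the Foreman server
export PROVISIONING_INTERFACE=eth0
export FOREMAN_GATEWAY=192.168.5.100
export FOREMAN_PROVISIONING=true
cd /usr/share/openstack-foreman-installer/bin
bash foreman_server.sh 2>&1 | tee /root/foreman_server.log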
These are the full outputs:

3a: http://fpaste.org/97739/ (Havana and Foreman 1.3)
3b: http://fpaste.org/97760/ (Havana and Foreman 1.3 with Puppet 3.2)
3c: http://fpaste.org/97838/ (Icehouse and Foreman 1.5)

I'm sure somebody in the list has tried to deploy and configure Foreman for bare metal installations (DHCP+PXE) and the documentation and the foreman_server.sh script suggest it should be possible in a fairly easy way. I filed a bug as it might well be one, pending confirmation: https://bugzilla.redhat.com/show_bug.cgi?id=1092443
Any help is really appreciated!
Many thanks.
Ramon

From rich.minton at lmco.com Tue Apr 29 18:24:29 2014 From: rich.minton at lmco.com (Minton, Rich) Date: Tue, 29 Apr 2014 18:24:29 +0000 Subject: [Rdo-list] Cannot log into Foreman. Message-ID:
I'm not having much luck getting logged into the Foreman Web UI... After installing Foreman using the procedures on the RDO website I cannot log in as admin using the default password. I constantly get "Incorrect username or password". The Foreman version is 1.5.0-RC2. The "production.log" in /var/log/foreman contains this line after I attempt to log in.
Started POST "/users/login" for 10.0.64.100 at 2014-04-29 14:15:03 -0400
Processing by UsersController#login as HTML
Parameters: {"utf8"=>"?", "authenticity_token"=>"Cv76I6hPvlIYcC71Lnfnsp2JBVZHZQqNII5BbNs+PHI=", "login"=>{"login"=>"admin", "password"=>"[FILTERED]"}, "commit"=>"Login"}
invalid user
Redirected to https://foreman-test-1/users/login
Completed 302 Found in 238ms (ActiveRecord: 8.3ms)
I also get this a lot... not sure if it is related.
Started GET "/node/foreman-test-1.umtd-du.lmdit.us.lmco.com?format=yml" for 10.0.64.100 at 2014-04-29 14:04:09 -0400
Processing by HostsController#externalNodes as YML
Parameters: {"name"=>"foreman-test-1.umtd-du.lmdit.us.lmco.com"}
No smart proxy server found on ["foreman-test-1.umtd-du.lmdit.us.lmco.com"] and is not in trusted_puppetmaster_hosts
Redirected to https://foreman-test-1.umtd-du.lmdit.us.lmco.com/users/login
Filter chain halted as :require_puppetmaster_or_login rendered or redirected
Completed 403 Forbidden in 5ms (ActiveRecord: 0.7ms)
All ideas are welcome.
Thank you, Rick
Richard Minton
Lockheed Martin - D&IS
LMICC Systems Administrator
4000 Geerdes Blvd, 13D31
King of Prussia, PA 19406
Phone: 610-354-5482
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rbowen at redhat.com Tue Apr 29 20:41:24 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 29 Apr 2014 16:41:24 -0400 Subject: [Rdo-list] RHEL 7 rc cloud guest image available Message-ID: <53600E74.10205@redhat.com>
FYI - RHEL 7 guest image available: ftp://ftp.redhat.com/redhat/rhel/rc/7/GuestImage/
--Rich
-- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/

From ramon at linux-labs.net Wed Apr 30 12:03:11 2014 From: ramon at linux-labs.net (Ramon Acedo) Date: Wed, 30 Apr 2014 13:03:11 +0100 Subject: [Rdo-list] Can't Deploy Foreman with openstack-foreman-installer for Bare Metal Provisioning (undefined method `[]' for nil:NilClass) In-Reply-To: <21851A5F-40F4-46BE-BF94-AA0F49A56793@linux-labs.net> References: <21851A5F-40F4-46BE-BF94-AA0F49A56793@linux-labs.net> Message-ID: <45F36FED-983A-4CA0-8D8D-5E7325716B2C@linux-labs.net>
On 29 Apr 2014, at 17:14, Ramon Acedo wrote:
> Hi all,
>
> I have been trying to test the OpenStack Foreman Installer with different combinations of Foreman versions and of the installer itself (and even different versions of Puppet) with no success so far.
> > I know that Packstack alone works but I want to go all the way with multiple hosts and bare metal provisioning to eventually use it for large deployments and scale out Nova Compute and other services seamlessly. > > The error I get when running the foreman_server.sh script is always: > -------------- > rake aborted! > undefined method `[]' for nil:NilClass > > Tasks: TOP => db:seed > (See full trace by running task with --trace) > -------------- > > After that, if Foreman starts, there?s nothing in the "Host groups" section which is supposed to be prepopulated by the foreman_server.sh script (as described in http://red.ht/1jdJ03q). > > The process I follow is very simple: > > 1. Install a clean RHEL 6.5 or CentOS 6.5 > > 2. Enable EPEL > > 3. Enable the rdo-release repo: > > a. rdo-release-havana-7: Foreman 1.3 and openstack-foreman-installer 1.0.6 > b. rdo-release-havana-8: Foreman 1.5 and openstack-foreman-installer 1.0.6 > c. rdo-release-icehouse-3: Foreman 1.5 and openstack-foreman-installer 2.0 (as a note here, the SCL repo needs to be enabled before the next step too). > > 4. Install openstack-foreman-installer > > 5. Create and export the needed variables: > > export PROVISIONING_INTERFACE=eth0 > export FOREMAN_GATEWAY=192.168.5.100 > export FOREMAN_PROVISIONING=true After setting FOREMAN_PROVISIONING=false and running foreman_server.sh with the default versions of Puppet (3.5) and Foreman (1.5) everything installed and the "Host groups" in the "Configure" section are now populated. I?d like to use provisioning (DHCP, DNS and PXE) and everything suggests it should be possible so it looks like a bug. > > 6. Run the script foreman_server.sh from /usr/share/openstack-foreman-installer/bin > > For 3a and 3b I also tried with an older version of Puppet (3.2) with the same result. > > These are the full outputs: > > 3a: http://fpaste.org/97739/ (Havana and Foreman 1.3) > 3b: http://fpaste.org/97760/ (Havana and Foreman 1.3 with Puppet 3.2) > 3c: http://fpaste.org/97838/ (Icehouse and Foreman 1.5) > > I?m sure somebody in the list has tried to deploy and configure Foreman for bare metal installations (DHCP+PXE) and the documentation and the foreman_server.sh script suggest it should be possible in a fairly easy way. > > I filled a bug as it might well be one, pending confirmation: https://bugzilla.redhat.com/show_bug.cgi?id=1092443 > > Any help is really appreciated! > > Many thanks. > > Ramon > > From geert.jansen at ravellosystems.com Wed Apr 30 15:41:32 2014 From: geert.jansen at ravellosystems.com (Geert Jansen) Date: Wed, 30 Apr 2014 17:41:32 +0200 Subject: [Rdo-list] Running RDO IceHouse on EC2/Google Message-ID: Hi, I thought people might find this interesting. I wrote up a blog post on how you can run a multi-node OpenStack IceHouse setup on EC2 or GCE. This includes nested virtualization so your instances run full speed (private beta atm. Note: no QEmu in emulation!) and private tenant networks using VLANs. This could be useful for trying out OpenStack, or for quickly launching new installs for development and test. When an install is done the entire env. can be snapshotted. http://www.ravellosystems.com/blog/multi-node-openstack-rdo-icehouse-aws-ec2-google/ Feedback is welcome. 
Regards, Geert Jansen From John_Ingle at dell.com Wed Apr 30 22:05:03 2014 From: John_Ingle at dell.com (John_Ingle at dell.com) Date: Wed, 30 Apr 2014 22:05:03 +0000 Subject: [Rdo-list] unsubscribe Message-ID: <8B2520B71A0C214FBAD55D0AD5B803760C9CC51F@AUSX10HMPS305.AMER.DELL.COM> Dell - Internal Use - Confidential John Ingle Onsite Systems Engineer Dell | Enterprise Solutions Group Cell - +1 512-431-8567, Office - +1 512-728-5452 -------------- next part -------------- An HTML attachment was scrubbed... URL: