From rdo-info at redhat.com Sun Dec 1 18:01:44 2013 From: rdo-info at redhat.com (RDO Forum) Date: Sun, 1 Dec 2013 18:01:44 +0000 Subject: [Rdo-list] [RDO] RDO Blocker bugs Message-ID: <00000142af52be51-29c43cbb-2474-4ba1-bd88-105418000eb5-000000@email.amazonses.com> marafa started a discussion. RDO Blocker bugs --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/955/rdo-blocker-bugs Have a great day! From rdo-info at redhat.com Mon Dec 2 01:59:23 2013 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 2 Dec 2013 01:59:23 +0000 Subject: [Rdo-list] [RDO] Packstack + Seperate MySQL host Message-ID: <00000142b10808a7-5b837995-d4e2-4d6d-aaa5-882d40c3daa3-000000@email.amazonses.com> lrr started a discussion. Packstack + Seperate MySQL host --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/956/packstack-seperate-mysql-host Have a great day! From rdo-info at redhat.com Wed Dec 4 09:37:13 2013 From: rdo-info at redhat.com (RDO Forum) Date: Wed, 4 Dec 2013 09:37:13 +0000 Subject: [Rdo-list] [RDO] Fully qualified domain Message-ID: <00000142bcf7ece6-e047b20f-25fa-4000-8196-c2621213379c-000000@email.amazonses.com> sib started a discussion. Fully qualified domain --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/957/fully-qualified-domain Have a great day! From rdo-info at redhat.com Wed Dec 4 13:40:49 2013 From: rdo-info at redhat.com (RDO Forum) Date: Wed, 4 Dec 2013 13:40:49 +0000 Subject: [Rdo-list] [RDO] packstack breaks entire physical network Message-ID: <00000142bdd6efc4-e0eab1a7-ccb4-46b4-94bc-d36c41bc30a4-000000@email.amazonses.com> marafa started a discussion. packstack breaks entire physical network --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/958/packstack-breaks-entire-physical-network Have a great day! 
From pbrady at redhat.com Thu Dec 5 16:45:44 2013 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Thu, 05 Dec 2013 16:45:44 +0000 Subject: [Rdo-list] [package announce] openstack-nova security update Message-ID: <52A0ADB8.1080209@redhat.com> Grizzly and Havana RDO openstack-nova packages have been updated to openstack-nova-2013.1.4-4 and openstack-nova-2013.2-5 respectively. - Fix compute host disk space DoS when booting oversized images https://access.redhat.com/security/cve/CVE-2013-2096 https://access.redhat.com/security/cve/CVE-2013-4463 Other changes included in these respective versions are: openstack-nova-2013.1.4-4 - remove the -s option on qemu-img convert http://bugzilla.redhat.com/1016896 openstack-nova-2013.2-5 - Remove cert and scheduler hard dependency on cinderclient http://bugzilla.redhat.com/1031679 - Require ipmitool for baremetal driver http://bugzilla.redhat.com/1022243 From pbrady at redhat.com Thu Dec 5 16:49:22 2013 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Thu, 05 Dec 2013 16:49:22 +0000 Subject: [Rdo-list] [package announce] openstack-packstack update Message-ID: <52A0AE92.7030000@redhat.com> Havana RDO packstack has been updated as follows. 
openstack-packstack-2013.2.1-0.17.dev876 Add information on location of horizon password (rhbz#1002326) Make network_vlan_ranges available in GRE setups (rhbz#1006534) Include the host's FQDN in Horizon's ALLOWED_HOSTS (rhbz#1028678) Improve error reporting for shell commands (rhbz#1031786) Fixed comments for interactive installation (rhbz#1030767) Replace qpid_host with qpid_hostname (lp#1242715) Make sure iptables are enabled (rhbz#1023955) Align packstack templates with ceilometer upstream git repo (rhbz#1022189) Add missing options to packstack man page (rhbz#1032103) Disable unsupported features in Horizon for RHOS (rhbz#1035651) From matt at redhat.com Sat Dec 7 14:33:17 2013 From: matt at redhat.com (Matthew Farrellee) Date: Sat, 07 Dec 2013 09:33:17 -0500 Subject: [Rdo-list] Feedback - it isn't obvious to everyone that RDO works on CentOS Message-ID: <52A331AD.9050408@redhat.com> This is just an FYI, do with it what you will. In the Savanna community, I've had to field the question "Can I use RDO w/o a Red Hat license?" a handful of times now. The answer is obvious to me and likely everyone else on the rdo-list, but apparently it isn't to everyone. BTW, it looks like google "openstack + (fedora|centos|scientific linux)" returns an ad for RDO, so +1 there. Best, matt From pmyers at redhat.com Sat Dec 7 15:49:14 2013 From: pmyers at redhat.com (Perry Myers) Date: Sat, 07 Dec 2013 10:49:14 -0500 Subject: [Rdo-list] Feedback - it isn't obvious to everyone that RDO works on CentOS In-Reply-To: <52A331AD.9050408@redhat.com> References: <52A331AD.9050408@redhat.com> Message-ID: <52A3437A.20109@redhat.com> On 12/07/2013 09:33 AM, Matthew Farrellee wrote: > This is just an FYI, do with it what you will. > > In the Savanna community, I've had to field the question "Can I use RDO > w/o a Red Hat license?" a handful of times now. > > The answer is obvious to me and likely everyone else on the rdo-list, > but apparently it isn't to everyone. 
Thanks Matt for the feedback :) One thing I noticed is that http://openstack.redhat.com/Main_Page says: "RDO is a community of people using and deploying OpenStack on Red Hat and Red Hat-based platforms." That seems a little ambiguous to me. Perhaps rephrasing as: "RDO is a community of people using and deploying OpenStack on Red Hat Enterprise Linux, Fedora and distributions derived from these (such as CentOS, Scientific Linux and others)." I think that makes it much more clear. Rich, what do you think? I think in general using the phrase "Red Hat based platforms" (which is used repeatedly) might not be the best phrasing, since it doesn't make it clear whether we're talking about RHEL and its clones or something else that is more Red Hat specific. > BTW, it looks like google "openstack + (fedora|centos|scientific linux)" > returns an ad for RDO, so +1 there. Cool :) From rbowen at redhat.com Mon Dec 9 14:19:29 2013 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 09 Dec 2013 09:19:29 -0500 Subject: [Rdo-list] Feedback - it isn't obvious to everyone that RDO works on CentOS In-Reply-To: <52A3437A.20109@redhat.com> References: <52A331AD.9050408@redhat.com> <52A3437A.20109@redhat.com> Message-ID: <52A5D171.90804@redhat.com> On 12/07/2013 10:49 AM, Perry Myers wrote: > One thing I noticed is that http://openstack.redhat.com/Main_Page says: > > "RDO is a community of people using and deploying OpenStack on Red Hat > and Red Hat-based platforms." > > That seems a little ambiguous to me. Perhaps rephrasing as: > > "RDO is a community of people using and deploying OpenStack on Red Hat > Enterprise Linux, Fedora and distributions derived from these (such as > CentOS, Scientific Linux and others)." > > I think that makes it much more clear. Rich, what do you think? +1 Unfortunately, that text is deep enough on the page that it's kind of overwhelmed by the top "... on the industry's most trusted Linux platform ..." which, given the context, clearly means RHEL, right?
I'll see how we can tweak that in the available space, and update the text beneath. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rbowen at redhat.com Tue Dec 10 17:20:42 2013 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 10 Dec 2013 12:20:42 -0500 Subject: [Rdo-list] [Rdo-newsletter] RDO Community Newsletter: December 2013 Message-ID: <52A74D6A.9000201@redhat.com> Thanks for being a part of the RDO community! This is an important newsletter, as there are some changes coming to the RDO website that you'll want to know about. *ask.rdoproject.org* The Q&A aspects of our forum have now moved to ask.openstack.org, using the #RDO tag to distinguish content that is RDO-specific. The url ask.rdoproject.org is simply a handy redirect to ask.openstack.org showing all questions that are for RDO. The advantages of this move are many, including engaging a larger community of experts, and gaining many of the features whose absence from the forum a number of you have expressed frustration about. The transition period is likely to be a little rocky, and we want to ensure that the questions you've already asked here are not abandoned in the move. There are a few things that you can do to help out, if you're one of the people who have unanswered questions pending at the time of the move. 1) If you want to post a new question, post it at https://ask.openstack.org/en/questions/ask/?tags=rdo If you don't already have an ask.openstack.org account, go ahead and create one. a.o.o accepts OAuth logins, so you can use your Google account, or LaunchPad, or a variety of other OAuth sources. 2) If you want to see the RDO questions that are already on ask.openstack, you can see them at http://ask.rdoproject.org/ We're also working on pulling a feed of those questions over to the RDO site, to make it easier to pick out the RDO-specific conversations.
3) If you already have a question that you've posted to the RDO forum, and it's not answered yet, you might want to move it over to ask.openstack.org where it can get more eyes on it. If you do so, please add [MOVED to a.o.o] in the subject line here, so that we know to go look there. You could also add the URL of the new location in the message itself. 4) If you're a frequent poster on ask.openstack and wish to identify yourself as part of the RDO community, consider adding the RDO logo to your avatar, like Lars: https://ask.openstack.org/en/users/1745/larsks/ *Keep In Touch* Although the forum is already seeing less traffic, we still want to stay in touch. There are several different ways to do this. Subscribe to the main RDO mailing list - rdo-list - at http://www.redhat.com/mailman/listinfo/rdo-list This is the main mailing list for user support, discussion, and community updates. We're on Twitter, at @rdocommunity, and we're on Google Plus, at https://plus.google.com/communities/110409030763231732154 and there's always lots of good content from various RDO engineers at http://openstack.redhat.com/planet/ And there's still the forum - http://openstack.redhat.com/forum/ - where we'll continue to post about events, releases, and other community news. You can subscribe for email updates in your user profile, or you can subscribe to the RSS feed at http://openstack.redhat.com/forum/discussions/feed.rss Finally, you can manage your subscription to this newsletter at http://www.redhat.com/mailman/listinfo/rdo-newsletter *Test Day* We're tentatively planning the first RDO Test Day of 2014 for January 7th and 8th. Details are still a little sketchy, so please watch the rdo-list mailing list and the RDO Blog - http://openstack.redhat.com/blog/ - for updates in the days and weeks to come. We'll be testing the first drops of the Icehouse release. *Other Events* November and early December have been busy with meetups, and there's more to come.
On November 13th, RDO was featured on the FLOSS Weekly podcast, and you can listen to/watch the interview at http://twit.tv/show/floss-weekly/273 Dan Radez did a series of meetups in the northeastern United States, hitting Philadelphia, Rocky Hill CT, and New York in three days. You can see his slides at http://www.slideshare.net/danradez/open-stackmeetup-11-2013 OpenStack In Action, Paris, was held December 5th. Kashyap has a writeup at http://kashyapc.wordpress.com/2013/12/08/openstack-in-action-paris-5dec2013-quick-recap/ And the OpenStack CERN meetup was on December 6th - http://www.meetup.com/openstack-ch/events/138151562/ - with 63 people in attendance. OpenStack Israel - http://www.openstack-israel.org/ - was held December 9th in Jaffa and reports are just starting to come in. Follow @OpenStackIL on Twitter for the latest updates. And coming up there's the inaugural Cincinnati OpenStack Meetup - http://www.meetup.com/openstack-cincinnati/events/140978162/ - on December 17th. I'll be at that one, and will be bringing a little bit of RDO swag along. I'd love to meet some of you. If you're looking a little further out, don't forget that FOSDEM - https://fosdem.org/2014/ - will be in Brussels, February 1st - 2nd, and there will be a Virtualization and IaaS DevRoom there - https://fosdem.org/2014/schedule/track/virtualisation_and_iaas/ - where you can learn a lot about OpenStack, among other things, and meet some of the RDO community. *Articles* As always, the RDO community has been producing a great deal of new content - blogs, wiki pages, and so on. There's a new page in the wiki on the modular layer 2 (ML2) plugin - http://openstack.redhat.com/Modular_Layer_2_(ML2)_Plugin .
Adam Young writes about submitting patches to Keystone, in 'Expect the minus one' - http://adam.younglogic.com/2013/11/expect-the-minus-one/ Lars Kellogg-Stedman writes about a collection of useful OpenStack tools at http://blog.oddbit.com/2013/11/12/a-random-collection/ Matthias Runge wrote up his experience at the OpenStack Summit: http://www.matthias-runge.de/wordpress/2013/11/12/openstack-summit-hong-kong/ And there's so much more. You can get all caught up at http://openstack.redhat.com/planet/ *In Closing* Please don't hesitate to send me any comments you may have, either via email (rbowen at redhat.com) or Twitter (@rdocommunity), or on IRC (#rdo on Freenode). We want to know what we're doing right and wrong. -- Rich Bowen - rbowen at redhat.com For the RDO community -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From sgordon at redhat.com Wed Dec 11 04:11:43 2013 From: sgordon at redhat.com (Steve Gordon) Date: Tue, 10 Dec 2013 23:11:43 -0500 (EST) Subject: [Rdo-list] RDO update requires PyOpenSSL 0.12, which is not in EL-based distributions. In-Reply-To: <508750063.14811298.1386734860175.JavaMail.root@redhat.com> Message-ID: <292061079.14811675.1386735103857.JavaMail.root@redhat.com> Hi all, A couple of users have reported on ask.openstack.org [1][2], and via bugzilla [3], that a recent update to python-swiftclient (and possibly other clients) requires PyOpenSSL 0.12. RHEL only ships 0.10, and because this package ships in RHEL there is no version in EPEL. It seems to me the appropriate resolution is to put PyOpenSSL 0.12 (or higher) into RDO or roll back this update? Thanks! 
Steve [1] https://ask.openstack.org/en/question/8415/python-swiftclient-180-requires-pyopenssl-012/ [2] https://ask.openstack.org/en/question/8414/latest-rdo-havana-update-10-dec-breaks-el-dependencies/ [3] https://bugzilla.redhat.com/show_bug.cgi?id=1040097 From pbrady at redhat.com Wed Dec 11 05:42:53 2013 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Wed, 11 Dec 2013 05:42:53 +0000 Subject: [Rdo-list] RDO update requires PyOpenSSL 0.12, which is not in EL-based distributions. In-Reply-To: <292061079.14811675.1386735103857.JavaMail.root@redhat.com> References: <292061079.14811675.1386735103857.JavaMail.root@redhat.com> Message-ID: <52A7FB5D.4010100@redhat.com> On 12/11/2013 04:11 AM, Steve Gordon wrote: > Hi all, > > A couple of users have reported on ask.openstack.org [1][2], and via bugzilla [3], that a recent update to python-swiftclient (and possibly other clients) requires PyOpenSSL 0.12. RHEL only ships 0.10, and because this package ships in RHEL there is no version in EPEL. > > It seems to me the appropriate resolution is to put PyOpenSSL 0.12 (or higher) into RDO or roll back this update? pyOpenSSL 0.13 was put in the RDO repos earlier today. thanks, Pádraig. From kchamart at redhat.com Wed Dec 11 16:09:53 2013 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 11 Dec 2013 17:09:53 +0100 Subject: [Rdo-list] Using tmux to do an OpenStack demo Message-ID: <52A88E51.40609@redhat.com> Heya, Just a little while ago, I did an internal demo of a small aspect of OpenStack over a shared terminal using 'tmux' (inspired by a colleague). Just posting here the details of how/what I did, in case someone wants to try something similar. It went fairly well as everything just worked :-) Due to time limitations, we discussed three aspects.
Pre-requisite: An existing set-up:

[1] Flow of a VM
[2] Boot from Snapshot
[3] Neutron Tenant Network Creation/Boot a guest from this new Tenant

Here are the commands (attached here for reference): http://kashyapc.fedorapeople.org/virt/openstack/openstack-demo-commands.txt

And, these were my Neutron configs on both Controller/Compute nodes: http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt

Setup details:
==============

It's a two-node OpenStack RDO set-up configured manually on two Fedora 20 VMs (running Nested KVM on Intel).

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling).
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

Setting up [*] tmux for a shared read-only session:
===================================================

--------
$ useradd demo-ostk
$ passwd demo-ostk
$ yum install tmux -y
$ tmux -S /var/tmp/demo-ostk
$ chmod 777 /var/tmp/demo-ostk
$ cat /home/demo-ostk/run-tmux
#!/bin/sh -
exec /usr/bin/tmux -S /var/tmp/demo-ostk attach -r
$
$ grep demo-ostk /etc/passwd
demo-ostk:x:1001:1001::/home/demo-ostk:/home/demo-ostk/run-tmux
$
$ chown root.root /home/demo-ostk/run-tmux
$ chmod 0555 /home/demo-ostk/run-tmux
$ chcon system_u:object_r:bin_t:s0 /home/demo-ostk/run-tmux
--------

That's all. Ask your participants to log in as the demo user & the read-only session will be presented:

$ ssh demo-ostk at IP

Caveat:
======

tmux resizes the window to the smallest client (even if you're read-only). This is annoying. If a participant resizes it inadvertently, you can ask them to undo it, and it'll be back to normal on the controlling end. (This can be addressed with this setting in tmux.conf -- 'setw -g aggressive-resize on'.) Thanks to Lars Kellogg-Stedman for this tip.
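[Editor's note: the setting mentioned in the caveat is a one-line tmux option. A minimal sketch of how it would sit in a config file -- assuming the usual per-user ~/.tmux.conf location, which the post itself doesn't specify:]

```
# ~/.tmux.conf
# Size each window to the smallest session actually viewing it,
# rather than the smallest client attached overall, so one small
# read-only viewer doesn't shrink the presenter's window.
setw -g aggressive-resize on
```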
[*]References:
==============
- https://rwmj.wordpress.com/2011/11/23/using-tmux-to-share-a-terminal/
- http://kashyapc.wordpress.com/2011/08/19/share-an-interactive-ssh-session-bn-two-users-with-tmux/

-- /kashyap

-------------- next part --------------

[1] FLOW OF A VM
----------------

0. List different Nova flavors

$ nova flavor-list

1. Boot a guest with flavor 1 (i.e. 512 MB memory, and a small disk)

$ GLANCE_IMG=$(glance image-list | grep "cirros\ " | awk '{print $2;}')
$ nova boot --flavor 1 \
  --image $GLANCE_IMG cirr-guest1

2. Ensure it's active:

$ nova list

And also check in the instance's serial console log, if it *really* acquired a DHCP lease:

$ nova console-log cirr-guest1
[...]
Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending select for 12.0.0.2...
Lease of 12.0.0.2 obtained, lease time 120
deleting routers
route: SIOCDELRT: No such process
adding dns 192.169.142.1
cirros-ds 'net' up at 13.34
[...]

3. Try SSHing using the private IP via a namespace

# List namespaces
$ ip netns
qdhcp-c538ce2e-0d90-49a8-a1e4-219745d6079b
qrouter-f2df2518-78cb-4ad2-917c-3c1b0e994de7

# Reach internet from the router namespace:
$ ip netns exec qrouter-f2df2518-78cb-4ad2-917c-3c1b0e994de7 ping google.com

# SSH into the private IP via the router namespace
$ ip netns exec qrouter-f2df2518-78cb-4ad2-917c-3c1b0e994de7 ssh cirros at 12.0.0.2

4. Create a Floating IP on the "external" network, and list:

$ neutron floatingip-create ext
$ neutron floatingip-list

5. Pull the Nova guest ID, Floating IP ID and the VM port ID into environment variables:

$ NOVA_GUEST_ID=$(nova list | grep cirr-guest1 | awk '{print $2;}')
$ FLOATINGIP_ID=$(neutron floatingip-list | grep 192.169.142.11 | awk '{print $2}')
$ VM_PORT_ID=$(neutron port-list --device-id $NOVA_GUEST_ID | grep ip_address | awk '{print $2;}')

6.
Associate Floating and Fixed IP (this will take a little bit of time); and do a couple of 'list' operations:

===
# Associate:
$ neutron floatingip-associate $FLOATINGIP_ID $VM_PORT_ID

# List the Floating IP addresses to see mapping:
$ neutron floatingip-list

# List Nova instances:
$ nova list

$ ping 192.169.142.11
PING 192.169.142.11 (192.169.142.11) 56(84) bytes of data.
64 bytes from 192.169.142.11: icmp_seq=1 ttl=63 time=13.2 ms
64 bytes from 192.169.142.11: icmp_seq=2 ttl=63 time=1.50 ms
===

7. SSH into the instance via its floating IP

===
$ ssh cirros at 192.169.142.11
$ sudo -i

# shows only the Fixed IP
$ ifconfig -a
===

Some commands to run
~~~~~~~~~~~~~~~~~~~~
$ neutron net-list
$ neutron subnet-list

# List namespaces (DHCP namespace, Router namespace)
$ ip netns

[2] BOOT FROM SNAPSHOT
-----------------------

Create a snapshot of a running instance:

$ nova image-create cirr-guest2 snap1-of-cirr-guest2

List in Glance, to see if it shows up:

$ glance image-list

Boot via this image:

$ nova boot --flavor 1 --image 4cdc2f39-2c64-4145-8011-3c0bb58ff05f vm3

[3] NEUTRON TENANT NETWORKS CREATION
------------------------------------

1. Source the admin tenant credentials, and pull the SERVICES tenant info into a variable

$ . keystonerc_admin

2. Create a tenant

$ keystone tenant-create --name demo1
$ keystone user-create --name tuser1 --pass fedora
$ keystone user-role-add --user tuser1 --role user --tenant demo1

3. Create an RC file for this user and source the credentials:

$ cat >> ~/keystonerc_tuser1 <

Put it in your calendar: January 7th and 8th - RDO Test Day We wanted to give you lots of warning, so that you can get it in your calendar. I'll be following up in the coming weeks with a lot more information about what we'll be trying to test.
-- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From kchamart at redhat.com Fri Dec 13 15:38:26 2013 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 13 Dec 2013 16:38:26 +0100 Subject: [Rdo-list] Script to create Neutron tenant networks Message-ID: <52AB29F2.1060301@redhat.com> If you're creating a lot of Neutron tenant networks for testing, here's a trivial script that you could use in a loop: https://github.com/kashyapc/ostack-misc/blob/master/create-new-tenant-network.sh -- /kashyap From Tim.Bell at cern.ch Fri Dec 13 17:53:40 2013 From: Tim.Bell at cern.ch (Tim Bell) Date: Fri, 13 Dec 2013 17:53:40 +0000 Subject: [Rdo-list] RDO packages for Marconi and Barbican Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7589@CERNXCHG02.cern.ch> Is there a plan to package Marconi and Barbican as RDO packages ? Tim -- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 12650 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Tim Bell.vcf Type: text/x-vcard Size: 4902 bytes Desc: Tim Bell.vcf URL: From pmyers at redhat.com Fri Dec 13 18:19:19 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 13 Dec 2013 13:19:19 -0500 Subject: [Rdo-list] RDO packages for Marconi and Barbican In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7589@CERNXCHG02.cern.ch> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7589@CERNXCHG02.cern.ch> Message-ID: <52AB4FA7.1030600@redhat.com> On 12/13/2013 12:53 PM, Tim Bell wrote: > Is there a plan to package Marconi and Barbican as RDO packages ? Hi Tim, Good questions :) I don't know what the concrete timelines are, but certainly I think Marconi (being already incubated) should be packaged for RDO Icehouse in the near future.
Flavio, do you have more specific/concrete plans around when that would get done? As for Barbican, I'm a little less certain of that. Mainly because at this point it has not gone up for incubation yet (at least from what I recall). Our general rule of thumb has been to wait for a project to be incubated before packaging. We could certainly make an exception if we feel the need though. Dmitri/Adam, what are your guys' thoughts on Barbican? Perry From ayoung at redhat.com Fri Dec 13 18:27:23 2013 From: ayoung at redhat.com (Adam Young) Date: Fri, 13 Dec 2013 13:27:23 -0500 (EST) Subject: [Rdo-list] RDO packages for Marconi and Barbican In-Reply-To: <52AB4FA7.1030600@redhat.com> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7589@CERNXCHG02.cern.ch> <52AB4FA7.1030600@redhat.com> Message-ID: <1354017391.16838587.1386959243595.JavaMail.root@redhat.com> Lets package Barbican now. It is going to be an important effort, and the IdM team is watching it with interest. There is some question about scope (will it front a CA?), but it is going to be an important part of the security story regardless. ----- Original Message ----- From: "Perry Myers" To: "Tim Bell" , "rdo-list" , "Flavio Percoco" , "Adam Young" , "Dmitri Pal" Sent: Friday, December 13, 2013 1:19:19 PM Subject: Re: [Rdo-list] RDO packages for Marconi and Barbican On 12/13/2013 12:53 PM, Tim Bell wrote: > Is there a plan to package Marconi and Barbican as RDO packages ? Hi Tim, Good questions :) I don't know what the concrete timelines are, but certainly I think Marconi (being already incubated) should be packaged for RDO Icehouse in the near future. Flavio, do you have more specific/concrete plans around when that would get done? As for Barbican, I'm a little less certain of that. Mainly because at this point it has not gone up for incubation yet (at least from what I recall). Our general rule of thumb has been to wait for a project to be incubated before packaging. 
We could certainly make an exception if we feel the need though. Dmitri/Adam, what are your guys' thoughts on Barbican? Perry From Tim.Bell at cern.ch Fri Dec 13 18:27:29 2013 From: Tim.Bell at cern.ch (Tim Bell) Date: Fri, 13 Dec 2013 18:27:29 +0000 Subject: [Rdo-list] RDO packages for Marconi and Barbican In-Reply-To: <52AB4FA7.1030600@redhat.com> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7589@CERNXCHG02.cern.ch> <52AB4FA7.1030600@redhat.com> Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7655@CERNXCHG02.cern.ch> We don't have urgent requirements for these but they're on our radar... certainly, we would be happy to build on top of your packaging work (and contribute to the testing). It is more of an expectation setting activity... i.e. at what point during a program's lifetime is it worth starting the packaging work ? Too early, the RDO team wastes its time... too late may miss out on a chance to do early testing on a Red Hat base and validate the solution with selinux etc. So, no pressure but when you're ready, shout.... Tim > -----Original Message----- > From: Perry Myers [mailto:pmyers at redhat.com] > Sent: 13 December 2013 19:19 > To: Tim Bell; rdo-list; Flavio Percoco; Adam Young; Dmitri Pal > Subject: Re: [Rdo-list] RDO packages for Marconi and Barbican > > On 12/13/2013 12:53 PM, Tim Bell wrote: > > Is there a plan to package Marconi and Barbican as RDO packages ? > > Hi Tim, > > Good questions :) > > I don't know what the concrete timelines are, but certainly I think Marconi (being already incubated) should be packaged for RDO > Icehouse in the near future. > > Flavio, do you have more specific/concrete plans around when that would get done? > > As for Barbican, I'm a little less certain of that. Mainly because at this point it has not gone up for incubation yet (at least from what I > recall). > > Our general rule of thumb has been to wait for a project to be incubated before packaging. 
We could certainly make an exception if we > feel the need though. > > Dmitri/Adam, what are your guys' thoughts on Barbican? > > Perry From dpal at redhat.com Fri Dec 13 18:29:37 2013 From: dpal at redhat.com (Dmitri Pal) Date: Fri, 13 Dec 2013 13:29:37 -0500 Subject: [Rdo-list] RDO packages for Marconi and Barbican In-Reply-To: <52AB4FA7.1030600@redhat.com> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7589@CERNXCHG02.cern.ch> <52AB4FA7.1030600@redhat.com> Message-ID: <52AB5211.5090809@redhat.com> On 12/13/2013 01:19 PM, Perry Myers wrote: > On 12/13/2013 12:53 PM, Tim Bell wrote: >> Is there a plan to package Marconi and Barbican as RDO packages ? > Hi Tim, > > Good questions :) > > I don't know what the concrete timelines are, but certainly I think > Marconi (being already incubated) should be packaged for RDO Icehouse in > the near future. > > Flavio, do you have more specific/concrete plans around when that would > get done? > > As for Barbican, I'm a little less certain of that. Mainly because at > this point it has not gone up for incubation yet (at least from what I > recall). > > Our general rule of thumb has been to wait for a project to be incubated > before packaging. We could certainly make an exception if we feel the > need though. > > Dmitri/Adam, what are your guys' thoughts on Barbican? > > Perry We are not directly involved in Barbican. To the best of my knowledge its primary focus is to provide certificate issuance to the services and applications running in the cloud. We have been focusing more on the certificates for the cloud infra itself. The short term plan is to leverage certmonger on the client side (leveraging Linux platform under OpenStack) to fetch certs from Certmaster/FreeIPA/Dogtag to bootstrap the undercloud and overcloud and then provide FreeIPA/Dogtag as a back end for Barbican. But for it to be a viable solution upstream, Dogtag should be usable in the upstream dev environment, so we are working on making FreeIPA/Dogtag available in Debian for dev purposes. Once it is done we would be able to get in touch with the Barbican team again. Absence of Debian availability was a showstopper for a conversation about Barbican. -- Thank you, Dmitri Pal Sr. Engineering Manager for IdM portfolio Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From dpal at redhat.com Fri Dec 13 18:30:01 2013 From: dpal at redhat.com (Dmitri Pal) Date: Fri, 13 Dec 2013 13:30:01 -0500 Subject: [Rdo-list] RDO packages for Marconi and Barbican In-Reply-To: <1354017391.16838587.1386959243595.JavaMail.root@redhat.com> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7589@CERNXCHG02.cern.ch> <52AB4FA7.1030600@redhat.com> <1354017391.16838587.1386959243595.JavaMail.root@redhat.com> Message-ID: <52AB5229.1000408@redhat.com> On 12/13/2013 01:27 PM, Adam Young wrote: > Lets package Barbican now. It is going to be an important effort, and the IdM team is watching it with interest. There is some question about scope (will it front a CA?), but it is going to be an important part of the security story regardless. I would not rush for now. > > ----- Original Message ----- > From: "Perry Myers" > To: "Tim Bell" , "rdo-list" , "Flavio Percoco" , "Adam Young" , "Dmitri Pal" > Sent: Friday, December 13, 2013 1:19:19 PM > Subject: Re: [Rdo-list] RDO packages for Marconi and Barbican > > On 12/13/2013 12:53 PM, Tim Bell wrote: >> Is there a plan to package Marconi and Barbican as RDO packages ? > Hi Tim, > > Good questions :) > > I don't know what the concrete timelines are, but certainly I think > Marconi (being already incubated) should be packaged for RDO Icehouse in > the near future. > > Flavio, do you have more specific/concrete plans around when that would > get done? > > As for Barbican, I'm a little less certain of that.
Mainly because at > this point it has not gone up for incubation yet (at least from what I > recall). > > Our general rule of thumb has been to wait for a project to be incubated > before packaging. We could certainly make an exception if we feel the > need though. > > Dmitri/Adam, what are your guys' thoughts on Barbican? > > Perry -- Thank you, Dmitri Pal Sr. Engineering Manager for IdM portfolio Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From pmyers at redhat.com Fri Dec 13 18:37:18 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 13 Dec 2013 13:37:18 -0500 Subject: [Rdo-list] RDO packages for Marconi and Barbican In-Reply-To: <52AB5211.5090809@redhat.com> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7589@CERNXCHG02.cern.ch> <52AB4FA7.1030600@redhat.com> <52AB5211.5090809@redhat.com> Message-ID: <52AB53DE.5070601@redhat.com> On 12/13/2013 01:29 PM, Dmitri Pal wrote: > On 12/13/2013 01:19 PM, Perry Myers wrote: >> On 12/13/2013 12:53 PM, Tim Bell wrote: >>> Is there a plan to package Marconi and Barbican as RDO packages ? >> Hi Tim, >> >> Good questions :) >> >> I don't know what the concrete timelines are, but certainly I think >> Marconi (being already incubated) should be packaged for RDO Icehouse in >> the near future. >> >> Flavio, do you have more specific/concrete plans around when that would >> get done? >> >> As for Barbican, I'm a little less certain of that. Mainly because at >> this point it has not gone up for incubation yet (at least from what I >> recall). >> >> Our general rule of thumb has been to wait for a project to be incubated >> before packaging. We could certainly make an exception if we feel the >> need though. >> >> Dmitri/Adam, what are your guys' thoughts on Barbican? >> >> Perry > We are not directly involved in Barbican. To the best of my knowledge > its primary focus is to provide certificate issuance to the cervices and > applications running in the cloud. 
We have been focusing more on the > certificates for the cloud infra itself. > The short term plan is to leverage certmonger on the client side > (leveraging Linux platform under OpenStack) to fetch certs from > Certmaster/FreeIPA/Dogtag to bootstrap the undercloud and overcloud and > then provide FreeIPA/Dogtag as a back end for Barbican. > But for it to be a viable solution upstream Dogtag should be usable in > the upstream dev environment so we are working on making FreeIPA/Dogtag > available in Debian for dev purposes. Once it is done we would be able > to get in touch with Barbican team again. Absence of Debian availability > was a showstopper for a conversation about Barbican. All valid points Dmitri, but not relevant to whether or not we package up Barbican for use by the RDO community. My take is that once a project is incubated, we need to put someone on that project to at the least help package it. Barbican isn't there yet, but we'll have to watch and when it's ready we'll do that work. We should not gate RDO packaging of Barbican on whether or not it can use FreeIPA/Dogtag as a backend. From dpal at redhat.com Fri Dec 13 18:54:43 2013 From: dpal at redhat.com (Dmitri Pal) Date: Fri, 13 Dec 2013 13:54:43 -0500 Subject: [Rdo-list] RDO packages for Marconi and Barbican In-Reply-To: <52AB53DE.5070601@redhat.com> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7589@CERNXCHG02.cern.ch> <52AB4FA7.1030600@redhat.com> <52AB5211.5090809@redhat.com> <52AB53DE.5070601@redhat.com> Message-ID: <52AB57F3.10908@redhat.com> On 12/13/2013 01:37 PM, Perry Myers wrote: > On 12/13/2013 01:29 PM, Dmitri Pal wrote: >> On 12/13/2013 01:19 PM, Perry Myers wrote: >>> On 12/13/2013 12:53 PM, Tim Bell wrote: >>>> Is there a plan to package Marconi and Barbican as RDO packages ? 
>>> Hi Tim, >>> >>> Good questions :) >>> >>> I don't know what the concrete timelines are, but certainly I think >>> Marconi (being already incubated) should be packaged for RDO Icehouse in >>> the near future. >>> >>> Flavio, do you have more specific/concrete plans around when that would >>> get done? >>> >>> As for Barbican, I'm a little less certain of that. Mainly because at >>> this point it has not gone up for incubation yet (at least from what I >>> recall). >>> >>> Our general rule of thumb has been to wait for a project to be incubated >>> before packaging. We could certainly make an exception if we feel the >>> need though. >>> >>> Dmitri/Adam, what are your guys' thoughts on Barbican? >>> >>> Perry >> We are not directly involved in Barbican. To the best of my knowledge >> its primary focus is to provide certificate issuance to the services and >> applications running in the cloud. We have been focusing more on the >> certificates for the cloud infra itself. >> The short term plan is to leverage certmonger on the client side >> (leveraging Linux platform under OpenStack) to fetch certs from >> Certmaster/FreeIPA/Dogtag to bootstrap the undercloud and overcloud and >> then provide FreeIPA/Dogtag as a back end for Barbican. >> But for it to be a viable solution upstream Dogtag should be usable in >> the upstream dev environment so we are working on making FreeIPA/Dogtag >> available in Debian for dev purposes. Once it is done we would be able >> to get in touch with Barbican team again. Absence of Debian availability >> was a showstopper for a conversation about Barbican. > All valid points Dmitri, but not relevant to whether or not we package > up Barbican for use by the RDO community. > > My take is that once a project is incubated, we need to put someone on > that project to at the least help package it. Barbican isn't there yet, > but we'll have to watch and when it's ready we'll do that work. 
> > We should not gate RDO packaging of Barbican on whether or not it can > use FreeIPA/Dogtag as a backend. True. I was just providing context and our thinking about the project. Barbican is an interface to a CA. It is unclear what kind of CA would be supported as a back end out of the box. If it is just an API without any back end yet, then even if it is incubated it might not make much sense to package it until it becomes possible to use some (not necessarily FreeIPA/Dogtag) publicly available CA project as a backend. We will look at it when it is incubated. -- Thank you, Dmitri Pal Sr. Engineering Manager for IdM portfolio Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From pmyers at redhat.com Fri Dec 13 19:03:28 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 13 Dec 2013 14:03:28 -0500 Subject: [Rdo-list] RDO packages for Marconi and Barbican In-Reply-To: <52AB57F3.10908@redhat.com> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7589@CERNXCHG02.cern.ch> <52AB4FA7.1030600@redhat.com> <52AB5211.5090809@redhat.com> <52AB53DE.5070601@redhat.com> <52AB57F3.10908@redhat.com> Message-ID: <52AB5A00.2010807@redhat.com> >> We should not gate RDO packaging of Barbican on whether or not it can >> use FreeIPA/Dogtag as a backend. > > True. I was just providing context and our thinking about the project. > Barbican is an interface to a CA. It is unclear what kind of CA would be > supported as a back end out of the box. > If it is just an API without any back end yet, then even if it is incubated it > might not make much sense to package it until it becomes possible to > use some (not necessarily FreeIPA/Dogtag) publicly available CA project > as a backend. Absolutely. There needs to be SOME reference implementation for it to use, otherwise it's not very useful :) > We will look at it when it is incubated. 
Yep From markmc at redhat.com Sun Dec 15 14:24:30 2013 From: markmc at redhat.com (Mark McLoughlin) Date: Sun, 15 Dec 2013 14:24:30 +0000 Subject: [Rdo-list] RDO packages for Marconi and Barbican In-Reply-To: <52AB53DE.5070601@redhat.com> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7589@CERNXCHG02.cern.ch> <52AB4FA7.1030600@redhat.com> <52AB5211.5090809@redhat.com> <52AB53DE.5070601@redhat.com> Message-ID: <1387117470.3160.25.camel@sorcha> On Fri, 2013-12-13 at 13:37 -0500, Perry Myers wrote: > All valid points Dmitri, but not relevant to whether or not we package > up Barbican for use by the RDO community. > > My take is that once a project is incubated, we need to put someone on > that project to at the least help package it. Barbican isn't there yet, > but we'll have to watch and when it's ready we'll do that work. > > We should not gate RDO packaging of Barbican on whether or not it can > use FreeIPA/Dogtag as a backend. Yes, RDO is a "vanilla distribution of upstream OpenStack" - RDO takes what upstream releases and packages it. The point that a project is accepted upstream into incubation feels like the right point to start packaging it and making it available to RDO users for feedback which may help the project get into shape for graduating from incubation. Mark. From flavio at redhat.com Sun Dec 15 16:24:21 2013 From: flavio at redhat.com (Flavio Percoco) Date: Sun, 15 Dec 2013 17:24:21 +0100 Subject: [Rdo-list] RDO packages for Marconi and Barbican In-Reply-To: <52AB4FA7.1030600@redhat.com> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7589@CERNXCHG02.cern.ch> <52AB4FA7.1030600@redhat.com> Message-ID: <20131215162421.GK1565@redhat.com> On 13/12/13 13:19 -0500, Perry Myers wrote: >On 12/13/2013 12:53 PM, Tim Bell wrote: >> Is there a plan to package Marconi and Barbican as RDO packages ? 
> >Hi Tim, > >Good questions :) > >I don't know what the concrete timelines are, but certainly I think >Marconi (being already incubated) should be packaged for RDO Icehouse in >the near future. > >Flavio, do you have more specific/concrete plans around when that would >get done? The plan is to start packaging Marconi as soon as the client library is full-featured. We're not far from there and our current target for this is I-2. We could package Marconi right away but it'll be hard to use / test without the python library. Cheers, FF -- @flaper87 Flavio Percoco -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From flavio at redhat.com Sun Dec 15 16:28:24 2013 From: flavio at redhat.com (Flavio Percoco) Date: Sun, 15 Dec 2013 17:28:24 +0100 Subject: [Rdo-list] RDO packages for Marconi and Barbican In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7655@CERNXCHG02.cern.ch> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7589@CERNXCHG02.cern.ch> <52AB4FA7.1030600@redhat.com> <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7655@CERNXCHG02.cern.ch> Message-ID: <20131215162824.GL1565@redhat.com> On 13/12/13 18:27 +0000, Tim Bell wrote: > >We don't have urgent requirements for these but they're on our radar... certainly, we would be happy to build on top of your packaging work (and contribute to the testing). Please, if you have any scenario where you'd like to test Marconi in, let us know. :) As already mentioned in my previous email on this thread, we're not far from having a full-featured client library, which will allow for implementing things on top of Marconi. Marconi's team is always in #openstack-marconi @ freenode and you can find me in the #rdo channel as well (flaper87). Cheers, FF -- @flaper87 Flavio Percoco -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From Tim.Bell at cern.ch Sun Dec 15 19:44:55 2013 From: Tim.Bell at cern.ch (Tim Bell) Date: Sun, 15 Dec 2013 19:44:55 +0000 Subject: [Rdo-list] RDO packages for Marconi and Barbican In-Reply-To: <1387117470.3160.25.camel@sorcha> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7589@CERNXCHG02.cern.ch> <52AB4FA7.1030600@redhat.com> <52AB5211.5090809@redhat.com> <52AB53DE.5070601@redhat.com> <1387117470.3160.25.camel@sorcha> Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E5D98AD25F@CERNXCHG02.cern.ch> I fully agree with Mark.... at incubation, we know it is coming. If packaging for RDO is a major issue, this should be fed back to the project and the TC as a barrier to be overcome before evolution to a new status. Tim > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Mark McLoughlin > Sent: 15 December 2013 15:25 > To: Perry Myers > Cc: rdo-list; dpal at redhat.com; Flavio Percoco; Adam Young > Subject: Re: [Rdo-list] RDO packages for Marconi and Barbican > > On Fri, 2013-12-13 at 13:37 -0500, Perry Myers wrote: > > All valid points Dmitri, but not relevant to whether or not we package > > up Barbican for use by the RDO community. > > > > My take is that once a project is incubated, we need to put someone on > > that project to at the least help package it. Barbican isn't there > > yet, but we'll have to watch and when it's ready we'll do that work. > > > > We should not gate RDO packaging of Barbican on whether or not it can > > use FreeIPA/Dogtag as a backend. > > Yes, RDO is a "vanilla distribution of upstream OpenStack" - RDO takes what upstream releases and packages it. 
> > The point that a project is accepted upstream into incubation feels like the right point to start packaging it and making it available to RDO > users for feedback which may help the project get into shape for graduating from incubation. > > Mark. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From rbowen at redhat.com Mon Dec 16 19:32:46 2013 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 16 Dec 2013 14:32:46 -0500 Subject: [Rdo-list] RDO packages for Marconi and Barbican In-Reply-To: <1387117470.3160.25.camel@sorcha> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98A7589@CERNXCHG02.cern.ch> <52AB4FA7.1030600@redhat.com> <52AB5211.5090809@redhat.com> <52AB53DE.5070601@redhat.com> <1387117470.3160.25.camel@sorcha> Message-ID: <52AF555E.5010805@redhat.com> On 12/15/2013 09:24 AM, Mark McLoughlin wrote: > The point that a project is accepted upstream into incubation feels like > the right point to start packaging it and making it available to RDO > users for feedback which may help the project get into shape for > graduating from incubation. +1 If something has reached that point, our users want to be able to experiment with it, and we need to be packaging it. But prior to that feels premature. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From mattdm at fedoraproject.org Tue Dec 17 15:29:44 2013 From: mattdm at fedoraproject.org (Matthew Miller) Date: Tue, 17 Dec 2013 10:29:44 -0500 Subject: [Rdo-list] Official Fedora 20 cloud images available for you now! 
Message-ID: <20131217152943.GA16182@disco.bu.edu> Fedora 20 is out -- get it now from http://cloud.fedoraproject.org/ glance image-create --name "Fedora 20 x86_64" --disk-format qcow2 \ --container-format bare --is-public true --copy-from \ http://cloud.fedoraproject.org/fedora-20.x86_64.qcow2 From an OpenStack point of view, one notable addition is the inclusion of the Heat provisioning tools in the images. As with Fedora 19, cloud-init is configured to create a user `fedora` by default. -- Matthew Miller -- Fedora Project Architect -- From rdo-info at redhat.com Tue Dec 17 15:31:39 2013 From: rdo-info at redhat.com (RDO Forum) Date: Tue, 17 Dec 2013 15:31:39 +0000 Subject: [Rdo-list] [RDO] Get Fedora 20 now! Message-ID: <00000143012f1697-d3ee3e0a-b1ab-48c2-9ae2-cf0b6264bf5c-000000@email.amazonses.com> mattdm started a discussion. Get Fedora 20 now! --- Follow the link below to check it out: http://openstack.redhat.com/forum/discussion/959/get-fedora-20-now Have a great day! From pbrady at redhat.com Wed Dec 18 17:07:33 2013 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Wed, 18 Dec 2013 17:07:33 +0000 Subject: [Rdo-list] [package announce] Stable Havana 2013.2.1 update Message-ID: <52B1D655.2010105@redhat.com> The RDO Havana repositories were updated with the latest stable 2013.2.1 update. Details of the changes can be drilled down to from: https://launchpad.net/nova/havana/2013.2.1 https://launchpad.net/glance/havana/2013.2.1 https://launchpad.net/horizon/havana/2013.2.1 https://launchpad.net/keystone/havana/2013.2.1 https://launchpad.net/cinder/havana/2013.2.1 https://launchpad.net/quantum/havana/2013.2.1 https://launchpad.net/ceilometer/havana/2013.2.1 https://launchpad.net/heat/havana/2013.2.1 In addition, these RDO specific changes were included: openstack-neutron-2013.2-13 - Remove dnsmasq version warning, bz#997961 - Ensure that disabled services are properly handled on upgrade, bz#1040704 - Add vpnaas/fwaas configs to init scripts, bz#1032450 
- Pass neutron rootwrap.conf in sudoers.d/neutron, bz#984097 - Add missing debug and vpnaas rootwrap filters, bz#1034207 thanks, Pádraig. From lars at redhat.com Wed Dec 18 20:01:44 2013 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 18 Dec 2013 15:01:44 -0500 Subject: [Rdo-list] [package announce] Stable Havana 2013.2.1 update In-Reply-To: <52B1D655.2010105@redhat.com> References: <52B1D655.2010105@redhat.com> Message-ID: <20131218200144.GA29921@redhat.com> On Wed, Dec 18, 2013 at 05:07:33PM +0000, Pádraig Brady wrote: > The RDO Havana repositories were updated with the latest stable 2013.2.1 update > Details of the changes can be drilled down to from: I see these in the epel-6 repository, but not in fedora-19 (http://repos.fedorapeople.org/repos/openstack/openstack-havana/fedora-19/). Is this just a latency issue and they'll eventually show up? -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From pbrady at redhat.com Thu Dec 19 01:21:40 2013 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Thu, 19 Dec 2013 01:21:40 +0000 Subject: [Rdo-list] [package announce] Stable Havana 2013.2.1 update In-Reply-To: <20131218200144.GA29921@redhat.com> References: <52B1D655.2010105@redhat.com> <20131218200144.GA29921@redhat.com> Message-ID: <52B24A24.80703@redhat.com> On 12/18/2013 08:01 PM, Lars Kellogg-Stedman wrote: > On Wed, Dec 18, 2013 at 05:07:33PM +0000, Pádraig Brady wrote: >> The RDO Havana repositories were updated with the latest stable 2013.2.1 update >> Details of the changes can be drilled down to from: > > I see these in the epel-6 repository, but not in fedora-19 > (http://repos.fedorapeople.org/repos/openstack/openstack-havana/fedora-19/). > Is this just a latency issue and they'll eventually show up? 
CI tests are completing on those, and they'll show up shortly From pmyers at redhat.com Thu Dec 19 12:45:58 2013 From: pmyers at redhat.com (Perry Myers) Date: Thu, 19 Dec 2013 07:45:58 -0500 Subject: [Rdo-list] [rhos-list] RDO problem In-Reply-To: References: Message-ID: <52B2EA86.9050301@redhat.com> On 12/19/2013 02:03 AM, Guang Ya GY Liu wrote: > Hi, > > I was following this install guide to install OpenStack with RDO: > http://openstack.redhat.com/Quickstart Hi, I'm going to move this thread over to rdo-list, since that's where the RDO community members hang out. > After install and OpenStack start, every time I boot a new instance, > there will be a new nova-compute start up, after I create two instance, > there are three nova-compute, do you know why the instance create caused > new nova-compute start? > > [root at db03b04 network-scripts(keystone_admin)]# ps -ef | grep nova-com > nova 5174 6099 0 01:07 ? 00:00:00 /usr/bin/python > /usr/bin/nova-compute --logfile /var/log/nova/compute.log <<<<<<<<<< Why? > nova 6099 1 0 Dec18 ? 00:07:47 /usr/bin/python > /usr/bin/nova-compute --logfile /var/log/nova/compute.log > nova 6553 6099 0 Dec18 ? 00:00:00 /usr/bin/python > /usr/bin/nova-compute --logfile /var/log/nova/compute.log <<<<<<<<<< Why? Russell, can you provide a quick explanation here? Cheers, Perry From pbrady at redhat.com Thu Dec 19 13:52:12 2013 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Thu, 19 Dec 2013 13:52:12 +0000 Subject: [Rdo-list] [package announce] openstack-selinux update Message-ID: <52B2FA0C.7070404@redhat.com> Havana RDO EL6 has been updated to improve SELinux support for neutron. 
openstack-selinux-0.1.3-2 http://bugzilla.redhat.com/1039204 - Set neutron-vpn-agent to neutron_exec_t - Require RHEL 6.5 selinux-policy http://bugzilla.redhat.com/1020052 - Set correct file contexts - Add Neutron LBaaS tunable and additional policy From rbryant at redhat.com Thu Dec 19 18:43:29 2013 From: rbryant at redhat.com (Russell Bryant) Date: Thu, 19 Dec 2013 13:43:29 -0500 Subject: [Rdo-list] [rhos-list] RDO problem In-Reply-To: <52B2EA86.9050301@redhat.com> References: <52B2EA86.9050301@redhat.com> Message-ID: <52B33E51.4020201@redhat.com> On 12/19/2013 07:45 AM, Perry Myers wrote: > On 12/19/2013 02:03 AM, Guang Ya GY Liu wrote: >> Hi, >> >> I was following this install guide to install OpenStack with RDO: >> http://openstack.redhat.com/Quickstart > > Hi, > > I'm going to move this thread over to rdo-list, since that's where the > RDO community members hang out. > >> After install and OpenStack start, every time I boot a new instance, >> there will be a new nova-compute start up, after I create two instance, >> there are three nova-compute, do you know why the instance create caused >> new nova-compute start? >> >> [root at db03b04 network-scripts(keystone_admin)]# ps -ef | grep nova-com >> nova 5174 6099 0 01:07 ? 00:00:00 /usr/bin/python >> /usr/bin/nova-compute --logfile /var/log/nova/compute.log <<<<<<<<<< Why? >> nova 6099 1 0 Dec18 ? 00:07:47 /usr/bin/python >> /usr/bin/nova-compute --logfile /var/log/nova/compute.log >> nova 6553 6099 0 Dec18 ? 00:00:00 /usr/bin/python >> /usr/bin/nova-compute --logfile /var/log/nova/compute.log <<<<<<<<<< Why? > > Russell, can you provide a quick explanation here? nova-compute spawns child processes, but not to run multiple instances of nova-compute itself. Perhaps it's just something inaccurate with the process list where the command part didn't get updated... not sure. In this case 6099 is the normal parent nova-compute. The others are suspect. Can you grab an strace log from them? Something like ... 
# strace -p 5174 -o 5174.txt -- Russell Bryant From dansmith at redhat.com Thu Dec 19 18:46:28 2013 From: dansmith at redhat.com (Dan Smith) Date: Thu, 19 Dec 2013 10:46:28 -0800 Subject: [Rdo-list] [rhos-list] RDO problem In-Reply-To: <52B33E51.4020201@redhat.com> References: <52B2EA86.9050301@redhat.com> <52B33E51.4020201@redhat.com> Message-ID: <52B33F04.2010204@redhat.com> > nova-compute spawns child processes, but not to run multiple instances > of nova-compute itself. > > Perhaps it's just something inaccurate with the process list where the > command part didn't get updated... not sure. In this case 6099 is the > normal parent nova-compute. The others are suspect. > > Can you grab an strace log from them? Something like ... > > # strace -p 5174 -o 5174.txt Also do this, which might be more obvious if it's what we think it is: ls -l /proc/5174/exe --Dan From liugya at cn.ibm.com Thu Dec 19 22:23:56 2013 From: liugya at cn.ibm.com (Guang Ya GY Liu) Date: Fri, 20 Dec 2013 06:23:56 +0800 Subject: [Rdo-list] [rhos-list] RDO problem In-Reply-To: <52B33F04.2010204@redhat.com> References: <52B2EA86.9050301@redhat.com> <52B33E51.4020201@redhat.com> <52B33F04.2010204@redhat.com> Message-ID: Thanks Russell and Dan, the following are the output that you need. [root at db03b04 ~]# cat 5174.txt restart_syscall(<... 
resuming interrupted call ...>) = 0 kill(5173, SIG_0) = 0 kill(6099, SIG_0) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], SA_RESTORER, 0x7f9827095500}, 8) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 nanosleep({2, 0}, 0x7f97ebffcb80) = 0 kill(5173, SIG_0) = 0 kill(6099, SIG_0) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], SA_RESTORER, 0x7f9827095500}, 8) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 nanosleep({2, 0}, 0x7f97ebffcb80) = 0 kill(5173, SIG_0) = 0 kill(6099, SIG_0) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], SA_RESTORER, 0x7f9827095500}, 8) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 nanosleep({2, 0}, 0x7f97ebffcb80) = 0 kill(5173, SIG_0) = 0 kill(6099, SIG_0) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], SA_RESTORER, 0x7f9827095500}, 8) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 nanosleep({2, 0}, 0x7f97ebffcb80) = 0 kill(5173, SIG_0) = 0 kill(6099, SIG_0) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 [root at db03b04 ~]# ls -l /proc/5174/exe lrwxrwxrwx 1 nova nova 0 Dec 19 01:27 /proc/5174/exe -> /usr/bin/python Thanks, Guangya From: Dan Smith To: Russell Bryant , Perry Myers , Guang Ya GY Liu/China/IBM at IBMCN, rdo-list Date: 2013/12/20 02:46 Subject: Re: [rhos-list] RDO problem > nova-compute spawns child processes, but not to run multiple instances > of nova-compute itself. > > Perhaps it's just something inaccurate with the process list where the > command part didn't get updated... not sure. In this case 6099 is the > normal parent nova-compute. The others are suspect. > > Can you grab an strace log from them? Something like ... > > # strace -p 5174 -o 5174.txt Also do this, which might be more obvious if it's what we think it is: ls -l /proc/5174/exe --Dan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Tim.Bell at cern.ch Fri Dec 20 12:58:47 2013 From: Tim.Bell at cern.ch (Tim Bell) Date: Fri, 20 Dec 2013 12:58:47 +0000 Subject: [Rdo-list] Packstack error Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E5D98BF093@CERNXCHG02.cern.ch> Any ideas how to fix ? I'm running Scientific Linux 6.5. Running packstack (turning off neutron) ERROR : Error appeared during Puppet run: 128.142.135.93_horizon.pp Error: Parameter name failed on Package[horizon-packages]: Name must be a String not Array at /var/tmp/packstack/805bcb724cfe4f8499e799d1795f3a0c/manifests/128.142.135.93_horizon.pp:7 You will find full trace in log /var/tmp/packstack/20131220-135003-LzYOG0/manifests/128.142.135.93_horizon.pp.log The additional logs just report the same thing (i.e. Name must be a String not Array). cd /usr/lib/python2.6/site-packages/packstack/puppet/modules tar --dereference -cpzf - apache ceilometer certmonger cinder concat firewall glance heat horizon inifile keystone memcached mongodb mysql neutron nova nssdb openstack packstack qpid rsync ssh stdlib swift sysctl tempest vcsrepo vlan vswitch xinetd | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root at 128.142.135.93 tar -C /var/tmp/packstack/805bcb724cfe4f8499e799d1795f3a0c/modules -xpzf - 2013-12-20 13:52:12::ERROR::run_setup::909::root:: Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 904, in main _main(confFile) File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 573, in _main runSequences() File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 552, in runSequences controller.runAllSequences() File "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", line 84, in runAllSequences sequence.run(self.CONF) File "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", line 105, in run step.run(config=config) File 
"/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", line 52, in run raise SequenceError(str(ex)) SequenceError: Error appeared during Puppet run: 128.142.135.93_horizon.pp Error: Parameter name failed on Package[horizon-packages]: Name must be a String not Array at /var/tmp/packstack/805bcb724cfe4f8499e799d1795f3a0c/manifests/128.142.135.93_horizon.pp:7 You will find full trace in log /var/tmp/packstack/20131220-135003-LzYOG0/manifests/128.142.135.93_horizon.pp.log [cid:image001.png at 01CEFD89.483D8020] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 12650 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Tim Bell.vcf Type: text/x-vcard Size: 4902 bytes Desc: Tim Bell.vcf URL: From dron at redhat.com Fri Dec 20 13:07:06 2013 From: dron at redhat.com (Dafna Ron) Date: Fri, 20 Dec 2013 15:07:06 +0200 Subject: [Rdo-list] Packstack error In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E5D98BF093@CERNXCHG02.cern.ch> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98BF093@CERNXCHG02.cern.ch> Message-ID: <52B440FA.9030404@redhat.com> I think it's the new puppet: https://ask.openstack.org/en/question/9208/got-error-when-perform-havana-install/ On 12/20/2013 02:58 PM, Tim Bell wrote: > > Any ideas how to fix ? I'm running Scientific Linux 6.5. > > Running packstack (turning off neutron) > > ERROR : Error appeared during Puppet run: 128.142.135.93_horizon.pp > > Error: Parameter name failed on Package[horizon-packages]: Name must > be a String not Array at > /var/tmp/packstack/805bcb724cfe4f8499e799d1795f3a0c/manifests/128.142.135.93_horizon.pp:7 > > You will find full trace in log > /var/tmp/packstack/20131220-135003-LzYOG0/manifests/128.142.135.93_horizon.pp.log > > The additional logs just report the same thing (i.e. 
Name must be a > String not Array). > > cd /usr/lib/python2.6/site-packages/packstack/puppet/modules > > tar --dereference -cpzf - apache ceilometer certmonger cinder concat > firewall glance heat horizon inifile keystone memcached mongodb mysql > neutron nova nssdb openstack packstack qpid rsync ssh stdlib swift > sysctl tempest vcsrepo vlan vswitch xinetd | ssh -o > StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null > root at 128.142.135.93 tar -C > /var/tmp/packstack/805bcb724cfe4f8499e799d1795f3a0c/modules -xpzf - > > 2013-12-20 13:52:12::ERROR::run_setup::909::root:: Traceback (most > recent call last): > > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 904, in main > > _main(confFile) > > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 573, in _main > > runSequences() > > File > "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", > line 552, in runSequences > > controller.runAllSequences() > > File > "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", > line 84, in runAllSequences > > sequence.run(self.CONF) > > File > "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", > line 105, in run > > step.run(config=config) > > File > "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", > line 52, in run > > raise SequenceError(str(ex)) > > SequenceError: Error appeared during Puppet run: 128.142.135.93_horizon.pp > > Error: Parameter name failed on Package[horizon-packages]: Name must > be a String not Array at > /var/tmp/packstack/805bcb724cfe4f8499e799d1795f3a0c/manifests/128.142.135.93_horizon.pp:7 > > You will find full trace in log > /var/tmp/packstack/20131220-135003-LzYOG0/manifests/128.142.135.93_horizon.pp.log > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -- Dafna Ron From morazi at 
redhat.com Fri Dec 20 13:24:46 2013 From: morazi at redhat.com (Mike Orazi) Date: Fri, 20 Dec 2013 08:24:46 -0500 Subject: [Rdo-list] Packstack error In-Reply-To: <52B440FA.9030404@redhat.com> References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98BF093@CERNXCHG02.cern.ch> <52B440FA.9030404@redhat.com> Message-ID: <52B4451E.307@redhat.com> Sunil filed a similar report around dnsmasq: https://bugzilla.redhat.com/show_bug.cgi?id=1045283 that looks similar in terms of the string/array expectations. We're investigating the matter and will release an update as soon as we can. Thanks, Mike On 12/20/2013 08:07 AM, Dafna Ron wrote: > I think it's the new puppet: > > https://ask.openstack.org/en/question/9208/got-error-when-perform-havana-install/ > > > > On 12/20/2013 02:58 PM, Tim Bell wrote: >> >> Any ideas how to fix ? I'm running Scientific Linux 6.5. >> >> Running packstack (turning off neutron) >> >> ERROR : Error appeared during Puppet run: 128.142.135.93_horizon.pp >> >> Error: Parameter name failed on Package[horizon-packages]: Name must >> be a String not Array at >> /var/tmp/packstack/805bcb724cfe4f8499e799d1795f3a0c/manifests/128.142.135.93_horizon.pp:7 >> >> >> You will find full trace in log >> /var/tmp/packstack/20131220-135003-LzYOG0/manifests/128.142.135.93_horizon.pp.log >> >> >> The additional logs just report the same thing (i.e. Name must be a 
>> >> cd /usr/lib/python2.6/site-packages/packstack/puppet/modules >> >> tar --dereference -cpzf - apache ceilometer certmonger cinder concat >> firewall glance heat horizon inifile keystone memcached mongodb mysql >> neutron nova nssdb openstack packstack qpid rsync ssh stdlib swift >> sysctl tempest vcsrepo vlan vswitch xinetd | ssh -o >> StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null >> root at 128.142.135.93 tar -C >> /var/tmp/packstack/805bcb724cfe4f8499e799d1795f3a0c/modules -xpzf - >> >> 2013-12-20 13:52:12::ERROR::run_setup::909::root:: Traceback (most >> recent call last): >> >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >> line 904, in main >> >> _main(confFile) >> >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >> line 573, in _main >> >> runSequences() >> >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", >> line 552, in runSequences >> >> controller.runAllSequences() >> >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", >> line 84, in runAllSequences >> >> sequence.run(self.CONF) >> >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", >> line 105, in run >> >> step.run(config=config) >> >> File >> "/usr/lib/python2.6/site-packages/packstack/installer/core/sequences.py", >> line 52, in run >> >> raise SequenceError(str(ex)) >> >> SequenceError: Error appeared during Puppet run: >> 128.142.135.93_horizon.pp >> >> Error: Parameter name failed on Package[horizon-packages]: Name must >> be a String not Array at >> /var/tmp/packstack/805bcb724cfe4f8499e799d1795f3a0c/manifests/128.142.135.93_horizon.pp:7 >> >> >> You will find full trace in log >> /var/tmp/packstack/20131220-135003-LzYOG0/manifests/128.142.135.93_horizon.pp.log >> >> >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> 
https://www.redhat.com/mailman/listinfo/rdo-list

From pbrady at redhat.com Fri Dec 20 17:01:22 2013
From: pbrady at redhat.com (Pádraig Brady)
Date: Fri, 20 Dec 2013 17:01:22 +0000
Subject: [Rdo-list] Packstack error
In-Reply-To: <52B4451E.307@redhat.com>
References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98BF093@CERNXCHG02.cern.ch> <52B440FA.9030404@redhat.com> <52B4451E.307@redhat.com>
Message-ID: <52B477E2.5070309@redhat.com>

On 12/20/2013 01:24 PM, Mike Orazi wrote:
> Sunil filed a similar report around dnsmasq:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1045283
>
> that looks similar in terms of the string/array expectations. We're
> investigating the matter and will release an update as soon as we can.

Now fixed in the RDO repos.

Pádraig.

From pbrady at redhat.com Fri Dec 20 18:23:04 2013
From: pbrady at redhat.com (Pádraig Brady)
Date: Fri, 20 Dec 2013 18:23:04 +0000
Subject: [Rdo-list] [package announce] openstack-packstack update
Message-ID: <52B48B08.8030306@redhat.com>

Havana RDO packstack has been updated as follows.
openstack-packstack-2013.2.1-0.25.dev936
- Fix syntax (disallowed by puppet-3.4) for installing packages (rhbz#1045283)
- Use class for notifier strategy (rhbz#1020002)
- CONFIG_NEUTRON_LBAAS_HOSTS should be empty in allinone (rhbz#1040585)
- service_plugins must not be list with empty string (rhbz#1040585)
- Allow Ceilometer API for all hosts (rhbz#1040404)
- Require also core_plugin setting
- NEUTRON_LBAAS_HOSTS should be empty by default (rhbz#1040039)
- Upgrades DB before neutron server starts (rhbz#1037675)
- Adds localhost to the list of Horizon's ALLOWED_HOSTS
- Doesn't set up the L3_EXT_BRIDGE twice (rhbz#1000981)
- Updates puppet-nova module (#1015995)
- Adds support for LBaaS agent (#1019780)
- Adds validation for gluster volumes using hostnames (#1020479)
- Validates type of given ssh key (#1022477)
- Doesn't touch NetworkManager and iptables (rhbz#1024292, rhbz#1023955)
- Updates puppet-certmonger module and puppet-pacemaker module (rhbz#1027455)
- Adds puppet-openstack-storage (rhbz#1027460)
- Adds missing options to packstack man page (rhbz#1032103)
- Adds support for cinder::backup::swift (rhbz#1021627)
- Adds auth option to qpid (rhbz#972643)
- Fixes errors when nova is disabled (rhbz#987888, rhbz#1024564, rhbz#1026795)
- Fixes the nova_ceilometer.pp template (rhbz#1032070)
- Fixes heat installer when executed in interactive mode
- Updates puppet-neutron module to latest stable/havana branch (rhbz#1017280)
- Added the help_url pointing to RH doc (rhbz#1030398)

From Tim.Bell at cern.ch Sun Dec 22 18:52:12 2013
From: Tim.Bell at cern.ch (Tim Bell)
Date: Sun, 22 Dec 2013 18:52:12 +0000
Subject: [Rdo-list] Ceilometer concepts description
Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E5D98CD75C@CERNXCHG02.cern.ch>

I've noticed that the ceilometer concepts explanation in docs.openstack.org is rather weak (https://bugs.launchpad.net/bugs/1263500)

The best definition of the concepts that I've seen is in http://openstack.redhat.com/CeilometerQuickStart

What are the license conditions for RDO documentation ?
Are we permitted to copy/paste the RDO description to the standard OpenStack docs ?

Tim

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 12650 bytes
Desc: image001.png
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Tim Bell.vcf
Type: text/x-vcard
Size: 4902 bytes
Desc: Tim Bell.vcf
URL: 

From kchamart at redhat.com Sun Dec 22 19:51:52 2013
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Sun, 22 Dec 2013 20:51:52 +0100
Subject: [Rdo-list] Ceilometer concepts description
In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E5D98CD75C@CERNXCHG02.cern.ch>
References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98CD75C@CERNXCHG02.cern.ch>
Message-ID: <52B742D8.9080602@redhat.com>

On 12/22/2013 07:52 PM, Tim Bell wrote:
>
> I've noticed that the ceilometer concepts explanation in docs.openstack.org is rather weak (https://bugs.launchpad.net/bugs/1263500)
>
> The best definition of the concepts that I've seen is in http://openstack.redhat.com/CeilometerQuickStart
>
> What are the license conditions for RDO documentation ?

From my reading, it is CC BY-SA (Creative Commons Attribution Share Alike). At least when you try to create a wiki page on the RDO wiki, you see the text: "Please note that all contributions to RDO are considered to be released under the Creative Commons Attribution Share Alike (see RDO:Copyrights for details). If you do not want your writing to be edited mercilessly and redistributed at will, then do not submit it here."

> Are we permitted to copy/paste the RDO description to the standard OpenStack docs ?
CC BY-SA description: http://creativecommons.org/licenses/by-sa/3.0/

-- 
/kashyap

From bderzhavets at hotmail.com Mon Dec 23 12:02:00 2013
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Mon, 23 Dec 2013 07:02:00 -0500
Subject: [Rdo-list] Attempt to reproduce Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN by Andrew Lau on F19
Message-ID: 

The original document http://www.andrewklau.com/getting-started-with-multi-node-openstack-rdo-havana-gluster-backend-neutron/
contains a sample answer file. In my situation p37p1 is eth0 and p4p1 is eth1, and I've tried to make that substitution in the answer file. packstack fails with this error:

Welcome to Installer setup utility
Packstack changed given value to required value /root/.ssh/id_rsa.pub
Packstack changed given value 192.168.1.137:/CINDER-VOLUMES to required value '192.168.1.137:/CINDER-VOLUMES'

Installing:
Clean Up...                                              [ DONE ]
Setting up ssh keys...                                   [ DONE ]
Discovering hosts' details...                            [ DONE ]
Adding pre install manifest entries...                   [ DONE ]
Installing time synchronization via NTP...               [ DONE ]
Adding MySQL manifest entries...                         [ DONE ]
Adding QPID manifest entries...                          [ DONE ]
Adding Keystone manifest entries...                      [ DONE ]
Adding Glance Keystone manifest entries...               [ DONE ]
Adding Glance manifest entries...                        [ DONE ]
Installing dependencies for Cinder...                    [ DONE ]
Adding Cinder Keystone manifest entries...               [ DONE ]
Adding Cinder manifest entries...                        [ DONE ]
Adding Nova API manifest entries...                      [ DONE ]
Adding Nova Keystone manifest entries...                 [ DONE ]
Adding Nova Cert manifest entries...                     [ DONE ]
Adding Nova Conductor manifest entries...                [ DONE ]
Adding Nova Compute manifest entries...                  [ DONE ]
Adding Nova Scheduler manifest entries...                [ DONE ]
Adding Nova VNC Proxy manifest entries...                [ DONE ]
Adding Nova Common manifest entries...                   [ DONE ]
Adding Openstack Network-related Nova manifest entries...[ DONE ]
Adding Neutron API manifest entries...                   [ DONE ]
Adding Neutron Keystone manifest entries...              [ DONE ]
Adding Neutron L3 manifest entries...                    [ DONE ]
Adding Neutron L2 Agent manifest entries...              [ DONE ]
Adding Neutron DHCP Agent manifest entries...            [ DONE ]
Adding Neutron LBaaS Agent manifest entries...           [ DONE ]
Adding Neutron Metadata Agent manifest entries...        [ DONE ]
Adding OpenStack Client manifest entries...              [ DONE ]
Adding Horizon manifest entries...                       [ DONE ]
Adding Heat manifest entries...                          [ DONE ]
Adding Heat Keystone manifest entries...                 [ DONE ]
Adding Ceilometer manifest entries...                    [ DONE ]
Adding Ceilometer Keystone manifest entries...           [ DONE ]
Adding post install manifest entries...                  [ DONE ]
Preparing servers...                                     [ DONE ]
Installing Dependencies...                               [ DONE ]
Copying Puppet modules and manifests...                  [ DONE ]
Applying Puppet manifests...
Applying 192.168.1.127_prescript.pp
Applying 192.168.1.137_prescript.pp
192.168.1.127_prescript.pp :                             [ DONE ]
192.168.1.137_prescript.pp :                             [ DONE ]
Applying 192.168.1.127_ntpd.pp
Applying 192.168.1.137_ntpd.pp
192.168.1.137_ntpd.pp :                                  [ DONE ]
192.168.1.127_ntpd.pp :                                  [ DONE ]
Applying 192.168.1.137_mysql.pp
Applying 192.168.1.137_qpid.pp
192.168.1.137_mysql.pp :                                 [ DONE ]
192.168.1.137_qpid.pp :                                  [ DONE ]
Applying 192.168.1.137_keystone.pp
Applying 192.168.1.137_glance.pp
Applying 192.168.1.137_cinder.pp
192.168.1.137_keystone.pp :                              [ DONE ]
192.168.1.137_glance.pp :                                [ DONE ]
192.168.1.137_cinder.pp :                                [ DONE ]
Applying 192.168.1.137_api_nova.pp
192.168.1.137_api_nova.pp :                              [ DONE ]
Applying 192.168.1.137_nova.pp
Applying 192.168.1.127_nova.pp
192.168.1.137_nova.pp :                                  [ DONE ]
192.168.1.127_nova.pp :                                  [ DONE ]
Applying 192.168.1.127_neutron.pp
Applying 192.168.1.137_neutron.pp                        [ ERROR ]

ERROR : Error appeared during Puppet run: 192.168.1.127_neutron.pp
Error: Validate method failed for class sleep: implicit argument passing of super from method defined by define_method() is not supported. Specify all arguments explicitly.
Disabling Fedora's native Ethernet names and switching to eth0 & eth1 on F19 gives the same error. This gives the impression that only `packstack --allinone` could work on F19. I just cannot configure Neutron on a Compute Node with Fedora's native ethernet names. It works fine on CentOS 6.4.

I would guess that the same question would come up for the oVirt 3.3.2 Neutron provider on F19 boxes. How to create a server, how to create a Neutron plugin for oVirt ?

My question is: what would Andrew's
https://gist.github.com/andrewklau/7622535/raw/7dac55bbecc200cfb4bf040b6189f36897fc4efb/multi-node.packstack
look like for Fedora 19 ?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pbrady at redhat.com Mon Dec 23 15:46:35 2013
From: pbrady at redhat.com (Pádraig Brady)
Date: Mon, 23 Dec 2013 15:46:35 +0000
Subject: [Rdo-list] Ceilometer concepts description
In-Reply-To: <52B742D8.9080602@redhat.com>
References: <5D7F9996EA547448BC6C54C8C5AAF4E5D98CD75C@CERNXCHG02.cern.ch> <52B742D8.9080602@redhat.com>
Message-ID: <52B85ADB.6080401@redhat.com>

On 12/22/2013 07:51 PM, Kashyap Chamarthy wrote:
> On 12/22/2013 07:52 PM, Tim Bell wrote:
>> Are we permitted to copy/paste the RDO description to the standard OpenStack docs ?
>
> CC BY-SA description: http://creativecommons.org/licenses/by-sa/3.0/

Both doc sets are under this same licence. In any case, non-RDO-specific info/code should bubble upstream, so Tim, it's much appreciated that you're doing this.

thanks,
Pádraig.
From pbrady at redhat.com Mon Dec 23 23:03:54 2013
From: pbrady at redhat.com (Pádraig Brady)
Date: Mon, 23 Dec 2013 23:03:54 +0000
Subject: [Rdo-list] Attempt to reproduce Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN by Andrew Lau on F19
In-Reply-To: 
References: 
Message-ID: <52B8C15A.9080806@redhat.com>

On 12/23/2013 12:02 PM, Boris Derzhavets wrote:
> Original document http://www.andrewklau.com/getting-started-with-multi-node-openstack-rdo-havana-gluster-backend-neutron/
> contains the sample of answer file. In my situation p37p1 is eth0 p4p1 is eth1 and I've tried to make a substitution in answer-file . packstack fails with error :
>
> Applying 192.168.1.137_neutron.pp
> [ ERROR ]
>
> ERROR : Error appeared during Puppet run: 192.168.1.127_neutron.pp
> Error: Validate method failed for class sleep: implicit argument passing of super from method defined by define_method() is not supported. Specify all arguments explicitly.

What are the versions of packstack and puppet on your centos and Fedora 19 systems?
They should be the same if going from a pristine setup, but if not it would
be worth trying to match the F19 versions to the working centos one.

If they are in fact the same then it must come down to different
logic in the puppet modules for Fedora and EL systems, which we'd
need to look into.

You might get more indication of the particular error in:
/var/tmp/packstack/...neutron...log

thanks,
Pádraig.

From bderzhavets at hotmail.com Tue Dec 24 10:28:23 2013
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Tue, 24 Dec 2013 05:28:23 -0500
Subject: [Rdo-list] Attempt to reproduce Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN by Andrew Lau on F19
In-Reply-To: <52B8C15A.9080806@redhat.com>
References: , <52B8C15A.9080806@redhat.com>
Message-ID: 

I have attached both logs. Traces are there.
My version on F19 :-

[root at openstack1 ~]# rpm -qa | grep openstack
openstack-ceilometer-common-2013.2.1-1.fc20.noarch
openstack-nova-compute-2013.2.1-2.fc20.noarch
openstack-nova-common-2013.2.1-2.fc20.noarch
openstack-nova-cert-2013.2.1-2.fc20.noarch
openstack-utils-2013.2-2.fc20.noarch
openstack-glance-2013.2.1-1.fc20.noarch
openstack-ceilometer-compute-2013.2.1-1.fc20.noarch
openstack-packstack-2013.2.1-0.25.dev936.fc20.noarch
openstack-nova-conductor-2013.2.1-2.fc20.noarch
openstack-nova-scheduler-2013.2.1-2.fc20.noarch
openstack-keystone-2013.2.1-1.fc20.noarch
openstack-nova-api-2013.2.1-2.fc20.noarch
openstack-nova-console-2013.2.1-2.fc20.noarch
openstack-nova-novncproxy-2013.2.1-2.fc20.noarch
openstack-cinder-2013.2.1-1.fc20.noarch

[root at openstack1 ~]# uname -a
Linux openstack1.localdomain 3.12.5-200.fc19.x86_64 #1 SMP Tue Dec 17 22:21:14 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

The system on CentOS 6.4 doesn't belong to me. I am a bit hesitant to ask the system's admin too many questions. I am sorry.

Boris.

> Date: Mon, 23 Dec 2013 23:03:54 +0000
> From: pbrady at redhat.com
> To: bderzhavets at hotmail.com
> CC: rdo-list at redhat.com
> Subject: Re: [Rdo-list] Attempt to reproduce Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN by Andrew Lau on F19
>
> On 12/23/2013 12:02 PM, Boris Derzhavets wrote:
> > Original document http://www.andrewklau.com/getting-started-with-multi-node-openstack-rdo-havana-gluster-backend-neutron/
> > contains the sample of answer file. In my situation p37p1 is eth0 p4p1 is eth1 and I've tried to make a substitution in answer-file . packstack fails with error :
> >
> > Applying 192.168.1.137_neutron.pp
> > [ ERROR ]
> >
> > ERROR : Error appeared during Puppet run: 192.168.1.127_neutron.pp
> > Error: Validate method failed for class sleep: implicit argument passing of super from method defined by define_method() is not supported. Specify all arguments explicitly.
>
> What are the versions of packstack and puppet on your centos and Fedora 19 systems?
> They should be the same if going from a pristine setup, but if not it would
> be worth trying to match the F19 versions to the working centos one.
>
> If they are in fact the same then it must come down to different
> logic in the puppet modules for Fedora and EL systems, which we'd
> need to look into.
>
> You might get more indication of the particular error in:
> /var/tmp/packstack/...neutron...log
>
> thanks,
> Pádraig.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: openstack-setup.log
URL: 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: 192.168.1.127_neutron.pp.log
URL: 
From bderzhavets at hotmail.com Wed Dec 25 07:26:29 2013
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Wed, 25 Dec 2013 02:26:29 -0500
Subject: [Rdo-list] Attempt to reproduce Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN by Andrew Lau on F19 (2)
In-Reply-To: <52B8C15A.9080806@redhat.com>
References: , <52B8C15A.9080806@redhat.com>
Message-ID: 

I've just found http://kashyapc.wordpress.com/tag/fedora/
It's done without packstack, and with eth(X) network interfaces.

Thanks.
Boris.

> Date: Mon, 23 Dec 2013 23:03:54 +0000
> From: pbrady at redhat.com
> To: bderzhavets at hotmail.com
> CC: rdo-list at redhat.com
> Subject: Re: [Rdo-list] Attempt to reproduce Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN by Andrew Lau on F19
>
> On 12/23/2013 12:02 PM, Boris Derzhavets wrote:
> > Original document http://www.andrewklau.com/getting-started-with-multi-node-openstack-rdo-havana-gluster-backend-neutron/
> > contains the sample of answer file. In my situation p37p1 is eth0 p4p1 is eth1 and I've tried to make a substitution in answer-file . packstack fails with error :
> >
> > Applying 192.168.1.137_neutron.pp
> > [ ERROR ]
> >
> > ERROR : Error appeared during Puppet run: 192.168.1.127_neutron.pp
> > Error: Validate method failed for class sleep: implicit argument passing of super from method defined by define_method() is not supported. Specify all arguments explicitly.
>
> What are the versions of packstack and puppet on your centos and Fedora 19 systems?
> They should be the same if going from a pristine setup, but if not it would
> be worth trying to match the F19 versions to the working centos one.
>
> If they are in fact the same then it must come down to different
> logic in the puppet modules for Fedora and EL systems, which we'd
> need to look into.
>
> You might get more indication of the particular error in:
> /var/tmp/packstack/...neutron...log
>
> thanks,
> Pádraig.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andrew at andrewklau.com Wed Dec 25 13:53:42 2013
From: andrew at andrewklau.com (Andrew Lau)
Date: Thu, 26 Dec 2013 00:53:42 +1100
Subject: [Rdo-list] Attempt to reproduce Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN by Andrew Lau on F19
In-Reply-To: 
References: <52B8C15A.9080806@redhat.com>
Message-ID: 

Hey,

Rather than just dropping in my packstack file, try generating the default answer file for Fedora and just modifying the variables.

Merry Christmas!

Cheers,
Andrew

On Tue, Dec 24, 2013 at 9:28 PM, Boris Derzhavets wrote:
> I have attached both logs. Traces are there.
>
> My version on F19 :-
>
> [root at openstack1 ~]# rpm -qa | grep openstack
> openstack-ceilometer-common-2013.2.1-1.fc20.noarch
> openstack-nova-compute-2013.2.1-2.fc20.noarch
> openstack-nova-common-2013.2.1-2.fc20.noarch
> openstack-nova-cert-2013.2.1-2.fc20.noarch
> openstack-utils-2013.2-2.fc20.noarch
> openstack-glance-2013.2.1-1.fc20.noarch
> openstack-ceilometer-compute-2013.2.1-1.fc20.noarch
> openstack-packstack-2013.2.1-0.25.dev936.fc20.noarch
> openstack-nova-conductor-2013.2.1-2.fc20.noarch
> openstack-nova-scheduler-2013.2.1-2.fc20.noarch
> openstack-keystone-2013.2.1-1.fc20.noarch
> openstack-nova-api-2013.2.1-2.fc20.noarch
> openstack-nova-console-2013.2.1-2.fc20.noarch
> openstack-nova-novncproxy-2013.2.1-2.fc20.noarch
> openstack-cinder-2013.2.1-1.fc20.noarch
>
> [root at openstack1 ~]# uname -a
> Linux openstack1.localdomain 3.12.5-200.fc19.x86_64 #1 SMP Tue Dec 17
> 22:21:14 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
>
> The system on CentOS 6.4 doesn't belong to me. I am a bit hesitant to ask
> the system's admin too many questions. I am sorry.
>
> Boris.
> > Date: Mon, 23 Dec 2013 23:03:54 +0000
> > From: pbrady at redhat.com
> > To: bderzhavets at hotmail.com
> > CC: rdo-list at redhat.com
> > Subject: Re: [Rdo-list] Attempt to reproduce Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN by Andrew Lau on F19
> >
> > On 12/23/2013 12:02 PM, Boris Derzhavets wrote:
> > > Original document http://www.andrewklau.com/getting-started-with-multi-node-openstack-rdo-havana-gluster-backend-neutron/
> > > contains the sample of answer file. In my situation p37p1 is eth0 p4p1 is eth1 and I've tried to make a substitution in answer-file . packstack fails with error :
> > >
> > > Applying 192.168.1.137_neutron.pp
> > > [ ERROR ]
> > >
> > > ERROR : Error appeared during Puppet run: 192.168.1.127_neutron.pp
> > > Error: Validate method failed for class sleep: implicit argument passing of super from method defined by define_method() is not supported. Specify all arguments explicitly.
> >
> > What are the versions of packstack and puppet on your centos and Fedora 19 systems?
> > They should be the same if going from a pristine setup, but if not it would
> > be worth trying to match the F19 versions to the working centos one.
> >
> > If they are in fact the same then it must come down to different
> > logic in the puppet modules for Fedora and EL systems, which we'd
> > need to look into.
> >
> > You might get more indication of the particular error in:
> > /var/tmp/packstack/...neutron...log
> >
> > thanks,
> > Pádraig.
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
-------------- next part --------------
An HTML attachment was scrubbed...
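[Editor's note] Andrew's suggestion above (generate the distro's default answer file, then change only the variables that differ on your hosts) can be sketched as a small shell helper. This is a sketch under assumptions: the packstack invocations are shown commented out, and the `CONFIG_NOVA_COMPUTE_PRIVIF` key and `p4p1` value are illustrative examples only; check your generated file for the exact option names.

```shell
# Sketch of the answer-file workflow Andrew describes.
#
#   packstack --gen-answer-file=answers.txt   # 1) generate the Fedora defaults

# set_answer FILE KEY VALUE: rewrite one KEY=... line in a packstack answer file
set_answer() {
    sed -i "s|^$2=.*|$2=$3|" "$1"
}

# 2) change only what differs, e.g. this host's private interface name:
#   set_answer answers.txt CONFIG_NOVA_COMPUTE_PRIVIF p4p1
#
#   packstack --answer-file=answers.txt       # 3) run the install from the file
```

Editing a generated file this way avoids the string/array and interface-name mismatches that come from reusing an answer file produced on a different distro or host.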
URL: 

From bderzhavets at hotmail.com Wed Dec 25 14:18:51 2013
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Wed, 25 Dec 2013 09:18:51 -0500
Subject: [Rdo-list] Attempt to reproduce Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN by Andrew Lau on F19
In-Reply-To: 
References: <52B8C15A.9080806@redhat.com>, ,
Message-ID: 

I did. But there are no eth0, eth1, eth2. There are em1, p37p1, p4p1, and under /etc/sysconfig/network-scripts:

[root at hv02 network-scripts]# ls -l
total 212
-rw-r--r--. 1 root root 102 Dec 24 15:22 ifcfg-enp2s0
-rw-r--r--. 1 root root 288 Dec 24 14:29 ifcfg-enp5s1
-rw-r--r--. 1 root root 288 Dec 24 14:29 ifcfg-enp5s2
-rw-r--r--. 1 root root 254 May 31 2013 ifcfg-lo

which are different from the usual ifcfg-eth(X):

[root at hv02 network-scripts]# cat ifcfg-enp5s1
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp5s1
UUID=68e1cb5d-cafa-47e7-aa2e-1ce6d4632bfa
ONBOOT=no
HWADDR=00:E0:53:13:17:4C
PEERDNS=yes
PEERROUTES=yes

There is no DEVICE=p(XX)p(Y); only the MAC address lets you understand what is what. It looks like the parser gets confused working with ifcfg-enp* files.

See also http://kashyapc.wordpress.com/tag/fedora/
The set-up is a bit different from yours, but with the same idea (Controller + Compute); it is done manually, not via packstack, with F20 core (no Ethernet interface renaming, just ifcfg-eth0 etc.).

Merry Christmas!
Thanks.
Boris.

P.S. I am currently testing CentOS 6.5.
libgfapi should be backported :)

From: andrew at andrewklau.com
Date: Thu, 26 Dec 2013 00:53:42 +1100
Subject: Re: [Rdo-list] Attempt to reproduce Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN by Andrew Lau on F19
To: bderzhavets at hotmail.com
CC: pbrady at redhat.com; rdo-list at redhat.com

Hey,

Rather than just dropping in my packstack file, try generating the default answer file for Fedora and just modifying the variables.

Merry Christmas!
Cheers,
Andrew

On Tue, Dec 24, 2013 at 9:28 PM, Boris Derzhavets wrote:

I have attached both logs. Traces are there.

My version on F19 :-

[root at openstack1 ~]# rpm -qa | grep openstack
openstack-ceilometer-common-2013.2.1-1.fc20.noarch
openstack-nova-compute-2013.2.1-2.fc20.noarch
openstack-nova-common-2013.2.1-2.fc20.noarch
openstack-nova-cert-2013.2.1-2.fc20.noarch
openstack-utils-2013.2-2.fc20.noarch
openstack-glance-2013.2.1-1.fc20.noarch
openstack-ceilometer-compute-2013.2.1-1.fc20.noarch
openstack-packstack-2013.2.1-0.25.dev936.fc20.noarch
openstack-nova-conductor-2013.2.1-2.fc20.noarch
openstack-nova-scheduler-2013.2.1-2.fc20.noarch
openstack-keystone-2013.2.1-1.fc20.noarch
openstack-nova-api-2013.2.1-2.fc20.noarch
openstack-nova-console-2013.2.1-2.fc20.noarch
openstack-nova-novncproxy-2013.2.1-2.fc20.noarch
openstack-cinder-2013.2.1-1.fc20.noarch

[root at openstack1 ~]# uname -a
Linux openstack1.localdomain 3.12.5-200.fc19.x86_64 #1 SMP Tue Dec 17 22:21:14 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

The system on CentOS 6.4 doesn't belong to me. I am a bit hesitant to ask the system's admin too many questions. I am sorry.

Boris.
> Date: Mon, 23 Dec 2013 23:03:54 +0000
> From: pbrady at redhat.com
> To: bderzhavets at hotmail.com
> CC: rdo-list at redhat.com
> Subject: Re: [Rdo-list] Attempt to reproduce Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN by Andrew Lau on F19
>
> On 12/23/2013 12:02 PM, Boris Derzhavets wrote:
> > Original document http://www.andrewklau.com/getting-started-with-multi-node-openstack-rdo-havana-gluster-backend-neutron/
> > contains the sample of answer file. In my situation p37p1 is eth0 p4p1 is eth1 and I've tried to make a substitution in answer-file . packstack fails with error :
> >
> > Applying 192.168.1.137_neutron.pp
> > [ ERROR ]
> >
> > ERROR : Error appeared during Puppet run: 192.168.1.127_neutron.pp
> > Error: Validate method failed for class sleep: implicit argument passing of super from method defined by define_method() is not supported. Specify all arguments explicitly.
>
> What are the versions of packstack and puppet on your centos and Fedora 19 systems?
> They should be the same if going from a pristine setup, but if not it would
> be worth trying to match the F19 versions to the working centos one.
>
> If they are in fact the same then it must come down to different
> logic in the puppet modules for Fedora and EL systems, which we'd
> need to look into.
>
> You might get more indication of the particular error in:
> /var/tmp/packstack/...neutron...log
>
> thanks,
> Pádraig.

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bderzhavets at hotmail.com Wed Dec 25 14:26:22 2013
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Wed, 25 Dec 2013 09:26:22 -0500
Subject: [Rdo-list] Attempt to reproduce Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN by Andrew Lau on F19 (3)
In-Reply-To: <52B8C15A.9080806@redhat.com>
References: , <52B8C15A.9080806@redhat.com>
Message-ID: 

> What are the versions of packstack and puppet on your centos and Fedora 19 systems?

Here goes the answer for CentOS :-

[root at openstack2 ~]# rpm -qa | grep neutron
python-neutronclient-2.3.1-2.el6.noarch
openstack-neutron-openvswitch-2013.2.1-1.el6.noarch
openstack-neutron-2013.2.1-1.el6.noarch
python-neutron-2013.2.1-1.el6.noarch

[root at openstack2 ~]# rpm -qa | grep openstack
openstack-cinder-2013.2.1-1.el6.noarch
openstack-nova-common-2013.2.1-1.el6.noarch
openstack-ceilometer-compute-2013.2.1-1.el6.noarch
openstack-nova-scheduler-2013.2.1-1.el6.noarch
openstack-neutron-openvswitch-2013.2.1-1.el6.noarch
openstack-swift-plugin-swift3-1.7-1.el6.noarch
openstack-keystone-2013.2.1-1.el6.noarch
openstack-utils-2013.2-2.el6.noarch
openstack-nova-api-2013.2.1-1.el6.noarch
openstack-nova-compute-2013.2.1-1.el6.noarch
openstack-nova-cert-2013.2.1-1.el6.noarch
openstack-dashboard-2013.2.1-1.el6.noarch
openstack-swift-account-1.10.0-2.el6.noarch
openstack-swift-proxy-1.10.0-2.el6.noarch
openstack-ceilometer-alarm-2013.2.1-1.el6.noarch
openstack-nova-console-2013.2.1-1.el6.noarch
openstack-nova-conductor-2013.2.1-1.el6.noarch
openstack-neutron-2013.2.1-1.el6.noarch
openstack-swift-container-1.10.0-2.el6.noarch
openstack-ceilometer-central-2013.2.1-1.el6.noarch
openstack-glance-2013.2-1.el6.noarch
openstack-ceilometer-common-2013.2.1-1.el6.noarch
openstack-nova-novncproxy-2013.2.1-1.el6.noarch
openstack-swift-object-1.10.0-2.el6.noarch
openstack-ceilometer-api-2013.2.1-1.el6.noarch
openstack-packstack-2013.2.1-0.25.dev936.el6.noarch
openstack-selinux-0.1.3-2.el6ost.noarch
python-django-openstack-auth-1.1.2-1.el6.noarch
openstack-swift-1.10.0-2.el6.noarch
openstack-ceilometer-collector-2013.2.1-1.el6.noarch

Boris.

> Date: Mon, 23 Dec 2013 23:03:54 +0000
> From: pbrady at redhat.com
> To: bderzhavets at hotmail.com
> CC: rdo-list at redhat.com
> Subject: Re: [Rdo-list] Attempt to reproduce Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN by Andrew Lau on F19
>
> On 12/23/2013 12:02 PM, Boris Derzhavets wrote:
> > Original document http://www.andrewklau.com/getting-started-with-multi-node-openstack-rdo-havana-gluster-backend-neutron/
> > contains the sample of answer file. In my situation p37p1 is eth0 p4p1 is eth1 and I've tried to make a substitution in answer-file . packstack fails with error :
> >
> > Applying 192.168.1.137_neutron.pp
> > [ ERROR ]
> >
> > ERROR : Error appeared during Puppet run: 192.168.1.127_neutron.pp
> > Error: Validate method failed for class sleep: implicit argument passing of super from method defined by define_method() is not supported. Specify all arguments explicitly.
>
> What are the versions of packstack and puppet on your centos and Fedora 19 systems?
> They should be the same if going from a pristine setup, but if not it would
> be worth trying to match the F19 versions to the working centos one.
>
> If they are in fact the same then it must come down to different
> logic in the puppet modules for Fedora and EL systems, which we'd
> need to look into.
>
> You might get more indication of the particular error in:
> /var/tmp/packstack/...neutron...log
>
> thanks,
> Pádraig.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kchamart at redhat.com Fri Dec 27 09:51:08 2013
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Fri, 27 Dec 2013 10:51:08 +0100
Subject: [Rdo-list] Attempt to reproduce Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN by Andrew Lau on F19
In-Reply-To: 
References: <52B8C15A.9080806@redhat.com>, ,
Message-ID: <52BD4D8C.1080805@redhat.com>

[. . .]

> See also http://kashyapc.wordpress.com/tag/fedora/
> The set-up is a bit different from yours, but with the same idea (Controller + Compute); it is done
> manually, not via packstack, with F20 core (no Ethernet interface renaming, just ifcfg-eth0 etc.)

Here are more updated configurations:

http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt

That's the manual configuration details (not *fully* polished, but should give you an idea).

http://kashyapc.fedorapeople.org/virt/openstack/Two-node-Havana-setup.txt

And, that's the set-up:

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling).
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

Hope that helps.

-- 
/kashyap

From pmyers at redhat.com Sun Dec 29 22:23:05 2013
From: pmyers at redhat.com (Perry Myers)
Date: Sun, 29 Dec 2013 17:23:05 -0500
Subject: [Rdo-list] devstack on F20 (cinder: No suitable rootwrap found)
Message-ID: <52C0A0C9.4030809@redhat.com>

It looks like devstack hasn't been completely vetted on F20 yet (when I ran it, it printed out a warning message and asked me to override w/ FORCE=yes), but I went ahead and decided to try it anyhow.

> ++ get_rootwrap_location cinder
> ++ local module=cinder
> +++ get_python_exec_prefix
> +++ is_fedora
> +++ [[ -z Fedora ]]
> +++ '[' Fedora = Fedora ']'
> +++ echo /usr/bin
> ++ echo /usr/bin/cinder-rootwrap
> + CINDER_ROOTWRAP=/usr/bin/cinder-rootwrap
> + [[ ! -x /usr/bin/cinder-rootwrap ]]
> ++ get_rootwrap_location oslo
> ++ local module=oslo
> +++ get_python_exec_prefix
> +++ is_fedora
> +++ [[ -z Fedora ]]
> +++ '[' Fedora = Fedora ']'
> +++ echo /usr/bin
> ++ echo /usr/bin/oslo-rootwrap
> + CINDER_ROOTWRAP=/usr/bin/oslo-rootwrap
> + [[ ! -x /usr/bin/oslo-rootwrap ]]
> + die 180 'No suitable rootwrap found.'
> + local exitcode=0
> + set +o xtrace
> [Call Trace]
> ./stack.sh:686:configure_cinder
> /home/admin/devstack/lib/cinder:180:die
> [ERROR] /home/admin/devstack/lib/cinder:180 No suitable rootwrap found.

I looked further up in the stack and saw some compilation errors related to some compilation that requires libxml2-devel and libxslt-devel. It turned out that I didn't have libxslt-devel installed on the VM I was running this on. Installed that, reran stack.sh, and it seemed to get further.

Should we be checking for the presence of that package and installing it in devstack if it is not already installed?

(fwiw, I don't have a clue as to why the failed gcc compile due to missing devel libs would result in a missing cinder rootwrap, perhaps someone can explain?) :)

As an aside, I also gave devstack on RHEL 6.5 a run and that seemed to go fine as far as my cursory checking could tell (using nova networking). Glad to see that devstack/RHEL is not falling over :)

From pmyers at redhat.com Mon Dec 30 04:38:52 2013
From: pmyers at redhat.com (Perry Myers)
Date: Sun, 29 Dec 2013 23:38:52 -0500
Subject: [Rdo-list] devstack on F20 (cinder: No suitable rootwrap found)
In-Reply-To: <52C0A0C9.4030809@redhat.com>
References: <52C0A0C9.4030809@redhat.com>
Message-ID: <52C0F8DC.6040002@redhat.com>

On 29/12/13 17:23, Perry Myers wrote:
> It looks like devstack hasn't been completely vetted on F20 yet (when I
> ran it, it printed out a warning message and asked me to override w/
> FORCE=yes), but I went ahead and decided to try it anyhow.
Ok, consistently getting past the cinder rootwrap missing issue now that I am installing libxslt-devel manually. But now I'm completely hung on: > nova --os-password XXXXXXXX --os-username admin --os-tenant-name admin x509-create-cert /home/admin/devstack/accrc/admin/admin-pk.pem /home/admin/devstack/accrc/admin/admin-cert.pem The VM is mostly idle as far as I can see. top shows nova-api at the top of the list, but only 1 or 2% CPU utilization. Hm. I guess that's not surprising, given this in the nova-api logs: > 2013-12-29 22:59:35.760 INFO nova.openstack.common.rpc.common [req-832266ce-9d66-41e2-ac03-3376b383f2ab admin admin] Reconnecting to AMQP server on localhost:5672 > 2013-12-29 22:59:38.769 ERROR nova.openstack.common.rpc.common [req-832266ce-9d66-41e2-ac03-3376b383f2ab admin admin] AMQP server on localhost:5672 is unreachable: Socket closed. Trying again in 30 seconds. Ok, rabbitmq is running and is listening on port 5672: > [admin at fm ~]$ sudo lsof -i tcp -n -P | grep LISTEN | grep 5672 > beam.smp 14278 rabbitmq 16u IPv6 80540 0t0 TCP *:5672 (LISTEN) And it is responding to incoming traffic (telnet localhost 5672 resulted in a connection). Well, checking the rest of the service logs, all of them are unable to connect to the rabbitmq service. So it's not just a nova problem. Restarting rabbitmq-server.service doesn't seem to allow the other services (like nova-api) to reconnect to the rabbit server. Nor does restarting each of the individual OpenStack services. Looks like a credential issue: > =ERROR REPORT==== 29-Dec-2013::23:54:52 === > closing AMQP connection <0.1336.0> (127.0.0.1:36009 -> 127.0.0.1:5672): > {handshake_error,starting,0, > {amqp_error,access_refused, > "AMQPLAIN login refused: user 'guest' - invalid credentials", > 'connection.start_ok'}} In devstack you are prompted for the RabbitMQ password... I have tried entering a password for this manually (i.e.
something like 'password') and I've tried leaving it blank, in which case devstack will randomly generate a password for you. In both cases, the password does propagate to the proper config files (like nova.conf). So... I manually ran: > [root at fm log]# rabbitmqctl change_password guest XXXXXXXX And that cleared everything up. So it seems like devstack is not setting the guest password for rabbitmq correctly? Ok, it's trying to and failing: > + echo_summary 'Starting RabbitMQ' > + [[ -t 3 ]] > + echo -e Starting RabbitMQ > Starting RabbitMQ > + is_fedora > + [[ -z Fedora ]] > + '[' Fedora = Fedora ']' > + restart_service rabbitmq-server > + is_ubuntu > + [[ -z rpm ]] > + '[' rpm = deb ']' > + sudo /sbin/service rabbitmq-server restart > Redirecting to /bin/systemctl restart rabbitmq-server.service > + sudo rabbitmqctl change_password guest password > Changing password for user "guest" ... > Error: unable to connect to node rabbit at fm: nodedown > > DIAGNOSTICS > =========== > > nodes in question: [rabbit at fm] > > hosts, their running nodes and ports: > - fm: [{rabbitmqctl14804,38208}] > > current node details: > - node name: rabbitmqctl14804 at fm > - home dir: /var/lib/rabbitmq > - cookie hash: c13n3S7iTV2JbeQLNCLWvQ== > The above rabbitmqctl command is called from lib/rpc_backend and is run right after the rabbitmq restart. I found that I had to restart the rabbitmq server manually myself w/ systemctl restart rabbitmq-server.service before rabbitmqctl commands would work properly. So this doesn't seem like a race, but it does appear that the service restart being done in the restart_rpc_backend function is not working properly. It's restarting the service, but the service doesn't come back up properly. I also tried putting a second restart in the restart function along with various sleeps. It seems like there is something specific about the restart being issued from stack.sh... Anyone else seeing this?
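The two listener checks earlier in the thread (lsof for the socket, then a telnet connect to localhost 5672) can be folded into one bash helper, which makes the "socket is fine, credentials are not" split quick to confirm. A sketch only — the helper name is mine, and it relies on bash's /dev/tcp pseudo-device, so it needs bash rather than a plain POSIX sh:

```shell
#!/usr/bin/env bash
# amqp_port_open: succeed iff something is accepting TCP connections on
# host:port. Opening fd 3 on /dev/tcp/<host>/<port> performs a real TCP
# connect; the subshell keeps the fd from leaking into the caller.
amqp_port_open() {
    local host=$1 port=$2
    (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null
}
```

If `amqp_port_open localhost 5672` succeeds while every service still logs `access_refused`, that is exactly the situation described here: the broker is up and listening, and only the guest credentials are wrong.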
Perry From flavio at redhat.com Mon Dec 30 14:16:26 2013 From: flavio at redhat.com (Flavio Percoco) Date: Mon, 30 Dec 2013 15:16:26 +0100 Subject: [Rdo-list] devstack on F20 (cinder: No suitable rootwrap found) In-Reply-To: <52C0A0C9.4030809@redhat.com> References: <52C0A0C9.4030809@redhat.com> Message-ID: <20131230141626.GA5754@redhat.com> On 29/12/13 17:23 -0500, Perry Myers wrote: >It looks like devstack hasn't been completely vetted on F20 yet (when I >ran it it printed out a warning message and asked me to override w/ >FORCE=yes), but I went ahead and decided to try it anyhow. > >> ++ get_rootwrap_location cinder >> ++ local module=cinder >> +++ get_python_exec_prefix >> +++ is_fedora >> +++ [[ -z Fedora ]] >> +++ '[' Fedora = Fedora ']' >> +++ echo /usr/bin >> ++ echo /usr/bin/cinder-rootwrap >> + CINDER_ROOTWRAP=/usr/bin/cinder-rootwrap >> + [[ ! -x /usr/bin/cinder-rootwrap ]] >> ++ get_rootwrap_location oslo >> ++ local module=oslo >> +++ get_python_exec_prefix >> +++ is_fedora >> +++ [[ -z Fedora ]] >> +++ '[' Fedora = Fedora ']' >> +++ echo /usr/bin >> ++ echo /usr/bin/oslo-rootwrap >> + CINDER_ROOTWRAP=/usr/bin/oslo-rootwrap >> + [[ ! -x /usr/bin/oslo-rootwrap ]] >> + die 180 'No suitable rootwrap found.' >> + local exitcode=0 >> + set +o xtrace >> [Call Trace] >> ./stack.sh:686:configure_cinder >> /home/admin/devstack/lib/cinder:180:die >> [ERROR] /home/admin/devstack/lib/cinder:180 No suitable rootwrap found. > >I looked further up in the stack and saw some compilation errors related >to some compilation that requires libxml2-devel and libxslt-devel. > >Turned out that I didn't have libxslt-devel installed on the VM I was >running this on. Installed that, reran stack.sh and it seemed to get >further. > >Should we be checking for the presence of that package and installing it >in devstack if it is not already installed? This is weird, though. We have libxml2-devel as a 'general' dependency in devstack[0]. I checked and the package is still there. 
I'll give this a try on a fresh f20. [0] https://github.com/openstack-dev/devstack/blob/master/files/rpms/general#L10 >(fwiw, I don't have a clue as to why the failed gcc compile due to >missing devel libs would result in a missing cinder rootwrap, perhaps >someone can explain?) :) Oslo's incubator install failed because of the missing lxml dependency, which resulted in the oslo-rootwrap entry_point not being created. This caused the cinder configuration process to fail when it tried to use the nonexistent oslo-rootwrap binary. Cheers, FF -- @flaper87 Flavio Percoco From pmyers at redhat.com Mon Dec 30 17:12:53 2013 From: pmyers at redhat.com (Perry Myers) Date: Mon, 30 Dec 2013 12:12:53 -0500 Subject: [Rdo-list] devstack on F20 (cinder: No suitable rootwrap found) In-Reply-To: <20131230141626.GA5754@redhat.com> References: <52C0A0C9.4030809@redhat.com> <20131230141626.GA5754@redhat.com> Message-ID: <52C1A995.8090903@redhat.com> On 30/12/13 09:16, Flavio Percoco wrote: > On 29/12/13 17:23 -0500, Perry Myers wrote: >> It looks like devstack hasn't been completely vetted on F20 yet >> (when I ran it it printed out a warning message and asked me to >> override w/ FORCE=yes), but I went ahead and decided to try it >> anyhow. >> >>> ++ get_rootwrap_location cinder ++ local module=cinder +++ >>> get_python_exec_prefix +++ is_fedora +++ [[ -z Fedora ]] +++ >>> '[' Fedora = Fedora ']' +++ echo /usr/bin ++ echo >>> /usr/bin/cinder-rootwrap + >>> CINDER_ROOTWRAP=/usr/bin/cinder-rootwrap + [[ !
-x >>> /usr/bin/cinder-rootwrap ]] ++ get_rootwrap_location oslo ++ >>> local module=oslo +++ get_python_exec_prefix +++ is_fedora +++ >>> [[ -z Fedora ]] +++ '[' Fedora = Fedora ']' +++ echo /usr/bin >>> ++ echo /usr/bin/oslo-rootwrap + >>> CINDER_ROOTWRAP=/usr/bin/oslo-rootwrap + [[ ! -x >>> /usr/bin/oslo-rootwrap ]] + die 180 'No suitable rootwrap >>> found.' + local exitcode=0 + set +o xtrace [Call Trace] >>> ./stack.sh:686:configure_cinder >>> /home/admin/devstack/lib/cinder:180:die [ERROR] >>> /home/admin/devstack/lib/cinder:180 No suitable rootwrap >>> found. >> >> I looked further up in the stack and saw some compilation errors >> related to some compilation that requires libxml2-devel and >> libxslt-devel. >> >> Turned out that I didn't have libxslt-devel installed on the VM I >> was running this on. Installed that, reran stack.sh and it >> seemed to get further. >> >> Should we be checking for the presence of that package and >> installing it in devstack if it is not already installed? > > This is weird, though. We have libxml2-devel as a 'general' > dependency in devstack[0]. I checked and the package is still > there. I'll give this a try on a fresh f20. It wasn't libxml2-devel that was missing. It was libxslt-devel that was missing. Though I do see both libxml2-devel AND libxslt-devel in the general dependencies list. I wonder why it's not being installed... > +++ for line in '$(<${fname})' +++ [[ libxml2-devel # dist:rhel6 > [2] =~ NOPRIME ]] +++ package='libxml2-devel ' +++ inst_pkg=1 +++ > [[ libxml2-devel # dist:rhel6 [2] =~ (.*)#.*dist:([^ ]*) ]] +++ > package='libxml2-devel ' +++ distros=rhel6 +++ [[ ! 
rhel6 =~ f20 > ]] +++ inst_pkg=0 +++ [[ libxml2-devel # dist:rhel6 [2] =~ > (.*)#.*testonly.* ]] +++ [[ 0 = 1 ]] +++ for line in > '$(<${fname})' +++ [[ libxslt-devel # dist:rhel6 [2] =~ NOPRIME ]] > +++ package='libxslt-devel ' +++ inst_pkg=1 +++ [[ libxslt-devel # > dist:rhel6 [2] =~ (.*)#.*dist:([^ ]*) ]] +++ package='libxslt-devel > ' +++ distros=rhel6 +++ [[ ! rhel6 =~ f20 ]] +++ inst_pkg=0 +++ [[ > libxslt-devel # dist:rhel6 [2] =~ (.*)#.*testonly.* ]] +++ [[ 0 = 1 > ]] Hm, it looks like it's not being installed because the dependency list says: libxml2-devel # dist:rhel6 [2] libxslt-devel # dist:rhel6 [2] And I think this means that these deps should only be installed on rhel6 machines. Perhaps we need to remove dist:rhel6 from those lines? > [0] > https://github.com/openstack-dev/devstack/blob/master/files/rpms/general#L10 > > > >> (fwiw, I don't have a clue as to why the failed gcc compile due >> to missing devel libs would result in a missing cinder rootwrap, >> perhaps someone can explain?) :) > > Oslo's incubator install failed because of the missing lxml > dependency, which resulted in the oslo-rootwrap entry_point not > being created. This caused cinder configuration process to fail > after trying to use the not existent oslo-rootwrap bin. Ah, thanks for the explanation :)
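The `dist:` filtering that the trace above steps through can be reproduced as a standalone function. `should_install` is my name for the decision, not devstack's; the real install loop in devstack handles more cases (testonly, package collection), but the filter itself works like this:

```shell
#!/usr/bin/env bash
# should_install LINE DISTRO: decide whether a line from a files/rpms/*
# dependency list applies to the given distro, mirroring the bash regexes
# visible in the trace.
should_install() {
    local line=$1 distro=$2
    local inst_pkg=1
    # NOPRIME-tagged packages are never installed by stack.sh
    if [[ $line =~ NOPRIME ]]; then
        return 1
    fi
    # "pkg  # dist:rhel6" restricts the package to the listed distros;
    # a line with no dist: tag installs everywhere
    if [[ $line =~ (.*)#.*dist:([^\ ]*) ]]; then
        local distros=${BASH_REMATCH[2]}
        if [[ ! $distros =~ $distro ]]; then
            inst_pkg=0
        fi
    fi
    [ "$inst_pkg" = 1 ]
}
```

With an `f20` distro argument, both `libxml2-devel # dist:rhel6` and `libxslt-devel # dist:rhel6` lines are filtered out, which matches the behavior Perry observed: the packages are in the general list but never installed on Fedora 20 until the tag is removed.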