From ak at cloudssky.com Thu May 1 15:19:06 2014 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Thu, 1 May 2014 17:19:06 +0200 Subject: [Rdo-list] RHEL 7 rc cloud guest image available In-Reply-To: References: <53600E74.10205@redhat.com> Message-ID: Sorry Rich, I wrote inadvertently to your redhat email. The atomic image works now (I deleted the image and added it again to glance and it worked suddenly). But I'm still curious how to get docker running on RHEL 7 RC. Thanks, -Arash On Thu, May 1, 2014 at 4:57 PM, Arash Kaffamanesh wrote: > Rich, thanks for sharing! > > Just added the guest image to glance on RDO Havana and ran an instance > successfully and thought I'd get docker pre installed on it and since that > was not the case googled a bit and found this: > > https://access.redhat.com/site/discussions/734363 > > It seems some kind of registration or pinging the TAM is required and > since I've no TAM and don't know how to become one tried to test the atomic > image here: > > http://www.projectatomic.io/download/ > > and saw RDO is supported, yuppi :-) > > Added the image to glance with: > > glance add name="Atomic" is_public=true container_format=bare > disk_format=qcow2 < 20140414.1.qcow2 > > and got the error status after launching the instance. > > So now my question is: what is the right way to test docker on RHEL 7 or > on an Atomic image on RDO? > > Thanks, > -Arash > > > > > On Tue, Apr 29, 2014 at 10:41 PM, Rich Bowen wrote: > >> FYI - RHEL 7 guest image available: ftp://ftp.redhat.com/redhat/ >> rhel/rc/7/GuestImage/ >> >> --Rich >> >> -- >> Rich Bowen - rbowen at redhat.com >> OpenStack Community Liaison >> http://openstack.redhat.com/ >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From flavio at redhat.com Fri May 2 08:59:56 2014 From: flavio at redhat.com (Flavio Percoco) Date: Fri, 2 May 2014 10:59:56 +0200 Subject: [Rdo-list] Glance problems... In-Reply-To: <53584F5F.6000004@soe.ucsc.edu> References: <53584F5F.6000004@soe.ucsc.edu> Message-ID: <20140502085956.GA18347@redhat.com> On 23/04/14 16:40 -0700, Erich Weiler wrote: >Hi Y'all, > >I was able to set up RDO Openstack just fine with Icehouse RC1, and >then I wiped it out and am trying again with the official stable >release (2014.1) and am having weird problems. It seems there were >many changes between this and RC1 unless I'm mistaken. > >The main issue I'm having now is that I can't seem to create the >glance database properly, and I was able to do this before no problem. >I do: > >$ mysql -u root -p >mysql> CREATE DATABASE glance; >mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ >IDENTIFIED BY 'GLANCE_DBPASS'; >mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ >IDENTIFIED BY 'GLANCE_DBPASS'; > >(Obviously 'GLANCE_DBPASS' is replaced with the real password). > >Then: > >su -s /bin/sh -c "glance-manage db_sync" glance > >And it creates the 'glance' database and only one table, >"migrate_version". I can't get it to create the rest of the tables it >needs. I've tried also: > >openstack-db --init --service glance --password GLANCE_DBPASS > >And that returned success but in reality nothing happened... Any idea >what's going on? 
> >In the api.conf and registry.conf the correct database credentials are >listed, and I can connect to the database as the mysql glance user on >the command line just fine using those credentials. Something wrong is happening with glance's database initialization, as you already mentioned. Does it throw an error? Could you please try running just: $ glance-manage db_sync [snip] -- @flaper87 Flavio Percoco -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From lars at redhat.com Sat May 3 00:21:21 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Fri, 2 May 2014 20:21:21 -0400 Subject: [Rdo-list] Open vSwitch issues.... In-Reply-To: <535D6B99.8070402@redhat.com> References: <535AAE81.7060202@soe.ucsc.edu> <535AB37B.1000502@soe.ucsc.edu> <535D5A0B.4000009@soe.ucsc.edu> <535D6217.40303@redhat.com> <535D6B99.8070402@redhat.com> Message-ID: <20140503002121.GA22559@redhat.com> On Sun, Apr 27, 2014 at 09:42:01PM +0100, P?draig Brady wrote: > uncommented in the file. So I'm guessing that you had > an existing /etc/neutron/neutron.conf file, than wasn't > updated when you upgraded to the latest neutron package, That seems like it's worth a bz, since it's going to bite anyone doing an upgrade: https://bugzilla.redhat.com/show_bug.cgi?id=1093876 > On a more general note, we should be aiming to provide only > commented values in the default /etc/neutron/neutron.conf, with > explicit values in /usr/share/neutron/neutron-dist.conf Would adding lock_path to neutron-dist.conf be the correct solution here? -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From rbowen at redhat.com Mon May 5 13:39:48 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 05 May 2014 09:39:48 -0400 Subject: [Rdo-list] [Rdo-newsletter] RDO Community Newsletter: May 2014 Message-ID: <536794A4.8060005@redhat.com> Thanks for being part of the RDO community! Be sure to tell your friends and colleagues to sign up, at http://www.redhat.com/mailman/listinfo/rdo-newsletter We'd also like to hear your thoughts on how we can improve the newsletter in the coming months. There's a very quick survey (2-5 minutes) at http://tm3.org/rdosurvey Thanks! OpenStack Summit Of course the big news in May is the OpenStack Summit, Juno edition, which will take place in Atlanta Georgia in next week. You can see the great lineup of talks and events at http://openstack.org/summit and it's still not too late to register. You'll have the chance to meet many of the RDO engineers, and attend 20+ talks by people from the Red Hat OpenStack team - http://tm3.org/rdosummit On Wednesday night, come have a drink on us at the World of Coca-Cola, and rub elbows with the RDO team over hors d'oeuvres - http://tm3.org/summitparty And we'll be in the Red Hat booth all day, doing demos of RDO, as well as Red Hat Enterprise Linux OpenStack Platform. Drop by for the Booth Crawl on Monday evening to tell us what you're doing with OpenStack. LinuxCon Tokyo On May 20-22, the week after the OpenStack Summit, we'll be in Tokyo for LinuxCon Japan, Gluster Community Day, and CloudOpen Japan. Stop by the Red Hat booth to find out about the Icehouse release of RDO, and what coming in the Juno release later this year. 
Icehouse Release OpenStack Icehouse released on April 17th http://openstack.org/icehouse/ - and it's better than ever, with 350 new features contributed by more than 1,200 developers from around the world. To get started using OpenStack Icehouse today, go to the RDO QuickStart page at http://openstack.redhat.com/Quickstart and follow the steps. If you're already running an earlier version of OpenStack, the upgrading doc at http://openstack.redhat.com/Upgrading_RDO_To_Icehouse is the way to go. (See https://www.redhat.com/archives/rdo-list/2014-April/msg00105.html for the more detailed list of what's in those packages.) Over the coming months, we'll be hosting hangouts to talk about what's new in Icehouse, and we started with a presentation by Steve Baker about what's new in Heat. If you missed it, you can still watch it at https://plus.google.com/u/1/events/ckhqrki6iepg12vkqk5vnt7ijd0 And in May, we're going to have a hangout about TripleO, and advances in OpenStack's ease of installation and deployment. Follow us on Twitter - @rdocommunity - to find out exact date and time information. New HowTos and Wiki Articles The RDO engineers have been really busy the last few months working on the Icehouse code. With the code freeze leading up to the release, some of them had a moment to catch up on their writing, and we've gotten some great content showing up in the weeks since the release. The following new HowTo articles have been added to the RDO wiki: * TripleO VM Setip - http://openstack.redhat.com/TripleO_VM_Setup * Using MariDB+Galera Cluster with RDO Havana - http://openstack.redhat.com/TripleO_VM_Setup * Upgrading RDO To Icehouse - http://openstack.redhat.com/Upgrading_RDO_To_Icehouse * Deploying RDO using Instack - http://openstack.redhat.com/Deploying_RDO_using_Instack * Setting up High Availability - http://openstack.redhat.com/Setting_up_High_Availability If you're interested in contributing to the RDO documentation, just roll up your sleeves and get started - It's a wiki. Register at http://openstack.redhat.com/forum/entry/register and start writing. Or, if you prefer to write it somewhere else, just drop me a note at rbowen at redhat.com and we'll link to it. Welcome Ceph! Last week we announced that Red Hat has agreed to acquire Inktank, the provider of the Ceph filesystem. You can read the full press release at http://tm3.org/redhatinktank RDO welcomes Ceph to the Red Hat storage family, and we're looking forward to working more closely with the Ceph community in the coming years. Stay in Touch Thanks for reading this far. We really want to know if the newsletter is worth your time, whether you read it at all, and what we can do to make it more useful. Please consider filling out the (very short!) survey at http://tm3.org/rdosurvey to help us make it better. Thanks! Meanwhile, the best ways to keep up with what's going on in the RDO community are: * Follow us on Twitter - http://twitter.com/rdocommunity * Google+ - http://tm3.org/rdogplus * rdo-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list * This newsletter - http://www.redhat.com/mailman/listinfo/rdo-newsletter Thanks again for being part of the RDO community! 
-- Rich Bowen, for the RDO team http://community.redhat.com/ _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From dfv at eurotux.com Mon May 5 14:30:12 2014 From: dfv at eurotux.com (Diogo Vieira) Date: Mon, 5 May 2014 15:30:12 +0100 Subject: [Rdo-list] Permission denied errors after installing a Storage Node in a Swift Cluster Message-ID: Hello, I have a 4 node Swift cluster (1 Proxy Node and 3 Storage Nodes) to which I would like to add a 4th Storage Node. I've tried two different ways to add it. The first one was manually and the second one was with packstack (which in itself had some other problems described here[1], if you'd like to help - I worked around the problem with a symbolic link of the swift-ring-builder to the true binary). With both methods, as soon as I add the 4th Storage Node I start getting errors in syslog coming from rsync like: > May 5 13:48:55 host-10-10-6-28 object-replicator: rsync: recv_generator: mkdir "/device3/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a" (in object) failed: Permission denied (13) > May 5 13:48:55 host-10-10-6-28 object-replicator: *** Skipping any contents from this failed directory *** > May 5 13:48:55 host-10-10-6-28 object-replicator: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1165) [sender=3.1.0pre1] > May 5 13:48:55 host-10-10-6-28 object-replicator: Bad rsync return code: 23 <- ['rsync', '--recursive', '--whole-file', '--human-readable', '--xattrs', '--itemize-changes', '--ignore-existing', '--timeout=30', '--contimeout=30', '--bwlimit=0', '/srv/node/device1/objects/134106/f2a', '10.10.6.30::object/device3/objects/134106'] or > May 5 13:48:23 host-10-10-6-28 object-replicator: rsync: recv_generator: failed to stat "/device2/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a/1399288004.47239.data" (in object): Permission denied (13) > May 5 13:48:23 host-10-10-6-28 object-replicator: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1165) [sender=3.1.0pre1] > May 5 13:48:23 host-10-10-6-28 object-replicator: Bad rsync return code: 23 <- ['rsync', '--recursive', '--whole-file', '--human-readable', '--xattrs', '--itemize-changes', '--ignore-existing', '--timeout=30', '--contimeout=30', '--bwlimit=0', '/srv/node/device1/objects/134106/f2a', '10.10.6.29::object/device2/objects/134106'] Please note that I got these messages on one of the older Storage Nodes while on the newly added one I get some errors like these: > May 5 14:17:44 host-10-10-6-34 object-auditor: ERROR Trying to audit /srv/node/device4/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a: #012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/swift/obj/auditor.py", line 173, in failsafe_object_audit#012 self.object_audit(location)#012 File "/usr/lib/python2.7/site-packages/swift/obj/auditor.py", line 191, in object_audit#012 with df.open():#012 File "/usr/lib/python2.7/site-packages/swift/obj/diskfile.py", line 1025, in open#012 data_file, meta_file, ts_file = self._get_ondisk_file()#012 File "/usr/lib/python2.7/site-packages/swift/obj/diskfile.py", line 1114, in _get_ondisk_file#012 "Error listing directory %s: %s" % (self._datadir, err))#012DiskFileError: Error listing directory /srv/node/device4/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a: [Errno 13] Permission denied: 
'/srv/node/device4/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a' > May 5 14:17:44 host-10-10-6-34 object-auditor: ERROR Trying to audit /srv/node/device4/objects/215176/306/d222240f67449968145f65edd48ad306: #012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/swift/obj/auditor.py", line 173, in failsafe_object_audit#012 self.object_audit(location)#012 File "/usr/lib/python2.7/site-packages/swift/obj/auditor.py", line 191, in object_audit#012 with df.open():#012 File "/usr/lib/python2.7/site-packages/swift/obj/diskfile.py", line 1025, in open#012 data_file, meta_file, ts_file = self._get_ondisk_file()#012 File "/usr/lib/python2.7/site-packages/swift/obj/diskfile.py", line 1114, in _get_ondisk_file#012 "Error listing directory %s: %s" % (self._datadir, err))#012DiskFileError: Error listing directory /srv/node/device4/objects/215176/306/d222240f67449968145f65edd48ad306: [Errno 13] Permission denied: '/srv/node/device4/objects/215176/306/d222240f67449968145f65edd48ad306' > May 5 14:17:44 host-10-10-6-34 object-auditor: ERROR Trying to audit /srv/node/device4/objects/79135/961/4d47ee46560698a7a938d225a2357961: #012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/swift/obj/auditor.py", line 173, in failsafe_object_audit#012 self.object_audit(location)#012 File "/usr/lib/python2.7/site-packages/swift/obj/auditor.py", line 191, in object_audit#012 with df.open():#012 File "/usr/lib/python2.7/site-packages/swift/obj/diskfile.py", line 1025, in open#012 data_file, meta_file, ts_file = self._get_ondisk_file()#012 File "/usr/lib/python2.7/site-packages/swift/obj/diskfile.py", line 1114, in _get_ondisk_file#012 "Error listing directory %s: %s" % (self._datadir, err))#012DiskFileError: Error listing directory /srv/node/device4/objects/79135/961/4d47ee46560698a7a938d225a2357961: [Errno 13] Permission denied: '/srv/node/device4/objects/79135/961/4d47ee46560698a7a938d225a2357961' > May 5 14:17:44 host-10-10-6-34 object-auditor: Object audit (ZBF) "forever" mode completed: 0.06s. Total quarantined: 0, Total errors: 3, Total files/sec: 52.40, Total bytes/sec: 0.00, Auditing time: 0.06, Rate: 0.98 as well as: > May 5 14:21:27 host-10-10-6-34 xinetd[519]: START: rsync pid=7319 from=10.10.6.28 > May 5 14:21:27 host-10-10-6-34 rsyncd[7319]: name lookup failed for 10.10.6.28: Name or service not known > May 5 14:21:27 host-10-10-6-34 rsyncd[7319]: connect from UNKNOWN (10.10.6.28) > May 5 14:21:27 host-10-10-6-34 rsyncd[7319]: rsync to object/device4/objects/134106 from UNKNOWN (10.10.6.28) > May 5 14:21:27 host-10-10-6-34 rsyncd[7319]: receiving file list > May 5 14:21:27 host-10-10-6-34 rsyncd[7319]: rsync: recv_generator: mkdir "/device4/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a" (in object) failed: Permission denied (13) > May 5 14:21:27 host-10-10-6-34 rsyncd[7319]: *** Skipping any contents from this failed directory *** > May 5 14:21:28 host-10-10-6-34 rsyncd[7319]: sent 234 bytes received 243 bytes total size 1,048,576 > May 5 14:21:28 host-10-10-6-34 xinetd[519]: EXIT: rsync status=0 pid=7319 duration=1(sec) I have no idea if they're related. I've checked and the swift user owns the /srv/node folders. When I manually added the node I really thought there was a problem of mine but when I got the same problem with packstack I had no idea of what the cause could be. Can someone help me? 
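(A minimal set of first checks for this symptom, on the assumption that ownership or SELinux labelling on the new node's devices is what rsync is tripping over — which is only a guess; the device name is taken from the logs above:

  # on the newly added storage node
  ls -ldZ /srv/node /srv/node/device4    # expect swift:swift and the same SELinux context as the working nodes
  getenforce                             # is SELinux enforcing here but permissive on the older nodes?
  chown -R swift:swift /srv/node/device4
  restorecon -Rv /srv/node               # relabel if the context differs

If those all look right, diffing /etc/rsyncd.conf between an old node and the new one would be the next thing to compare.)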
[1]: https://ask.openstack.org/en/question/28776/packstack-aborting-installation-because-the-ring-refuses-to-rebalance/ Thank you in advance, Diogo Vieira Programador Eurotux Inform?tica, S.A. | www.eurotux.com (t) +351 253 680 300 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pbrady at redhat.com Tue May 6 01:53:01 2014 From: pbrady at redhat.com (=?UTF-8?B?UMOhZHJhaWcgQnJhZHk=?=) Date: Tue, 06 May 2014 02:53:01 +0100 Subject: [Rdo-list] Permission denied errors after installing a Storage Node in a Swift Cluster In-Reply-To: References: Message-ID: <5368407D.5020907@redhat.com> What version of swift are you using? swift-1.13.1.rc2 could have permissions errors, while we included a patch in the RDO icehouse swift-1.13.1 release to fix http://pad.lv/1302700 which on first glance could be related? thanks, P?draig. From pbrady at redhat.com Tue May 6 01:56:52 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Tue, 06 May 2014 02:56:52 +0100 Subject: [Rdo-list] Open vSwitch issues.... In-Reply-To: <20140503002121.GA22559@redhat.com> References: <535AAE81.7060202@soe.ucsc.edu> <535AB37B.1000502@soe.ucsc.edu> <535D5A0B.4000009@soe.ucsc.edu> <535D6217.40303@redhat.com> <535D6B99.8070402@redhat.com> <20140503002121.GA22559@redhat.com> Message-ID: <53684164.8080400@redhat.com> On 05/03/2014 01:21 AM, Lars Kellogg-Stedman wrote: > On Sun, Apr 27, 2014 at 09:42:01PM +0100, P?draig Brady wrote: >> uncommented in the file. So I'm guessing that you had >> an existing /etc/neutron/neutron.conf file, than wasn't >> updated when you upgraded to the latest neutron package, > > That seems like it's worth a bz, since it's going to bite anyone doing > an upgrade: https://bugzilla.redhat.com/show_bug.cgi?id=1093876 > >> On a more general note, we should be aiming to provide only >> commented values in the default /etc/neutron/neutron.conf, with >> explicit values in /usr/share/neutron/neutron-dist.conf > > Would adding lock_path to neutron-dist.conf be the correct solution > here? Note one doesn't hit this when using puppet mechanisms to rewrite config, or standard upgrade procedures of merging in rpmnew files. Now it's best avoid the issue entirely as you say and those changes should already be in place in the git repos and will be included in the next refresh. thanks, P?draig. From elias.moreno.tec at gmail.com Tue May 6 02:52:26 2014 From: elias.moreno.tec at gmail.com (=?UTF-8?B?RWzDrWFzIERhdmlk?=) Date: Mon, 5 May 2014 22:22:26 -0430 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse Message-ID: Hello all, I would like to know what's the current state of auto resizing the root partition in current RDO Icehouse, more specifically, CentOS and Fedora images. I've read many versions of the story so I'm not really sure what works and what doesn't. For instance, I've read that currently, auto resizing of a CentOS 6.5 image for would require the filesystem to be ext3 and I've also read that auto resizing currently works only with kernels >= 3.8, so what's really the deal with this currently? Also, it's as simple as having cloud-init, dracut-modules-growroot and cloud-initramfs-tools installed on the image or are there any other steps required for the auto resizing to work? Thanks in advance! -- El?as David. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shake.chen at gmail.com Tue May 6 08:32:12 2014 From: shake.chen at gmail.com (Shake Chen) Date: Tue, 6 May 2014 16:32:12 +0800 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: References: Message-ID: I have tested the resize feature. the partition can not use lvm, you can use ext4, no need swap. On Tue, May 6, 2014 at 10:52 AM, El?as David wrote: > Hello all, > > I would like to know what's the current state of auto resizing the root > partition in current RDO Icehouse, more specifically, CentOS and Fedora > images. > > I've read many versions of the story so I'm not really sure what works and > what doesn't. > > For instance, I've read that currently, auto resizing of a CentOS 6.5 > image for would require the filesystem to be ext3 and I've also read that > auto resizing currently works only with kernels >= 3.8, so what's really > the deal with this currently? > > Also, it's as simple as having cloud-init, dracut-modules-growroot and > cloud-initramfs-tools installed on the image or are there any other steps > required for the auto resizing to work? > > Thanks in advance! > > -- > El?as David. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -- Shake Chen -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Tue May 6 08:39:01 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 6 May 2014 14:09:01 +0530 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: References: Message-ID: <20140506083901.GA12668@tesla.redhat.com> On Mon, May 05, 2014 at 10:22:26PM -0430, El?as David wrote: > Hello all, > > I would like to know what's the current state of auto resizing the root > partition in current RDO Icehouse, more specifically, CentOS and Fedora > images. > > I've read many versions of the story so I'm not really sure what works and > what doesn't. > > For instance, I've read that currently, auto resizing of a CentOS 6.5 image > for would require the filesystem to be ext3 and I've also read that auto > resizing currently works only with kernels >= 3.8, so what's really the > deal with this currently? > > Also, it's as simple as having cloud-init, dracut-modules-growroot and > cloud-initramfs-tools installed on the image or are there any other steps > required for the auto resizing to work? I personally find[1] virt-resize (which works the same way on any images) very useful when I'd like to do resizing, as it works consistent well. I just tried on a Fedora 20 qcow2 cloud image with these below four commands and their complete output. 1. Examine the root filesystem size _inside_ the cloud image: $ virt-filesystems --long --all -h -a fedora-latest.x86_64.qcow2 Name Type VFS Label MBR Size Parent /dev/sda1 filesystem ext4 _/ - 1.9G - /dev/sda1 partition - - 83 1.9G /dev/sda /dev/sda device - - - 2.0G - 2. Create a new qcow2 disk of 10G: $ qemu-img create -f qcow2 -o preallocation=metadata \ newdisk.qcow2 10G 3. Perform the resize operation: $ virt-resize --expand /dev/sda1 fedora-latest.x86_64.qcow2 \ newdisk.qcow2 Examining fedora-latest.x86_64.qcow2 ... ********** Summary of changes: /dev/sda1: This partition will be resized from 1.9G to 10.0G. The filesystem ext4 on /dev/sda1 will be expanded using the 'resize2fs' method. ********** Setting up initial partition table on newdisk.qcow2 ... Copying /dev/sda1 ... 
100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00 Expanding /dev/sda1 using the 'resize2fs' method ... Resize operation completed with no errors. Before deleting the old disk, carefully check that the resized disk boots and works correctly. 4. Examine the root file system size in the new disk (should reflect correctly): $ virt-filesystems --long --all -h -a newdisk.qcow2 Name Type VFS Label MBR Size Parent /dev/sda1 filesystem ext4 _/ - 10G - /dev/sda1 partition - - 83 10G /dev/sda /dev/sda device - - - 10G - Hope that helps. [1] http://kashyapc.com/2013/04/13/resize-a-fedora-19-guest-with-libguestfs-tools/ -- /kashyap From dfv at eurotux.com Tue May 6 08:47:34 2014 From: dfv at eurotux.com (Diogo Vieira) Date: Tue, 6 May 2014 09:47:34 +0100 Subject: Re: [Rdo-list] Permission denied errors after installing a Storage Node in a Swift Cluster In-Reply-To: <5368407D.5020907@redhat.com> References: <5368407D.5020907@redhat.com> Message-ID: On May 6, 2014, at 2:53 AM, Pádraig Brady wrote: > What version of swift are you using? > > swift-1.13.1.rc2 could have permissions errors, > while we included a patch in the RDO icehouse swift-1.13.1 release to fix > http://pad.lv/1302700 which on first glance could be related? > > thanks, > Pádraig. I'm using 1.13.1-1.fc21 (I'm using Fedora) as you can see: # yum info openstack-swift Loaded plugins: priorities 196 packages excluded due to repository priority protections Installed Packages Name : openstack-swift Arch : noarch Version : 1.13.1 Release : 1.fc21 So the fix should already be present right? Thank you, Diogo Vieira -------------- next part -------------- An HTML attachment was scrubbed... URL: From pbrady at redhat.com Tue May 6 10:06:53 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Tue, 06 May 2014 11:06:53 +0100 Subject: Re: [Rdo-list] Permission denied errors after installing a Storage Node in a Swift Cluster In-Reply-To: References: <5368407D.5020907@redhat.com> Message-ID: <5368B43D.2080903@redhat.com> On 05/06/2014 09:47 AM, Diogo Vieira wrote: > On May 6, 2014, at 2:53 AM, Pádraig Brady > wrote: > >> What version of swift are you using? >> >> swift-1.13.1.rc2 could have permissions errors, >> while we included a patch in the RDO icehouse swift-1.13.1 release to fix >> http://pad.lv/1302700 which on first glance could be related? >> >> thanks, >> Pádraig. > > I'm using 1.13.1-1.fc21 (I'm using Fedora) as you can see: > > # yum info openstack-swift > Loaded plugins: priorities > 196 packages excluded due to repository priority protections > Installed Packages > Name : openstack-swift > Arch : noarch > Version : 1.13.1 > Release : 1.fc21 > > > So the fix should already be present right? Yes, must be something else so. thanks, Pádraig. From maxence.dalmais at worldline.com Tue May 6 10:10:32 2014 From: maxence.dalmais at worldline.com (Dalmais Maxence) Date: Tue, 6 May 2014 12:10:32 +0200 Subject: [Rdo-list] Openstack-sahara broken on rhel6 Message-ID: <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB2D4@FRVDX100.fr01.awl.atosorigin.net> Hi, The installation of Openstack-sahara on RHEL6 is broken while installing with RDO repositories. Current available version is 2014.1.0-3.
Regarding the changelog , 2014.1.0-4 fix this : - Correcting bug with rhel6 init script - Adding local variable for rhel6 tests Currently EPEL testing use openstack-sahara-2014.1.0-6, while last version is openstack-sahara-2014.1.0-10 Do you have any plan to update quickly ? Maxence -------------- next part -------------- An HTML attachment was scrubbed... URL: From pbrady at redhat.com Tue May 6 11:26:12 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Tue, 06 May 2014 12:26:12 +0100 Subject: Re: [Rdo-list] Openstack-sahara broken on rhel6 In-Reply-To: <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB2D4@FRVDX100.fr01.awl.atosorigin.net> References: <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB2D4@FRVDX100.fr01.awl.atosorigin.net> Message-ID: <5368C6D4.2030703@redhat.com> On 05/06/2014 11:10 AM, Dalmais Maxence wrote: > Hi, > > > > The installation of Openstack-sahara on RHEL6 is broken while installing with RDO repositories. > > Current available version is 2014.1.0-3. > > > > Regarding the changelog , 2014.1.0-4 fix this : - Correcting bug with > rhel6 init script > > > - Adding local variable for rhel6 tests > > > > Currently EPEL testing use openstack-sahara-2014.1.0-6, while last > version is openstack-sahara-2014.1.0-10 Sorry for the confusion. Those packages should not have gone to EPEL testing, and will be removed soon. But yes, openstack-sahara-2014.1.0-3 is the latest available in the RDO repos. > Do you have any plan to update quickly ? openstack-sahara-2014.1.0-11.el6 is currently in testing and will go out today The current version being tested is at: http://copr-be.cloud.fedoraproject.org/results/jruzicka/rdo-icehouse-epel-6/epel-6-x86_64/openstack-sahara-2014.1.0-11.el6/ thanks, Pádraig. From maxence.dalmais at worldline.com Tue May 6 12:01:29 2014 From: maxence.dalmais at worldline.com (Dalmais Maxence) Date: Tue, 6 May 2014 14:01:29 +0200 Subject: Re: [Rdo-list] Openstack-sahara broken on rhel6 In-Reply-To: <5368C6D4.2030703@redhat.com> References: <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB2D4@FRVDX100.fr01.awl.atosorigin.net> <5368C6D4.2030703@redhat.com> Message-ID: <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB2F1@FRVDX100.fr01.awl.atosorigin.net> Hi Pádraig, Thanks for you quick answer.
I successfully installed your last version of openstack-sahara with "yum install http://copr-be.cloud.fedoraproject.org/results/jruzicka/rdo-icehouse-epel-6/epel-6-x86_64/openstack-sahara-2014.1.0-11.el6/openstack-sahara-2014.1.0-11.el6.noarch.rpm" After this, what is the best way to upgrade python-sqlalchemy ? With 0.5 version coming from base, we have error " from sqlalchemy.orm import relationship ImportError: cannot import name relationship" Thanks, Maxence -----Original Message----- From: Pádraig Brady [mailto:pbrady at redhat.com] Sent: Tuesday, 6 May 2014 13:26 To: Dalmais Maxence Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] Openstack-sahara broken on rhel6 On 05/06/2014 11:10 AM, Dalmais Maxence wrote: > Hi, > > The installation of Openstack-sahara on RHEL6 is broken while installing with RDO repositories. > Current available version is 2014.1.0-3. > > Regarding the changelog , 2014.1.0-4 fix this : - Correcting bug with rhel6 init script > > - Adding local variable for rhel6 tests > > Currently EPEL testing use openstack-sahara-2014.1.0-6, while last version is openstack-sahara-2014.1.0-10 Sorry for the confusion. Those packages should not have gone to EPEL testing, and will be removed soon. But yes, openstack-sahara-2014.1.0-3 is the latest available in the RDO repos. > Do you have any plan to update quickly ? openstack-sahara-2014.1.0-11.el6 is currently in testing and will go out today The current version being tested is at: http://copr-be.cloud.fedoraproject.org/results/jruzicka/rdo-icehouse-epel-6/epel-6-x86_64/openstack-sahara-2014.1.0-11.el6/ thanks, Pádraig. From rbowen at redhat.com Tue May 6 13:29:20 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 06 May 2014 09:29:20 -0400 Subject: [Rdo-list] OpenStack Summit booth Message-ID: <5368E3B0.9040107@redhat.com> For those that are going to OpenStack Summit, a few details about the RDO booth. The Red Hat booth at summit is divided in half, with one half being RDO and other community things (oVirt, Atomic, Gluster, etc), and the other half being Red Hat commercial products, especially Red Hat Enterprise Linux OpenStack Platform.
On our end of the table, we have several people who have said that they want to spend some time in the booth, and perhaps run various demos of the stuff they've been working on. If you're willing to spend some time in the booth, please let me or Mike Burns know, so that we can have decent coverage. In particular, Monday evening for the Booth Crawl is an important time to be sure we have knowledgeable people at the booth to answer questions. Thanks! -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rbowen at redhat.com Tue May 6 13:40:12 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 06 May 2014 09:40:12 -0400 Subject: [Rdo-list] OpenStack Summit booth In-Reply-To: <5368E3B0.9040107@redhat.com> References: <5368E3B0.9040107@redhat.com> Message-ID: <5368E63C.8040105@redhat.com> On 05/06/2014 09:29 AM, Rich Bowen wrote: > For those that are going to OpenStack Summit, a few details about the > RDO booth. > > The Red Hat booth at summit is divided in half, with one half being > RDO and other community things (oVirt, Atomic, Gluster, etc), and the > other half being Red Hat commercial products, especially Red Hat > Enterprise Linux OpenStack Platform. > > On our end of the table, we have several people who have said that > they want to spend some time in the booth, and perhaps run various > demos of the stuff they've been working on. > > If you're willing to spend some time in the booth, please let me or > Mike Burns know, so that we can have decent coverage. In particular, > Monday evening for the Booth Crawl is an important time to be sure we > have knowledgeable people at the booth to answer questions. Or, better yet, add your name to the etherpad at https://etherpad.openstack.org/p/rdo_summit_booth --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From pbrady at redhat.com Tue May 6 13:51:34 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Tue, 06 May 2014 14:51:34 +0100 Subject: [Rdo-list] Openstack-sahara broken on rhel6 In-Reply-To: <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB2F1@FRVDX100.fr01.awl.atosorigin.net> References: <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB2D4@FRVDX100.fr01.awl.atosorigin.net> <5368C6D4.2030703@redhat.com> <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB2F1@FRVDX100.fr01.awl.atosorigin.net> Message-ID: <5368E8E6.2040807@redhat.com> On 05/06/2014 01:01 PM, Dalmais Maxence wrote: > Hi P?draig, > > Thanks for you quick answer. > I successfully installed your last version of openstack-sahara with > > "yum install http://copr-be.cloud.fedoraproject.org/results/jruzicka/rdo-icehouse-epel-6/epel-6-x86_64/openstack-sahara-2014.1.0-11.el6/openstack-sahara-2014.1.0-11.el6.noarch.rpm" > > After this, what is the best way to upgrade python-sqlalchemy ? > With 0.5 version coming from base, we have error > " from sqlalchemy.orm import relationship > ImportError: cannot import name relationship" Ugh sorry there were multiple changes rolled into that sahara update. Your best immediate solution would be to downgrade to: http://kojipkgs.fedoraproject.org/packages/openstack-sahara/2014.1.0/6.el6/noarch/ After downgrading I advise also to rpm -e python-sqlalchemy to remove that older package, lest it interfere with future upgrades. The newer package referenced above will be available soon, with the appropriate updates available in the repo. thanks, P?draig. 
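In concrete terms, the downgrade being suggested above is roughly the following; the exact rpm filename under that koji directory is an assumption here, so check the directory listing first:

  # step back to the earlier sahara build (filename assumed from the koji NVR)
  rpm -Uvh --oldpackage \
      http://kojipkgs.fedoraproject.org/packages/openstack-sahara/2014.1.0/6.el6/noarch/openstack-sahara-2014.1.0-6.el6.noarch.rpm
  # then remove the base python-sqlalchemy so it cannot shadow the newer package later
  rpm -e python-sqlalchemy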
From maxence.dalmais at worldline.com Tue May 6 15:54:16 2014 From: maxence.dalmais at worldline.com (Dalmais Maxence) Date: Tue, 6 May 2014 17:54:16 +0200 Subject: [Rdo-list] Openstack-sahara broken on rhel6 In-Reply-To: <5368E8E6.2040807@redhat.com> References: <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB2D4@FRVDX100.fr01.awl.atosorigin.net> <5368C6D4.2030703@redhat.com> <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB2F1@FRVDX100.fr01.awl.atosorigin.net> <5368E8E6.2040807@redhat.com> Message-ID: <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB39D@FRVDX100.fr01.awl.atosorigin.net> Hi, Thanks again for your support. I successfully installed your RPM and can launch Sahara. Nevertheless, I am not able to use it, because I am not able to register a new image. When I try to do so , I have in log from sahara, the error below : File "/usr/lib/python2.6/site-packages/sahara/service/validation.py", line 73, in handler raise e IOError: [Errno 2] No such file or directory: '/usr/lib/python2.6/site-packages/sahara/plugins/hdp/versions/version_1_3_2/resources/ambari-config-resource.json' After several attempts, I came to the conclusion that there is a problem with the tarball provided by openstack. Specs files use to build the RPM download tarballs from http://tarballs.openstack.org/sahara/sahara-2014.1.tar.gz In the tarball, the MANIFEST.in doesn't correspond to the content of the HDP plugins. Manifest.in as include sahara/plugins/hdp/versions/1_3_2/resources/*.template include sahara/plugins/hdp/versions/1_3_2/resources/*.json include sahara/plugins/hdp/versions/2_0/resources/*.template include sahara/plugins/hdp/versions/2_0/resources/*.json whereas file in the archive are sahara/plugins/hdp/versions/version_1_3_2/resources/ambari-config-resource.json sahara/plugins/hdp/versions/version_1_3_2/resources/default-cluster.template sahara/plugins/hdp/versions/version_2_0_6/resources/ambari-config-resource.json sahara/plugins/hdp/versions/version_2_0_6/resources/default-cluster.template 1-3-2 and version_1_3_2 doesn't match. Same for 2_0 and version_2_0_6 Because manifest is not good, the resources files are not included in the binary rpm. Do you think I am right, or am I missing something ? Does anybody run a recent version of sahara with success ? Maxence -----Message d'origine----- De : P?draig Brady [mailto:pbrady at redhat.com] Envoy? : mardi 6 mai 2014 15:52 ? : Dalmais Maxence Cc : rdo-list at redhat.com Objet : Re: [Rdo-list] Openstack-sahara broken on rhel6 On 05/06/2014 01:01 PM, Dalmais Maxence wrote: > Hi P?draig, > > Thanks for you quick answer. > I successfully installed your last version of openstack-sahara with > > "yum install http://copr-be.cloud.fedoraproject.org/results/jruzicka/rdo-icehouse-epel-6/epel-6-x86_64/openstack-sahara-2014.1.0-11.el6/openstack-sahara-2014.1.0-11.el6.noarch.rpm" > > After this, what is the best way to upgrade python-sqlalchemy ? > With 0.5 version coming from base, we have error " from sqlalchemy.orm > import relationship > ImportError: cannot import name relationship" Ugh sorry there were multiple changes rolled into that sahara update. Your best immediate solution would be to downgrade to: http://kojipkgs.fedoraproject.org/packages/openstack-sahara/2014.1.0/6.el6/noarch/ After downgrading I advise also to rpm -e python-sqlalchemy to remove that older package, lest it interfere with future upgrades. The newer package referenced above will be available soon, with the appropriate updates available in the repo. thanks, P?draig. 
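If the mismatch described above is right, the interim workaround in the package would be a small path rewrite of MANIFEST.in before building, along these lines (an untested sketch; the replacement paths are simply the directory names listed in the message):

  sed -i -e 's|hdp/versions/1_3_2/|hdp/versions/version_1_3_2/|' \
         -e 's|hdp/versions/2_0/|hdp/versions/version_2_0_6/|' MANIFEST.in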
From pbrady at redhat.com Tue May 6 16:54:16 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Tue, 06 May 2014 17:54:16 +0100 Subject: Re: [Rdo-list] Openstack-sahara broken on rhel6 In-Reply-To: <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB39D@FRVDX100.fr01.awl.atosorigin.net> References: <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB2D4@FRVDX100.fr01.awl.atosorigin.net> <5368C6D4.2030703@redhat.com> <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB2F1@FRVDX100.fr01.awl.atosorigin.net> <5368E8E6.2040807@redhat.com> <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB39D@FRVDX100.fr01.awl.atosorigin.net> Message-ID: <536913B8.7080902@redhat.com> On 05/06/2014 04:54 PM, Dalmais Maxence wrote: > Hi, > > Thanks again for your support. > I successfully installed your RPM and can launch Sahara. > > Nevertheless, I am not able to use it, because I am not able to register a new image. > When I try to do so , I have in log from sahara, the error below : > File "/usr/lib/python2.6/site-packages/sahara/service/validation.py", line 73, in handler > raise e > IOError: [Errno 2] No such file or directory: '/usr/lib/python2.6/site-packages/sahara/plugins/hdp/versions/version_1_3_2/resources/ambari-config-resource.json' > > After several attempts, I came to the conclusion that there is a problem with the tarball provided by openstack. > Specs files use to build the RPM download tarballs from http://tarballs.openstack.org/sahara/sahara-2014.1.tar.gz > > In the tarball, the MANIFEST.in doesn't correspond to the content of the HDP plugins. > > Manifest.in as > include sahara/plugins/hdp/versions/1_3_2/resources/*.template > include sahara/plugins/hdp/versions/1_3_2/resources/*.json > include sahara/plugins/hdp/versions/2_0/resources/*.template > include sahara/plugins/hdp/versions/2_0/resources/*.json > > whereas file in the archive are > sahara/plugins/hdp/versions/version_1_3_2/resources/ambari-config-resource.json > sahara/plugins/hdp/versions/version_1_3_2/resources/default-cluster.template > sahara/plugins/hdp/versions/version_2_0_6/resources/ambari-config-resource.json > sahara/plugins/hdp/versions/version_2_0_6/resources/default-cluster.template > > 1-3-2 and version_1_3_2 doesn't match. > Same for 2_0 and version_2_0_6 > > Because manifest is not good, the resources files are not included in the binary rpm.
> > Do you think I am right, or am I missing something ? > Does anybody run a recent version of sahara with success ? Good analysis. Mike seems like it would be good to add the above adjustments as a patch to Manifest.in (and also the alembic adjustments we previously did), at the start of the %prep. Dalmais as an unsupported/untested quick hack I kicked off a scratch build which might get you over the immediate issue. rpm -Uvh http://kojipkgs.fedoraproject.org//work/tasks/9695/6819695/openstack-sahara-2014.1.0-6.1.el6.noarch.rpm thanks, P?draig. From elias.moreno.tec at gmail.com Tue May 6 16:57:18 2014 From: elias.moreno.tec at gmail.com (=?UTF-8?B?RWzDrWFzIERhdmlk?=) Date: Tue, 6 May 2014 12:27:18 -0430 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: <20140506083901.GA12668@tesla.redhat.com> References: <20140506083901.GA12668@tesla.redhat.com> Message-ID: Hi thanks for the answers! But how is the support right now in OpenStack with centos/fedora images regarding the auto resizing during boot? does the disk size set in the flavor is respected or not, or does it work only with fedora and newer kernels than what CentOS uses...things like that is what I'm looking for On May 6, 2014 4:09 AM, "Kashyap Chamarthy" wrote: > On Mon, May 05, 2014 at 10:22:26PM -0430, El?as David wrote: > > Hello all, > > > > I would like to know what's the current state of auto resizing the root > > partition in current RDO Icehouse, more specifically, CentOS and Fedora > > images. > > > > I've read many versions of the story so I'm not really sure what works > and > > what doesn't. > > > > For instance, I've read that currently, auto resizing of a CentOS 6.5 > image > > for would require the filesystem to be ext3 and I've also read that auto > > resizing currently works only with kernels >= 3.8, so what's really the > > deal with this currently? > > > > Also, it's as simple as having cloud-init, dracut-modules-growroot and > > cloud-initramfs-tools installed on the image or are there any other steps > > required for the auto resizing to work? > > > I personally find[1] virt-resize (which works the same way on any > images) very useful when I'd like to do resizing, as it works consistent > well. > > I just tried on a Fedora 20 qcow2 cloud image with these below four > commands > and their complete output. > > 1. Examine the root filesystem size _inside_ the cloud image: > > $ virt-filesystems --long --all -h -a fedora-latest.x86_64.qcow2 > > Name Type VFS Label MBR Size Parent > /dev/sda1 filesystem ext4 _/ - 1.9G - > /dev/sda1 partition - - 83 1.9G /dev/sda > /dev/sda device - - - 2.0G - > > 2. Create a new qcow2 disk of 10G: > > $ qemu-img create -f qcow2 -o preallocation=metadata \ > newdisk.qcow2 10G > > 3. Perform the resize operation: > > $ virt-resize --expand /dev/sda1 fedora-latest.x86_64.qcow2 \ > newdisk.qcow2 > Examining fedora-latest.x86_64.qcow2 ... > ********** > > Summary of changes: > > /dev/sda1: This partition will be resized from 1.9G to 10.0G. The > filesystem ext4 on /dev/sda1 will be expanded using the 'resize2fs' > method. > > ********** > Setting up initial partition table on newdisk.qcow2 ... > Copying /dev/sda1 ... > 100% > ????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? > 00:00 > Expanding /dev/sda1 using the 'resize2fs' method ... > > Resize operation completed with no errors. 
Before deleting the old > disk, carefully check that the resized disk boots and works correctly. > > 4. Examine the root file system size in the new disk (should reflect > correctly): > > $ virt-filesystems --long --all -h -a newdisk.qcow2 > Name Type VFS Label MBR Size Parent > /dev/sda1 filesystem ext4 _/ - 10G - > /dev/sda1 partition - - 83 10G /dev/sda > /dev/sda device - - - 10G - > > > Hope that helps. > > > [1] > http://kashyapc.com/2013/04/13/resize-a-fedora-19-guest-with-libguestfs-tools/ > > > > -- > /kashyap > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mimccune at redhat.com Tue May 6 17:05:15 2014 From: mimccune at redhat.com (Michael McCune) Date: Tue, 6 May 2014 13:05:15 -0400 (EDT) Subject: [Rdo-list] Openstack-sahara broken on rhel6 In-Reply-To: <536913B8.7080902@redhat.com> References: <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB2D4@FRVDX100.fr01.awl.atosorigin.net> <5368C6D4.2030703@redhat.com> <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB2F1@FRVDX100.fr01.awl.atosorigin.net> <5368E8E6.2040807@redhat.com> <6B62F3E8C02D3E40AB7DBD3D5F2647D930942FB39D@FRVDX100.fr01.awl.atosorigin.net> <536913B8.7080902@redhat.com> Message-ID: <402369576.764444.1399395915141.JavaMail.zimbra@redhat.com> ----- Original Message ----- > On 05/06/2014 04:54 PM, Dalmais Maxence wrote: > > Hi, > > > > Thanks again for your support. > > I successfully installed your RPM and can launch Sahara. > > > > Nevertheless, I am not able to use it, because I am not able to register a > > new image. > > When I try to do so , I have in log from sahara, the error below : > > File > > "/usr/lib/python2.6/site-packages/sahara/service/validation.py", > > line 73, in handler > > raise e > > IOError: [Errno 2] No such file or directory: > > '/usr/lib/python2.6/site-packages/sahara/plugins/hdp/versions/version_1_3_2/resources/ambari-config-resource.json' > > > > After several attempts, I came to the conclusion that there is a problem > > with the tarball provided by openstack. > > Specs files use to build the RPM download tarballs from > > http://tarballs.openstack.org/sahara/sahara-2014.1.tar.gz > > > > In the tarball, the MANIFEST.in doesn't correspond to the content of the > > HDP plugins. > > > > Manifest.in as > > include sahara/plugins/hdp/versions/1_3_2/resources/*.template > > include sahara/plugins/hdp/versions/1_3_2/resources/*.json > > include sahara/plugins/hdp/versions/2_0/resources/*.template > > include sahara/plugins/hdp/versions/2_0/resources/*.json > > > > whereas file in the archive are > > sahara/plugins/hdp/versions/version_1_3_2/resources/ambari-config-resource.json > > sahara/plugins/hdp/versions/version_1_3_2/resources/default-cluster.template > > sahara/plugins/hdp/versions/version_2_0_6/resources/ambari-config-resource.json > > sahara/plugins/hdp/versions/version_2_0_6/resources/default-cluster.template > > > > 1-3-2 and version_1_3_2 doesn't match. > > Same for 2_0 and version_2_0_6 > > > > Because manifest is not good, the resources files are not included in the > > binary rpm. > > > > Do you think I am right, or am I missing something ? > > Does anybody run a recent version of sahara with success ? > > Good analysis. > > Mike seems like it would be good to add the above adjustments > as a patch to Manifest.in (and also the alembic adjustments we previously > did), > at the start of the %prep. > > Dalmais as an unsupported/untested quick hack I kicked off a scratch build > which might get you over the immediate issue. 
> rpm -Uvh > http://kojipkgs.fedoraproject.org//work/tasks/9695/6819695/openstack-sahara-2014.1.0-6.1.el6.noarch.rpm > > thanks, > P?draig. > thanks P?draig, I'm working on a patch for this which I'll hopefully have into the pipeline in a few hours. There are a few other fixes which have gone in as well. I think the release I'm working on is up to openstack-sahara-2014.1.0-11. mike From dfv at eurotux.com Tue May 6 17:47:56 2014 From: dfv at eurotux.com (Diogo Vieira) Date: Tue, 6 May 2014 18:47:56 +0100 Subject: [Rdo-list] Permission denied errors after installing a Storage Node in a Swift Cluster In-Reply-To: <5368B43D.2080903@redhat.com> References: <5368407D.5020907@redhat.com> <5368B43D.2080903@redhat.com> Message-ID: On May 6, 2014, at 11:06 AM, P?draig Brady wrote: > On 05/06/2014 09:47 AM, Diogo Vieira wrote: >> On May 6, 2014, at 2:53 AM, P?draig Brady > wrote: >> >>> What version of swift are you using? >>> >>> swift-1.13.1.rc2 could have permissions errors, >>> while we included a patch in the RDO icehouse swift-1.13.1 release to fix >>> http://pad.lv/1302700 which on first glance could be related? >>> >>> thanks, >>> P?draig. >> >> I'm using 1.13.1-1.fc21 (I'm using Fedora) as you can see: >> >> # yum info openstack-swift >> Loaded plugins: priorities >> 196 packages excluded due to repository priority protections >> Installed Packages >> Name : openstack-swift >> Arch : noarch >> Version : 1.13.1 >> Release : 1.fc21 >> >> >> So the fix should already be present right? > > Yes, must be something else so. > > thanks, > P?draig. That's unfortunate then. One thing's for sure: these errors aren't supposed to happen right? If someone else has any idea of what could be the problem I would greatly appreciate since this is a recurring problem (even between different Openstack and Packstack versions, since it was tested in Havana and Icehouse). Thank you very much, Diogo Vieira From pbrady at redhat.com Tue May 6 18:35:38 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Tue, 06 May 2014 19:35:38 +0100 Subject: [Rdo-list] Permission denied errors after installing a Storage Node in a Swift Cluster In-Reply-To: References: <5368407D.5020907@redhat.com> <5368B43D.2080903@redhat.com> Message-ID: <53692B7A.1010903@redhat.com> On 05/06/2014 06:47 PM, Diogo Vieira wrote: > On May 6, 2014, at 11:06 AM, P?draig Brady wrote: > >> On 05/06/2014 09:47 AM, Diogo Vieira wrote: >>> On May 6, 2014, at 2:53 AM, P?draig Brady > wrote: >>> >>>> What version of swift are you using? >>>> >>>> swift-1.13.1.rc2 could have permissions errors, >>>> while we included a patch in the RDO icehouse swift-1.13.1 release to fix >>>> http://pad.lv/1302700 which on first glance could be related? >>>> >>>> thanks, >>>> P?draig. >>> >>> I'm using 1.13.1-1.fc21 (I'm using Fedora) as you can see: >>> >>> # yum info openstack-swift >>> Loaded plugins: priorities >>> 196 packages excluded due to repository priority protections >>> Installed Packages >>> Name : openstack-swift >>> Arch : noarch >>> Version : 1.13.1 >>> Release : 1.fc21 >>> >>> >>> So the fix should already be present right? >> >> Yes, must be something else so. >> >> thanks, >> P?draig. > > That's unfortunate then. One thing's for sure: these errors aren't supposed to happen right? > > If someone else has any idea of what could be the problem I would greatly appreciate since this is a recurring problem (even between different Openstack and Packstack versions, since it was tested in Havana and Icehouse). 
> > Thank you very much, > Diogo Vieira > Ah you see this in Havana repos also, that's NB info. Pete any ideas? thanks, P?draig. From gareth at openstacker.org Wed May 7 05:53:03 2014 From: gareth at openstacker.org (Kun Huang) Date: Wed, 7 May 2014 13:53:03 +0800 Subject: [Rdo-list] deploy RDO in non-internal environment Message-ID: Hi guys I want to use RDO as default openstack deployment in my lab. However there are many reasons servers could visit external network. So at least I should clone RDO-related repositories first, such as EPEL and the below is in my plan: epel, epel-testing, foreman, puppetlabs, rdo-release Are those enough? And anybody has tried? thanks :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Wed May 7 09:23:09 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 7 May 2014 14:53:09 +0530 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: References: <20140506083901.GA12668@tesla.redhat.com> Message-ID: <20140507092309.GC27320@tesla.pnq.redhat.com> On Tue, May 06, 2014 at 12:27:18PM -0430, El?as David wrote: > Hi thanks for the answers! > > But how is the support right now in OpenStack with centos/fedora images > regarding the auto resizing during boot? > does the disk size set in the > flavor is respected or not, or does it work only with fedora and newer > kernels than what CentOS uses... I haven't tested it on CentOS, but looks like Shake Chen has confirmed he's tested the resize feature (and reports that it works with ext4) on his reply to this thread. /kashyap > On May 6, 2014 4:09 AM, "Kashyap Chamarthy" wrote: > > > On Mon, May 05, 2014 at 10:22:26PM -0430, El?as David wrote: > > > Hello all, > > > > > > I would like to know what's the current state of auto resizing the root > > > partition in current RDO Icehouse, more specifically, CentOS and Fedora > > > images. > > > > > > I've read many versions of the story so I'm not really sure what works > > and > > > what doesn't. > > > > > > For instance, I've read that currently, auto resizing of a CentOS 6.5 > > image > > > for would require the filesystem to be ext3 and I've also read that auto > > > resizing currently works only with kernels >= 3.8, so what's really the > > > deal with this currently? > > > > > > Also, it's as simple as having cloud-init, dracut-modules-growroot and > > > cloud-initramfs-tools installed on the image or are there any other steps > > > required for the auto resizing to work? > > > > > > I personally find[1] virt-resize (which works the same way on any > > images) very useful when I'd like to do resizing, as it works consistent > > well. > > > > I just tried on a Fedora 20 qcow2 cloud image with these below four > > commands > > and their complete output. > > > > 1. Examine the root filesystem size _inside_ the cloud image: > > > > $ virt-filesystems --long --all -h -a fedora-latest.x86_64.qcow2 > > > > Name Type VFS Label MBR Size Parent > > /dev/sda1 filesystem ext4 _/ - 1.9G - > > /dev/sda1 partition - - 83 1.9G /dev/sda > > /dev/sda device - - - 2.0G - > > > > 2. Create a new qcow2 disk of 10G: > > > > $ qemu-img create -f qcow2 -o preallocation=metadata \ > > newdisk.qcow2 10G > > > > 3. Perform the resize operation: > > > > $ virt-resize --expand /dev/sda1 fedora-latest.x86_64.qcow2 \ > > newdisk.qcow2 > > Examining fedora-latest.x86_64.qcow2 ... > > ********** > > > > Summary of changes: > > > > /dev/sda1: This partition will be resized from 1.9G to 10.0G. 
The > > filesystem ext4 on /dev/sda1 will be expanded using the 'resize2fs' > > method. > > > > ********** > > Setting up initial partition table on newdisk.qcow2 ... > > Copying /dev/sda1 ... > > 100% > > ????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? > > 00:00 > > Expanding /dev/sda1 using the 'resize2fs' method ... > > > > Resize operation completed with no errors. Before deleting the old > > disk, carefully check that the resized disk boots and works correctly. > > > > 4. Examine the root file system size in the new disk (should reflect > > correctly): > > > > $ virt-filesystems --long --all -h -a newdisk.qcow2 > > Name Type VFS Label MBR Size Parent > > /dev/sda1 filesystem ext4 _/ - 10G - > > /dev/sda1 partition - - 83 10G /dev/sda > > /dev/sda device - - - 10G - > > > > > > Hope that helps. > > > > > > [1] > > http://kashyapc.com/2013/04/13/resize-a-fedora-19-guest-with-libguestfs-tools/ > > > > > > > > -- > > /kashyap > > -- /kashyap From pbrady at redhat.com Wed May 7 12:00:52 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Wed, 07 May 2014 13:00:52 +0100 Subject: [Rdo-list] deploy RDO in non-internal environment In-Reply-To: References: Message-ID: <536A2074.9050406@redhat.com> On 05/07/2014 06:53 AM, Kun Huang wrote: > Hi guys > > I want to use RDO as default openstack deployment in my lab. However there are many reasons servers could visit external network. So at least I should clone RDO-related repositories first, such as EPEL and the below is in my plan: > > epel, epel-testing, foreman, puppetlabs, rdo-release > > Are those enough? And anybody has tried? It should work, and the above are currently enough. I'd advise to manage the repo syncing automatically though with something like http://www.pulpproject.org/ or mrepo. thanks, P?draig. From lars at redhat.com Wed May 7 13:49:56 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 7 May 2014 09:49:56 -0400 Subject: [Rdo-list] OpenStack Summit booth In-Reply-To: <5368E3B0.9040107@redhat.com> References: <5368E3B0.9040107@redhat.com> Message-ID: <20140507134956.GK22559@redhat.com> On Tue, May 06, 2014 at 09:29:20AM -0400, Rich Bowen wrote: > If you're willing to spend some time in the booth, please let me or Mike > Burns know, so that we can have decent coverage. I am absolutely interested in helping out at the booth -- it seems like a great way to put some faces to the names I see around here. -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From ALLAN.L.ST.GEORGE at leidos.com Wed May 7 14:31:43 2014 From: ALLAN.L.ST.GEORGE at leidos.com (St. George, Allan L.) Date: Wed, 7 May 2014 14:31:43 +0000 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: References: <20140506083901.GA12668@tesla.redhat.com> Message-ID: I haven?t had the time to work with Icehouse yet, but I have outlined instruction that are used to create Havana CentOS images that resize automatically upon spawning via linux-rootfs-resize. If interested, I?ll forward it along. 
V/R, Allan From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of El?as David Sent: Tuesday, May 06, 2014 12:57 PM To: Kashyap Chamarthy Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse Hi thanks for the answers! But how is the support right now in OpenStack with centos/fedora images regarding the auto resizing during boot? does the disk size set in the flavor is respected or not, or does it work only with fedora and newer kernels than what CentOS uses...things like that is what I'm looking for On May 6, 2014 4:09 AM, "Kashyap Chamarthy" > wrote: On Mon, May 05, 2014 at 10:22:26PM -0430, El?as David wrote: > Hello all, > > I would like to know what's the current state of auto resizing the root > partition in current RDO Icehouse, more specifically, CentOS and Fedora > images. > > I've read many versions of the story so I'm not really sure what works and > what doesn't. > > For instance, I've read that currently, auto resizing of a CentOS 6.5 image > for would require the filesystem to be ext3 and I've also read that auto > resizing currently works only with kernels >= 3.8, so what's really the > deal with this currently? > > Also, it's as simple as having cloud-init, dracut-modules-growroot and > cloud-initramfs-tools installed on the image or are there any other steps > required for the auto resizing to work? I personally find[1] virt-resize (which works the same way on any images) very useful when I'd like to do resizing, as it works consistent well. I just tried on a Fedora 20 qcow2 cloud image with these below four commands and their complete output. 1. Examine the root filesystem size _inside_ the cloud image: $ virt-filesystems --long --all -h -a fedora-latest.x86_64.qcow2 Name Type VFS Label MBR Size Parent /dev/sda1 filesystem ext4 _/ - 1.9G - /dev/sda1 partition - - 83 1.9G /dev/sda /dev/sda device - - - 2.0G - 2. Create a new qcow2 disk of 10G: $ qemu-img create -f qcow2 -o preallocation=metadata \ newdisk.qcow2 10G 3. Perform the resize operation: $ virt-resize --expand /dev/sda1 fedora-latest.x86_64.qcow2 \ newdisk.qcow2 Examining fedora-latest.x86_64.qcow2 ... ********** Summary of changes: /dev/sda1: This partition will be resized from 1.9G to 10.0G. The filesystem ext4 on /dev/sda1 will be expanded using the 'resize2fs' method. ********** Setting up initial partition table on newdisk.qcow2 ... Copying /dev/sda1 ... 100% ????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? 00:00 Expanding /dev/sda1 using the 'resize2fs' method ... Resize operation completed with no errors. Before deleting the old disk, carefully check that the resized disk boots and works correctly. 4. Examine the root file system size in the new disk (should reflect correctly): $ virt-filesystems --long --all -h -a newdisk.qcow2 Name Type VFS Label MBR Size Parent /dev/sda1 filesystem ext4 _/ - 10G - /dev/sda1 partition - - 83 10G /dev/sda /dev/sda device - - - 10G - Hope that helps. [1] http://kashyapc.com/2013/04/13/resize-a-fedora-19-guest-with-libguestfs-tools/ -- /kashyap -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ramon at linux-labs.net Wed May 7 17:37:52 2014 From: ramon at linux-labs.net (Ramon Acedo) Date: Wed, 7 May 2014 18:37:52 +0100 Subject: [Rdo-list] deploy RDO in non-internal environment In-Reply-To: References: Message-ID: <79BD98FF-1E4D-45E6-97C7-6004720566BB@linux-labs.net> Hi Kun, On 7 May 2014, at 06:53, Kun Huang wrote: > Hi guys > > I want to use RDO as default openstack deployment in my lab. However there are many reasons servers could visit external network. So at least I should clone RDO-related repositories first, such as EPEL and the below is in my plan: > > epel, epel-testing, foreman, puppetlabs, rdo-release > > Are those enough? And anybody has tried? If you plan to use Foreman 1.5 (default with rdo-release-icehouse-3) you should also add the SCL repo. Ramon From gilles at redhat.com Thu May 8 01:25:05 2014 From: gilles at redhat.com (Gilles Dubreuil) Date: Wed, 7 May 2014 21:25:05 -0400 (EDT) Subject: [Rdo-list] Can't Deploy Foreman with openstack-foreman-installer for Bare Metal Provisioning (undefined method `[]' for nil:NilClass) In-Reply-To: <21851A5F-40F4-46BE-BF94-AA0F49A56793@linux-labs.net> References: <21851A5F-40F4-46BE-BF94-AA0F49A56793@linux-labs.net> Message-ID: <462856808.4337683.1399512305300.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Ramon Acedo" > To: rdo-list at redhat.com > Sent: Wednesday, 30 April, 2014 2:14:59 AM > Subject: [Rdo-list] Can't Deploy Foreman with openstack-foreman-installer for Bare Metal Provisioning (undefined > method `[]' for nil:NilClass) > > Hi all, > > I have been trying to test the OpenStack Foreman Installer with different > combinations of Foreman versions and of the installer itself (and even > different versions of Puppet) with no success so far. > > I know that Packstack alone works but I want to go all the way with multiple > hosts and bare metal provisioning to eventually use it for large deployments > and scale out Nova Compute and other services seamlessly. > > The error I get when running the foreman_server.sh script is always: > -------------- > rake aborted! > undefined method `[]' for nil:NilClass > > Tasks: TOP => db:seed > (See full trace by running task with --trace) > -------------- The above usually indicates there is something wrong with at least one puppet class. Do you have openstack-puppet-modules installed? Some of the devil's details: foreman_server.sh triggers foreman's rake to seed its database. The nil:NiClass means something is missing and usually when it happens, to be confirmed with rake's trace/logs it's because at least one puppet class is wrong (not validated). The above is happening because seeding script also parses puppet classes' parameters in order to inject them into Foreman. Cheers, Gilles > > After that, if Foreman starts, there?s nothing in the "Host groups" section > which is supposed to be prepopulated by the foreman_server.sh script (as > described in http://red.ht/1jdJ03q). > > The process I follow is very simple: > > 1. Install a clean RHEL 6.5 or CentOS 6.5 > > 2. Enable EPEL > > 3. Enable the rdo-release repo: > > a. rdo-release-havana-7: Foreman 1.3 and openstack-foreman-installer 1.0.6 > b. rdo-release-havana-8: Foreman 1.5 and openstack-foreman-installer 1.0.6 > c. rdo-release-icehouse-3: Foreman 1.5 and openstack-foreman-installer 2.0 > (as a note here, the SCL repo needs to be enabled before the next step > too). > > 4. Install openstack-foreman-installer > > 5. 
Create and export the needed variables: > > export PROVISIONING_INTERFACE=eth0 > export FOREMAN_GATEWAY=192.168.5.100 > export FOREMAN_PROVISIONING=true > > 6. Run the script foreman_server.sh from > /usr/share/openstack-foreman-installer/bin > > For 3a and 3b I also tried with an older version of Puppet (3.2) with the > same result. > > These are the full outputs: > > 3a: http://fpaste.org/97739/ (Havana and Foreman 1.3) > 3b: http://fpaste.org/97760/ (Havana and Foreman 1.3 with Puppet 3.2) > 3c: http://fpaste.org/97838/ (Icehouse and Foreman 1.5) > > I?m sure somebody in the list has tried to deploy and configure Foreman for > bare metal installations (DHCP+PXE) and the documentation and the > foreman_server.sh script suggest it should be possible in a fairly easy way. > > I filled a bug as it might well be one, pending confirmation: > https://bugzilla.redhat.com/show_bug.cgi?id=1092443 > > Any help is really appreciated! > > Many thanks. > > Ramon > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > From kchamart at redhat.com Thu May 8 03:48:30 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 8 May 2014 09:18:30 +0530 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: References: <20140506083901.GA12668@tesla.redhat.com> Message-ID: <20140508034830.GB26928@tesla.redhat.com> On Wed, May 07, 2014 at 02:31:43PM +0000, St. George, Allan L. wrote: > I haven?t had the time to work with Icehouse yet, but I have outlined > instruction that are used to create Havana CentOS images that resize > automatically upon spawning via linux-rootfs-resize. > > If interested, I?ll forward it along. That'd be useful. It'd be even better if you could make a quick RDO wiki page[1] that'll be indexed by the search engines. [1] http://openstack.redhat.com/ PS: If you're a Markdown user, you can convert Markdown -> WikiMedia (RDO uses WikiMedia for wiki) trivially like this: $ pandoc -f markdown -t Mediawiki foo.md -o foo.wiki > > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] > On Behalf Of El?as David Sent: Tuesday, May 06, 2014 12:57 PM To: > Kashyap Chamarthy Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] > Automatic resizing of root partitions in RDO Icehouse > > > Hi thanks for the answers! > > But how is the support right now in OpenStack with centos/fedora > images regarding the auto resizing during boot? does the disk size set > in the flavor is respected or not, or does it work only with fedora > and newer kernels than what CentOS uses...things like that is what I'm > looking for On May 6, 2014 4:09 AM, "Kashyap Chamarthy" > > wrote: On Mon, May > 05, 2014 at 10:22:26PM -0430, El?as David wrote: > > Hello all, > > > > I would like to know what's the current state of auto resizing the > > root partition in current RDO Icehouse, more specifically, CentOS > > and Fedora images. > > > > I've read many versions of the story so I'm not really sure what > > works and what doesn't. > > > > For instance, I've read that currently, auto resizing of a CentOS > > 6.5 image for would require the filesystem to be ext3 and I've also > > read that auto resizing currently works only with kernels >= 3.8, so > > what's really the deal with this currently? 
> > > > Also, it's as simple as having cloud-init, dracut-modules-growroot > > and cloud-initramfs-tools installed on the image or are there any > > other steps required for the auto resizing to work? > > > I personally find[1] virt-resize (which works the same way on any > images) very useful when I'd like to do resizing, as it works > consistent well. > > I just tried on a Fedora 20 qcow2 cloud image with these below four > commands and their complete output. > > 1. Examine the root filesystem size _inside_ the cloud image: > > $ virt-filesystems --long --all -h -a fedora-latest.x86_64.qcow2 > > Name Type VFS Label MBR Size Parent /dev/sda1 > filesystem ext4 _/ - 1.9G - /dev/sda1 partition - > - 83 1.9G /dev/sda /dev/sda device - - - > 2.0G - > > 2. Create a new qcow2 disk of 10G: > > $ qemu-img create -f qcow2 -o preallocation=metadata \ > newdisk.qcow2 10G > > 3. Perform the resize operation: > > $ virt-resize --expand /dev/sda1 fedora-latest.x86_64.qcow2 \ > newdisk.qcow2 Examining fedora-latest.x86_64.qcow2 ... ********** > > Summary of changes: > > /dev/sda1: This partition will be resized from 1.9G to 10.0G. The > filesystem ext4 on /dev/sda1 will be expanded using the > 'resize2fs' method. > > ********** Setting up initial partition table on newdisk.qcow2 ... > Copying /dev/sda1 ... 100% > ????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? > 00:00 Expanding /dev/sda1 using the 'resize2fs' method ... > > Resize operation completed with no errors. Before deleting the > old disk, carefully check that the resized disk boots and works > correctly. > > 4. Examine the root file system size in the new disk (should reflect > correctly): > > $ virt-filesystems --long --all -h -a newdisk.qcow2 Name > Type VFS Label MBR Size Parent /dev/sda1 filesystem > ext4 _/ - 10G - /dev/sda1 partition - - 83 > 10G /dev/sda /dev/sda device - - - 10G - > > > Hope that helps. > > > [1] > http://kashyapc.com/2013/04/13/resize-a-fedora-19-guest-with-libguestfs-tools/ > > > > -- /kashyap > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -- /kashyap From rohara at redhat.com Thu May 8 06:03:56 2014 From: rohara at redhat.com (Ryan O'Hara) Date: Thu, 8 May 2014 01:03:56 -0500 Subject: [Rdo-list] Can't Deploy Foreman with openstack-foreman-installer for Bare Metal Provisioning (undefined method `[]' for nil:NilClass) In-Reply-To: <21851A5F-40F4-46BE-BF94-AA0F49A56793@linux-labs.net> References: <21851A5F-40F4-46BE-BF94-AA0F49A56793@linux-labs.net> Message-ID: <20140508060356.GB8170@redhat.com> On Tue, Apr 29, 2014 at 05:14:59PM +0100, Ramon Acedo wrote: > Hi all, > > I have been trying to test the OpenStack Foreman Installer with different combinations of Foreman versions and of the installer itself (and even different versions of Puppet) with no success so far. > > I know that Packstack alone works but I want to go all the way with multiple hosts and bare metal provisioning to eventually use it for large deployments and scale out Nova Compute and other services seamlessly. > > The error I get when running the foreman_server.sh script is always: > -------------- > rake aborted! > undefined method `[]' for nil:NilClass > > Tasks: TOP => db:seed > (See full trace by running task with --trace) > -------------- Are you by chance running foreman_server.sh from source? 
I've hit this a few times when I checkout the source, put it in /root/ and then the seed script fails because of permissions. If you are running from source, make sure the puppet user can read the source directory. Not sure is this applies in your case but it will cause the problem you're seeing. Ryan From shake.chen at gmail.com Thu May 8 06:23:19 2014 From: shake.chen at gmail.com (Shake Chen) Date: Thu, 8 May 2014 14:23:19 +0800 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: <20140508034830.GB26928@tesla.redhat.com> References: <20140506083901.GA12668@tesla.redhat.com> <20140508034830.GB26928@tesla.redhat.com> Message-ID: Hi I use oz create centos image, work perfect for openstack, support resize. the step is very simple 1: hareware machine, install centos6.5 disable selinux enable epel 2: install OZ yum -y install oz modify the oz setting , let default image is qcow2 /etc/oz/oz.cfg image_type = qcow2 restart machine. 3: create two file centos6.ks and centos65.tdl check the attachment. you only need change http://172.28.0.1/cobbler/ks_mirror/CentOS6.5-x86_64/ to like below link http://mirrors.163.com/centos/6.5/os/x86_64/ 4: run the command oz-install -p -u -d3 -a centos6.ks centos65.tdl the image would store /var/lib/libvirt/images 5: compress the image qemu-img convert -c /var/lib/libvirt/images/centos_65_x86_64.qcow2 -O qcow2 \ /root/centos_65_x86_64.qcow2 Now the image is ok, upload to openstack the image only support key login username is cloud-user you can check the centos6.ks ,the ks is change from http://repos.fedorapeople.org/repos/openstack/guest-images/ I also upload the image, you can try it. http://yunpan.cn/QiQ6syasRAH7Q password: 90e3 On Thu, May 8, 2014 at 11:48 AM, Kashyap Chamarthy wrote: > On Wed, May 07, 2014 at 02:31:43PM +0000, St. George, Allan L. wrote: > > I haven?t had the time to work with Icehouse yet, but I have outlined > > instruction that are used to create Havana CentOS images that resize > > automatically upon spawning via linux-rootfs-resize. > > > > If interested, I?ll forward it along. > > That'd be useful. It'd be even better if you could make a quick RDO wiki > page[1] that'll be indexed by the search engines. > > > [1] http://openstack.redhat.com/ > > PS: If you're a Markdown user, you can convert Markdown -> WikiMedia > (RDO uses WikiMedia for wiki) trivially like this: > > $ pandoc -f markdown -t Mediawiki foo.md -o foo.wiki > > > > > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] > > On Behalf Of El?as David Sent: Tuesday, May 06, 2014 12:57 PM To: > > Kashyap Chamarthy Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] > > Automatic resizing of root partitions in RDO Icehouse > > > > > > Hi thanks for the answers! > > > > But how is the support right now in OpenStack with centos/fedora > > images regarding the auto resizing during boot? does the disk size set > > in the flavor is respected or not, or does it work only with fedora > > and newer kernels than what CentOS uses...things like that is what I'm > > looking for On May 6, 2014 4:09 AM, "Kashyap Chamarthy" > > > wrote: On Mon, May > > 05, 2014 at 10:22:26PM -0430, El?as David wrote: > > > Hello all, > > > > > > I would like to know what's the current state of auto resizing the > > > root partition in current RDO Icehouse, more specifically, CentOS > > > and Fedora images. > > > > > > I've read many versions of the story so I'm not really sure what > > > works and what doesn't. 
> > > > > > For instance, I've read that currently, auto resizing of a CentOS > > > 6.5 image for would require the filesystem to be ext3 and I've also > > > read that auto resizing currently works only with kernels >= 3.8, so > > > what's really the deal with this currently? > > > > > > Also, it's as simple as having cloud-init, dracut-modules-growroot > > > and cloud-initramfs-tools installed on the image or are there any > > > other steps required for the auto resizing to work? > > > > > > I personally find[1] virt-resize (which works the same way on any > > images) very useful when I'd like to do resizing, as it works > > consistent well. > > > > I just tried on a Fedora 20 qcow2 cloud image with these below four > > commands and their complete output. > > > > 1. Examine the root filesystem size _inside_ the cloud image: > > > > $ virt-filesystems --long --all -h -a fedora-latest.x86_64.qcow2 > > > > Name Type VFS Label MBR Size Parent /dev/sda1 > > filesystem ext4 _/ - 1.9G - /dev/sda1 partition - > > - 83 1.9G /dev/sda /dev/sda device - - - > > 2.0G - > > > > 2. Create a new qcow2 disk of 10G: > > > > $ qemu-img create -f qcow2 -o preallocation=metadata \ > > newdisk.qcow2 10G > > > > 3. Perform the resize operation: > > > > $ virt-resize --expand /dev/sda1 fedora-latest.x86_64.qcow2 \ > > newdisk.qcow2 Examining fedora-latest.x86_64.qcow2 ... ********** > > > > Summary of changes: > > > > /dev/sda1: This partition will be resized from 1.9G to 10.0G. The > > filesystem ext4 on /dev/sda1 will be expanded using the > > 'resize2fs' method. > > > > ********** Setting up initial partition table on newdisk.qcow2 ... > > Copying /dev/sda1 ... 100% > > > ????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? > > 00:00 Expanding /dev/sda1 using the 'resize2fs' method ... > > > > Resize operation completed with no errors. Before deleting the > > old disk, carefully check that the resized disk boots and works > > correctly. > > > > 4. Examine the root file system size in the new disk (should reflect > > correctly): > > > > $ virt-filesystems --long --all -h -a newdisk.qcow2 Name > > Type VFS Label MBR Size Parent /dev/sda1 filesystem > > ext4 _/ - 10G - /dev/sda1 partition - - 83 > > 10G /dev/sda /dev/sda device - - - 10G - > > > > > > Hope that helps. > > > > > > [1] > > > http://kashyapc.com/2013/04/13/resize-a-fedora-19-guest-with-libguestfs-tools/ > > > > > > > > -- /kashyap > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > -- > /kashyap > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -- Shake Chen -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: centos65.tdl Type: application/octet-stream Size: 368 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: centos6.ks Type: application/octet-stream Size: 3001 bytes Desc: not available URL: From red at fedoraproject.org Thu May 8 11:27:00 2014 From: red at fedoraproject.org (Sandro "red" Mathys) Date: Thu, 8 May 2014 20:27:00 +0900 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: References: Message-ID: On Tue, May 6, 2014 at 11:52 AM, El?as David wrote: > Hello all, > > I would like to know what's the current state of auto resizing the root > partition in current RDO Icehouse, more specifically, CentOS and Fedora > images. I don't see how the OpenStack version matters for this, so if Icehouse does something special, ignore what I say. Fedora's official F20 images support it through cloud-utils (i.e. the cloud-utils-growpart package): http://fedoraproject.org/get-fedora#clouds Upstream of cloud-utils: https://launchpad.net/cloud-utils You can also find the kickstart file used to create the images here: https://git.fedorahosted.org/cgit/spin-kickstarts.git/tree/fedora-cloud-base.ks I think cloud-utils uses sfdisk (mbr) or sgdisk (gpt) for the actual resizing, so anything those tools support should probably be supported by growpart. -- Sandro From ramon at linux-labs.net Thu May 8 15:23:07 2014 From: ramon at linux-labs.net (Ramon Acedo) Date: Thu, 8 May 2014 16:23:07 +0100 Subject: [Rdo-list] Can't Deploy Foreman with openstack-foreman-installer for Bare Metal Provisioning (undefined method `[]' for nil:NilClass) In-Reply-To: <462856808.4337683.1399512305300.JavaMail.zimbra@redhat.com> References: <21851A5F-40F4-46BE-BF94-AA0F49A56793@linux-labs.net> <462856808.4337683.1399512305300.JavaMail.zimbra@redhat.com> Message-ID: <076E6FCE-E4EB-4D63-810A-6B87315536CF@linux-labs.net> On 8 May 2014, at 02:25, Gilles Dubreuil wrote: > > > ----- Original Message ----- >> From: "Ramon Acedo" >> To: rdo-list at redhat.com >> Sent: Wednesday, 30 April, 2014 2:14:59 AM >> Subject: [Rdo-list] Can't Deploy Foreman with openstack-foreman-installer for Bare Metal Provisioning (undefined >> method `[]' for nil:NilClass) >> >> Hi all, >> >> I have been trying to test the OpenStack Foreman Installer with different >> combinations of Foreman versions and of the installer itself (and even >> different versions of Puppet) with no success so far. >> >> I know that Packstack alone works but I want to go all the way with multiple >> hosts and bare metal provisioning to eventually use it for large deployments >> and scale out Nova Compute and other services seamlessly. >> >> The error I get when running the foreman_server.sh script is always: >> -------------- >> rake aborted! >> undefined method `[]' for nil:NilClass >> >> Tasks: TOP => db:seed >> (See full trace by running task with --trace) >> -------------- > > The above usually indicates there is something wrong with at least one puppet class. > Do you have openstack-puppet-modules installed? > > Some of the devil's details: > foreman_server.sh triggers foreman's rake to seed its database. > The nil:NiClass means something is missing and usually when it happens, to be confirmed with rake's trace/logs it's because at least one puppet class is wrong (not validated). > The above is happening because seeding script also parses puppet classes' parameters in order to inject them into Foreman. After modifying the script to use "rake ?trace? 
this is what I got: + sudo -u foreman scl enable ruby193 'cd /usr/share/foreman; rake --trace db:seed RAILS_ENV=production FOREMAN_PROVISIONING=true' ** Invoke db:seed (first_time) ** Execute db:seed ** Invoke db:abort_if_pending_migrations (first_time) ** Invoke environment (first_time) ** Execute environment ** Invoke db:load_config (first_time) ** Execute db:load_config ** Execute db:abort_if_pending_migrations Seeding /usr/share/foreman/db/seeds.d/05-architectures.rb Seeding /usr/share/foreman/db/seeds.d/07-config_templates.rb Seeding /usr/share/foreman/db/seeds.d/08-partition_tables.rb Seeding /usr/share/foreman/db/seeds.d/10-installation_media.rb Seeding /usr/share/foreman/db/seeds.d/11-permissions.rb Seeding /usr/share/foreman/db/seeds.d/11-roles.rb Seeding /usr/share/foreman/db/seeds.d/11-smart_proxy_features.rb Seeding /usr/share/foreman/db/seeds.d/12-auth_sources.rb Seeding /usr/share/foreman/db/seeds.d/13-compute_profiles.rb Seeding /usr/share/foreman/db/seeds.d/15-bookmarks.rb Seeding /usr/share/foreman/db/seeds.d/99-quickstack.rb rake aborted! undefined method `[]' for nil:NilClass I?m still trying to further debug it. > Cheers, > Gilles > >> >> After that, if Foreman starts, there?s nothing in the "Host groups" section >> which is supposed to be prepopulated by the foreman_server.sh script (as >> described in http://red.ht/1jdJ03q). >> >> The process I follow is very simple: >> >> 1. Install a clean RHEL 6.5 or CentOS 6.5 >> >> 2. Enable EPEL >> >> 3. Enable the rdo-release repo: >> >> a. rdo-release-havana-7: Foreman 1.3 and openstack-foreman-installer 1.0.6 >> b. rdo-release-havana-8: Foreman 1.5 and openstack-foreman-installer 1.0.6 >> c. rdo-release-icehouse-3: Foreman 1.5 and openstack-foreman-installer 2.0 >> (as a note here, the SCL repo needs to be enabled before the next step >> too). >> >> 4. Install openstack-foreman-installer >> >> 5. Create and export the needed variables: >> >> export PROVISIONING_INTERFACE=eth0 >> export FOREMAN_GATEWAY=192.168.5.100 >> export FOREMAN_PROVISIONING=true >> >> 6. Run the script foreman_server.sh from >> /usr/share/openstack-foreman-installer/bin >> >> For 3a and 3b I also tried with an older version of Puppet (3.2) with the >> same result. >> >> These are the full outputs: >> >> 3a: http://fpaste.org/97739/ (Havana and Foreman 1.3) >> 3b: http://fpaste.org/97760/ (Havana and Foreman 1.3 with Puppet 3.2) >> 3c: http://fpaste.org/97838/ (Icehouse and Foreman 1.5) >> >> I?m sure somebody in the list has tried to deploy and configure Foreman for >> bare metal installations (DHCP+PXE) and the documentation and the >> foreman_server.sh script suggest it should be possible in a fairly easy way. >> >> I filled a bug as it might well be one, pending confirmation: >> https://bugzilla.redhat.com/show_bug.cgi?id=1092443 >> >> Any help is really appreciated! >> >> Many thanks. >> >> Ramon >> >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ramon at linux-labs.net Thu May 8 15:27:15 2014 From: ramon at linux-labs.net (Ramon Acedo) Date: Thu, 8 May 2014 16:27:15 +0100 Subject: [Rdo-list] Can't Deploy Foreman with openstack-foreman-installer for Bare Metal Provisioning (undefined method `[]' for nil:NilClass) In-Reply-To: <20140508060356.GB8170@redhat.com> References: <21851A5F-40F4-46BE-BF94-AA0F49A56793@linux-labs.net> <20140508060356.GB8170@redhat.com> Message-ID: <58B97D82-CB30-41D0-AB32-72BAD5B37ABA@linux-labs.net> On 8 May 2014, at 07:03, Ryan O'Hara wrote: > On Tue, Apr 29, 2014 at 05:14:59PM +0100, Ramon Acedo wrote: >> Hi all, >> >> I have been trying to test the OpenStack Foreman Installer with different combinations of Foreman versions and of the installer itself (and even different versions of Puppet) with no success so far. >> >> I know that Packstack alone works but I want to go all the way with multiple hosts and bare metal provisioning to eventually use it for large deployments and scale out Nova Compute and other services seamlessly. >> >> The error I get when running the foreman_server.sh script is always: >> -------------- >> rake aborted! >> undefined method `[]' for nil:NilClass >> >> Tasks: TOP => db:seed >> (See full trace by running task with --trace) >> -------------- > > Are you by chance running foreman_server.sh from source? I've hit this > a few times when I checkout the source, put it in /root/ and then the > seed script fails because of permissions. If you are running from > source, make sure the puppet user can read the source directory. > > Not sure is this applies in your case but it will cause the problem > you're seeing. No, I?m using the script coming with the package. I tried two different versions, last one 2.0 which I think it?s the current one upstream. Thanks for having a look. > > Ryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramon at linux-labs.net Thu May 8 15:20:21 2014 From: ramon at linux-labs.net (Ramon Acedo) Date: Thu, 8 May 2014 16:20:21 +0100 Subject: [Rdo-list] Can't Deploy Foreman with openstack-foreman-installer for Bare Metal Provisioning (undefined method `[]' for nil:NilClass) In-Reply-To: <20140508060356.GB8170@redhat.com> References: <21851A5F-40F4-46BE-BF94-AA0F49A56793@linux-labs.net> <20140508060356.GB8170@redhat.com> Message-ID: <318FE75C-3F15-475A-9448-EA314674BE59@linux-labs.net> On 8 May 2014, at 07:03, Ryan O'Hara wrote: > On Tue, Apr 29, 2014 at 05:14:59PM +0100, Ramon Acedo wrote: >> Hi all, >> >> I have been trying to test the OpenStack Foreman Installer with different combinations of Foreman versions and of the installer itself (and even different versions of Puppet) with no success so far. >> >> I know that Packstack alone works but I want to go all the way with multiple hosts and bare metal provisioning to eventually use it for large deployments and scale out Nova Compute and other services seamlessly. >> >> The error I get when running the foreman_server.sh script is always: >> -------------- >> rake aborted! >> undefined method `[]' for nil:NilClass >> >> Tasks: TOP => db:seed >> (See full trace by running task with --trace) >> -------------- > > Are you by chance running foreman_server.sh from source? I've hit this > a few times when I checkout the source, put it in /root/ and then the > seed script fails because of permissions. If you are running from > source, make sure the puppet user can read the source directory. 
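As a concrete illustration of that suggestion (the checkout path is hypothetical; adjust it to wherever the source actually lives):

    # Verify the puppet and foreman users can read a source checkout before
    # re-running foreman_server.sh; a checkout left under /root usually cannot
    # be traversed by other users and makes the db:seed step fail.
    SRC=/root/astapor
    sudo -u puppet  ls "$SRC" >/dev/null && echo "puppet user can read $SRC"
    sudo -u foreman ls "$SRC" >/dev/null && echo "foreman user can read $SRC"
    # If either command reports 'Permission denied', move the checkout out of
    # /root or relax the directory modes and try again.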
> > Not sure is this applies in your case but it will cause the problem > you're seeing. > > Ryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From elias.moreno.tec at gmail.com Thu May 8 17:30:31 2014 From: elias.moreno.tec at gmail.com (=?UTF-8?B?RWzDrWFzIERhdmlk?=) Date: Thu, 8 May 2014 13:00:31 -0430 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: References: <20140506083901.GA12668@tesla.redhat.com> <20140508034830.GB26928@tesla.redhat.com> Message-ID: Thanks a lot! I'll be trying this today or tomorrow and report back On May 8, 2014 1:53 AM, "Shake Chen" wrote: > Hi > > I use oz create centos image, work perfect for openstack, support resize. > > the step is very simple > > 1: hareware machine, install centos6.5 > disable selinux > enable epel > > 2: install OZ > > yum -y install oz > > modify the oz setting , let default image is qcow2 > > /etc/oz/oz.cfg > image_type = qcow2 > > restart machine. > > 3: create two file centos6.ks and centos65.tdl > > check the attachment. you only need change > > http://172.28.0.1/cobbler/ks_mirror/CentOS6.5-x86_64/ > > to like below link > > http://mirrors.163.com/centos/6.5/os/x86_64/ > > > 4: run the command > > oz-install -p -u -d3 -a centos6.ks centos65.tdl > > the image would store /var/lib/libvirt/images > > 5: compress the image > qemu-img convert -c /var/lib/libvirt/images/centos_65_x86_64.qcow2 -O > qcow2 \ > /root/centos_65_x86_64.qcow2 > > Now the image is ok, upload to openstack > > the image only support key login > > username is cloud-user > > you can check the centos6.ks ,the ks is change from > http://repos.fedorapeople.org/repos/openstack/guest-images/ > > I also upload the image, you can try it. > > http://yunpan.cn/QiQ6syasRAH7Q > > password: 90e3 > > > > > > > > > > > > > > > > > > > > > On Thu, May 8, 2014 at 11:48 AM, Kashyap Chamarthy wrote: > >> On Wed, May 07, 2014 at 02:31:43PM +0000, St. George, Allan L. wrote: >> > I haven?t had the time to work with Icehouse yet, but I have outlined >> > instruction that are used to create Havana CentOS images that resize >> > automatically upon spawning via linux-rootfs-resize. >> > >> > If interested, I?ll forward it along. >> >> That'd be useful. It'd be even better if you could make a quick RDO wiki >> page[1] that'll be indexed by the search engines. >> >> >> [1] http://openstack.redhat.com/ >> >> PS: If you're a Markdown user, you can convert Markdown -> WikiMedia >> (RDO uses WikiMedia for wiki) trivially like this: >> >> $ pandoc -f markdown -t Mediawiki foo.md -o foo.wiki >> >> > >> > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] >> > On Behalf Of El?as David Sent: Tuesday, May 06, 2014 12:57 PM To: >> > Kashyap Chamarthy Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] >> > Automatic resizing of root partitions in RDO Icehouse >> > >> > >> > Hi thanks for the answers! >> > >> > But how is the support right now in OpenStack with centos/fedora >> > images regarding the auto resizing during boot? 
does the disk size set >> > in the flavor is respected or not, or does it work only with fedora >> > and newer kernels than what CentOS uses...things like that is what I'm >> > looking for On May 6, 2014 4:09 AM, "Kashyap Chamarthy" >> > > wrote: On Mon, May >> > 05, 2014 at 10:22:26PM -0430, El?as David wrote: >> > > Hello all, >> > > >> > > I would like to know what's the current state of auto resizing the >> > > root partition in current RDO Icehouse, more specifically, CentOS >> > > and Fedora images. >> > > >> > > I've read many versions of the story so I'm not really sure what >> > > works and what doesn't. >> > > >> > > For instance, I've read that currently, auto resizing of a CentOS >> > > 6.5 image for would require the filesystem to be ext3 and I've also >> > > read that auto resizing currently works only with kernels >= 3.8, so >> > > what's really the deal with this currently? >> > > >> > > Also, it's as simple as having cloud-init, dracut-modules-growroot >> > > and cloud-initramfs-tools installed on the image or are there any >> > > other steps required for the auto resizing to work? >> > >> > >> > I personally find[1] virt-resize (which works the same way on any >> > images) very useful when I'd like to do resizing, as it works >> > consistent well. >> > >> > I just tried on a Fedora 20 qcow2 cloud image with these below four >> > commands and their complete output. >> > >> > 1. Examine the root filesystem size _inside_ the cloud image: >> > >> > $ virt-filesystems --long --all -h -a fedora-latest.x86_64.qcow2 >> > >> > Name Type VFS Label MBR Size Parent /dev/sda1 >> > filesystem ext4 _/ - 1.9G - /dev/sda1 partition - >> > - 83 1.9G /dev/sda /dev/sda device - - - >> > 2.0G - >> > >> > 2. Create a new qcow2 disk of 10G: >> > >> > $ qemu-img create -f qcow2 -o preallocation=metadata \ >> > newdisk.qcow2 10G >> > >> > 3. Perform the resize operation: >> > >> > $ virt-resize --expand /dev/sda1 fedora-latest.x86_64.qcow2 \ >> > newdisk.qcow2 Examining fedora-latest.x86_64.qcow2 ... ********** >> > >> > Summary of changes: >> > >> > /dev/sda1: This partition will be resized from 1.9G to 10.0G. The >> > filesystem ext4 on /dev/sda1 will be expanded using the >> > 'resize2fs' method. >> > >> > ********** Setting up initial partition table on newdisk.qcow2 ... >> > Copying /dev/sda1 ... 100% >> > >> ????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? >> > 00:00 Expanding /dev/sda1 using the 'resize2fs' method ... >> > >> > Resize operation completed with no errors. Before deleting the >> > old disk, carefully check that the resized disk boots and works >> > correctly. >> > >> > 4. Examine the root file system size in the new disk (should reflect >> > correctly): >> > >> > $ virt-filesystems --long --all -h -a newdisk.qcow2 Name >> > Type VFS Label MBR Size Parent /dev/sda1 filesystem >> > ext4 _/ - 10G - /dev/sda1 partition - - 83 >> > 10G /dev/sda /dev/sda device - - - 10G - >> > >> > >> > Hope that helps. 
>> > >> > >> > [1] >> > >> http://kashyapc.com/2013/04/13/resize-a-fedora-19-guest-with-libguestfs-tools/ >> > >> > >> > >> > -- /kashyap >> >> > _______________________________________________ >> > Rdo-list mailing list >> > Rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> >> >> -- >> /kashyap >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> > > > > -- > Shake Chen > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gilles at redhat.com Thu May 8 22:25:39 2014 From: gilles at redhat.com (Gilles Dubreuil) Date: Thu, 8 May 2014 18:25:39 -0400 (EDT) Subject: [Rdo-list] Can't Deploy Foreman with openstack-foreman-installer for Bare Metal Provisioning (undefined method `[]' for nil:NilClass) In-Reply-To: <58B97D82-CB30-41D0-AB32-72BAD5B37ABA@linux-labs.net> References: <21851A5F-40F4-46BE-BF94-AA0F49A56793@linux-labs.net> <20140508060356.GB8170@redhat.com> <58B97D82-CB30-41D0-AB32-72BAD5B37ABA@linux-labs.net> Message-ID: <1073432226.4869560.1399587939175.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Ramon Acedo" > > Hi all, > > I have been trying to test the OpenStack Foreman Installer with different > combinations of Foreman versions and of the installer itself (and even > different versions of Puppet) with no success so far. > > I know that Packstack alone works but I want to go all the way with multiple > hosts and bare metal provisioning to eventually use it for large deployments > and scale out Nova Compute and other services seamlessly. > > The error I get when running the foreman_server.sh script is always: > -------------- > rake aborted! > undefined method `[]' for nil:NilClass > > Tasks: TOP => db:seed > (See full trace by running task with --trace) > -------------- > > Are you by chance running foreman_server.sh from source? I've hit this > a few times when I checkout the source, put it in /root/ and then the > seed script fails because of permissions. If you are running from > source, make sure the puppet user can read the source directory. > > Not sure is this applies in your case but it will cause the problem > you're seeing. > > No, I?m using the script coming with the package. I tried two different > versions, last one 2.0 which I think it?s the current one upstream. > If you're using vanilla packages then please fill a bug. > Thanks for having a look. Thanks, Gilles From mail-lists at karan.org Thu May 8 23:51:32 2014 From: mail-lists at karan.org (Karanbir Singh) Date: Fri, 09 May 2014 00:51:32 +0100 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: References: Message-ID: <536C1884.3000103@karan.org> On 05/06/2014 03:52 AM, El?as David wrote: > Hello all, > > I would like to know what's the current state of auto resizing the root > partition in current RDO Icehouse, more specifically, CentOS and Fedora > images. yum install dracut-modules-growroot.noarch && reboot for even more win, just install that ( from epel on CentOS ) when you build the image, so its run automagically on instantiation. -- Karanbir Singh +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh GnuPG Key : http://www.karan.org/publickey.asc From ALLAN.L.ST.GEORGE at leidos.com Fri May 9 13:04:56 2014 From: ALLAN.L.ST.GEORGE at leidos.com (St. George, Allan L.) 
Date: Fri, 9 May 2014 13:04:56 +0000 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: <20140508034830.GB26928@tesla.redhat.com> References: <20140506083901.GA12668@tesla.redhat.com> <20140508034830.GB26928@tesla.redhat.com> Message-ID: I'm sure someone could make this better, but this is what I've been using and it works well: V/R, Allan 1. Create disk image with QCOW2 format qemu-img create -f qcow2 /tmp/centos-6.5-working.qcow2 10G 2. Install CentOS; Install onto a single ext4 partition mounted to ?/? (no /boot, /swap, etc.) virt-install --virt-type {kvm or qemu} --name centos-6.5 --ram 1024 \ --cdrom=/tmp/CentOS-6.5-x86_64-minimal.iso \ --disk /tmp/centos-6.5-working.qcow2,format=qcow2 \ --network network=default \ --graphics vnc,listen=0.0.0.0 --noautoconsole \ --os-type=linux --os-variant=rhel6 3. Eject the disk and reboot the virtual machine virsh attach-disk --type cdrom --mode readonly centos-6.5 "" hdc virsh destroy centos-6.5 virsh start centos-6.5 4. After reboot, login into your new image and modify '/etc/sysconfig/network-scripts/ifcfg-eth0' to look like this DEVICE="eth0" BOOTPROTO="dhcp" NM_CONTROLLED="no" ONBOOT="yes" TYPE="Ethernet" 5. Add EPEL repository and update OS rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm rpm -ivh https://yum.puppetlabs.com/el/6/products/x86_64/puppetlabs-release-6-7.noarch.rpm 6. Update yum and install cloud-init yum -y update yum install cloud-utils cloud-init parted git cd /tmp git clone https://github.com/flegmatik/linux-rootfs-resize.git (installed in place of cloud-initramfs-tools) cd linux-rootfs-resize ./install Edit /etc/cloud/cloud.cfg Add the line: user: ec2-user Under ?cloud_init_modules?, add: - resolv-conf 7. Install and configure puppet yum install puppet edit /etc/hosts and add entry for foreman edit /etc/puppet/puppet.conf and add the following lines: [main] pluginsync = true [agent] runinterval=1800 server = {server.domain} chkconfig puppet on 8. Enable the instance to access the metadata service echo "NOZEROCONF=yes" >> /etc/sysconfig/network 9. Configure /etc/ssh/sshd_config Uncomment the following lines: PermitRootLogin yes PasswordAuthentication yes 10. Power down your virtual Centos machine 11. Clean up the virtual machine of MAC address, etc. virt-sysprep -d centos-6.5 12. Undefine the libvirt domain virsh undefine centos-6.5 13. Compress QCOW2 image with qemu-img convert -c /tmp/centos-6.5-working.qcow2 -O qcow2 /tmp/centos.qcow2 Image /tmp/centos-6.5.qcow2 is now ready for upload to Openstack -----Original Message----- From: Kashyap Chamarthy [mailto:kchamart at redhat.com] Sent: Wednesday, May 07, 2014 11:49 PM To: St. George, Allan L. Cc: rdo-list at redhat.com; El?as David Subject: Re: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse On Wed, May 07, 2014 at 02:31:43PM +0000, St. George, Allan L. wrote: > I haven?t had the time to work with Icehouse yet, but I have outlined > instruction that are used to create Havana CentOS images that resize > automatically upon spawning via linux-rootfs-resize. > > If interested, I?ll forward it along. That'd be useful. It'd be even better if you could make a quick RDO wiki page[1] that'll be indexed by the search engines. 
[1] http://openstack.redhat.com/ PS: If you're a Markdown user, you can convert Markdown -> WikiMedia (RDO uses WikiMedia for wiki) trivially like this: $ pandoc -f markdown -t Mediawiki foo.md -o foo.wiki > > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] > On Behalf Of El?as David Sent: Tuesday, May 06, 2014 12:57 PM To: > Kashyap Chamarthy Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] > Automatic resizing of root partitions in RDO Icehouse > > > Hi thanks for the answers! > > But how is the support right now in OpenStack with centos/fedora > images regarding the auto resizing during boot? does the disk size set > in the flavor is respected or not, or does it work only with fedora > and newer kernels than what CentOS uses...things like that is what I'm > looking for On May 6, 2014 4:09 AM, "Kashyap Chamarthy" > > wrote: On Mon, May > 05, 2014 at 10:22:26PM -0430, El?as David wrote: > > Hello all, > > > > I would like to know what's the current state of auto resizing the > > root partition in current RDO Icehouse, more specifically, CentOS > > and Fedora images. > > > > I've read many versions of the story so I'm not really sure what > > works and what doesn't. > > > > For instance, I've read that currently, auto resizing of a CentOS > > 6.5 image for would require the filesystem to be ext3 and I've also > > read that auto resizing currently works only with kernels >= 3.8, so > > what's really the deal with this currently? > > > > Also, it's as simple as having cloud-init, dracut-modules-growroot > > and cloud-initramfs-tools installed on the image or are there any > > other steps required for the auto resizing to work? > > > I personally find[1] virt-resize (which works the same way on any > images) very useful when I'd like to do resizing, as it works > consistent well. > > I just tried on a Fedora 20 qcow2 cloud image with these below four > commands and their complete output. > > 1. Examine the root filesystem size _inside_ the cloud image: > > $ virt-filesystems --long --all -h -a fedora-latest.x86_64.qcow2 > > Name Type VFS Label MBR Size Parent /dev/sda1 > filesystem ext4 _/ - 1.9G - /dev/sda1 partition - > - 83 1.9G /dev/sda /dev/sda device - - - > 2.0G - > > 2. Create a new qcow2 disk of 10G: > > $ qemu-img create -f qcow2 -o preallocation=metadata \ > newdisk.qcow2 10G > > 3. Perform the resize operation: > > $ virt-resize --expand /dev/sda1 fedora-latest.x86_64.qcow2 \ > newdisk.qcow2 Examining fedora-latest.x86_64.qcow2 ... ********** > > Summary of changes: > > /dev/sda1: This partition will be resized from 1.9G to 10.0G. The > filesystem ext4 on /dev/sda1 will be expanded using the > 'resize2fs' method. > > ********** Setting up initial partition table on newdisk.qcow2 ... > Copying /dev/sda1 ... 100% > ????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? > 00:00 Expanding /dev/sda1 using the 'resize2fs' method ... > > Resize operation completed with no errors. Before deleting the > old disk, carefully check that the resized disk boots and works > correctly. > > 4. Examine the root file system size in the new disk (should reflect > correctly): > > $ virt-filesystems --long --all -h -a newdisk.qcow2 Name > Type VFS Label MBR Size Parent /dev/sda1 filesystem > ext4 _/ - 10G - /dev/sda1 partition - - 83 > 10G /dev/sda /dev/sda device - - - 10G - > > > Hope that helps. 
> > > [1] > > http://kashyapc.com/2013/04/13/resize-a-fedora-19-guest-with-libguestf > s-tools/ > > > > -- /kashyap > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -- /kashyap From elias.moreno.tec at gmail.com Sat May 10 16:58:45 2014 From: elias.moreno.tec at gmail.com (=?UTF-8?B?RWzDrWFzIERhdmlk?=) Date: Sat, 10 May 2014 12:28:45 -0430 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: References: <20140506083901.GA12668@tesla.redhat.com> <20140508034830.GB26928@tesla.redhat.com> Message-ID: Hey, thanks! this method indeed worked nicely with CentOS 6.5 image in RDO Icehouse! :D I didn't do the puppet part since I've no puppet server to test but it wasn't needed, also I used virt-sparcify instead of step 13 qemu-image convert I also tried the oz-install method but it failed everytime with the following exception: "raise oz.OzException.OzException("No disk activity in %d seconds, failing. %s" % (inactivity_timeout, screenshot_text))" No matter the install type (url or iso) and didn't matter creating this in different machines with different specs (more ram, cpu, fast disks...) Anyhow, thank you all for the help and tips! very appreciated ;) Any chance to include this method in RDO docs? On Fri, May 9, 2014 at 8:34 AM, St. George, Allan L. < ALLAN.L.ST.GEORGE at leidos.com> wrote: > I'm sure someone could make this better, but this is what I've been using > and it works well: > > V/R, > > Allan > > 1. Create disk image with QCOW2 format > > qemu-img create -f qcow2 /tmp/centos-6.5-working.qcow2 10G > > 2. Install CentOS; Install onto a single ext4 partition mounted to ?/? (no > /boot, /swap, etc.) > > virt-install --virt-type {kvm or qemu} --name centos-6.5 --ram 1024 \ > --cdrom=/tmp/CentOS-6.5-x86_64-minimal.iso \ > --disk /tmp/centos-6.5-working.qcow2,format=qcow2 \ > --network network=default \ > --graphics vnc,listen=0.0.0.0 --noautoconsole \ > --os-type=linux --os-variant=rhel6 > > 3. Eject the disk and reboot the virtual machine > > virsh attach-disk --type cdrom --mode readonly centos-6.5 "" hdc > virsh destroy centos-6.5 > virsh start centos-6.5 > > 4. After reboot, login into your new image and modify > '/etc/sysconfig/network-scripts/ifcfg-eth0' to look like this > > DEVICE="eth0" > BOOTPROTO="dhcp" > NM_CONTROLLED="no" > ONBOOT="yes" > TYPE="Ethernet" > > 5. Add EPEL repository and update OS > > rpm -ivh > http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > rpm -ivh > https://yum.puppetlabs.com/el/6/products/x86_64/puppetlabs-release-6-7.noarch.rpm > > 6. Update yum and install cloud-init > > yum -y update > yum install cloud-utils cloud-init parted git > cd /tmp > git clone https://github.com/flegmatik/linux-rootfs-resize.git (installed > in place of cloud-initramfs-tools) > cd linux-rootfs-resize > ./install > > Edit /etc/cloud/cloud.cfg > > Add the line: > > user: ec2-user > Under ?cloud_init_modules?, add: > - resolv-conf > > 7. Install and configure puppet > > yum install puppet > edit /etc/hosts and add entry for foreman > edit /etc/puppet/puppet.conf and add the following lines: > > [main] > pluginsync = true > [agent] > runinterval=1800 > server = {server.domain} > chkconfig puppet on > > 8. Enable the instance to access the metadata service > > echo "NOZEROCONF=yes" >> /etc/sysconfig/network > > 9. 
Configure /etc/ssh/sshd_config > > Uncomment the following lines: > > PermitRootLogin yes > PasswordAuthentication yes > > 10. Power down your virtual Centos machine > > 11. Clean up the virtual machine of MAC address, etc. > > virt-sysprep -d centos-6.5 > > 12. Undefine the libvirt domain > > virsh undefine centos-6.5 > > 13. Compress QCOW2 image with > > qemu-img convert -c /tmp/centos-6.5-working.qcow2 -O qcow2 > /tmp/centos.qcow2 > > > Image /tmp/centos-6.5.qcow2 is now ready for upload to Openstack > > > -----Original Message----- > From: Kashyap Chamarthy [mailto:kchamart at redhat.com] > Sent: Wednesday, May 07, 2014 11:49 PM > To: St. George, Allan L. > Cc: rdo-list at redhat.com; El?as David > Subject: Re: [Rdo-list] Automatic resizing of root partitions in RDO > Icehouse > > On Wed, May 07, 2014 at 02:31:43PM +0000, St. George, Allan L. wrote: > > I haven?t had the time to work with Icehouse yet, but I have outlined > > instruction that are used to create Havana CentOS images that resize > > automatically upon spawning via linux-rootfs-resize. > > > > If interested, I?ll forward it along. > > That'd be useful. It'd be even better if you could make a quick RDO wiki > page[1] that'll be indexed by the search engines. > > > [1] http://openstack.redhat.com/ > > PS: If you're a Markdown user, you can convert Markdown -> WikiMedia (RDO > uses WikiMedia for wiki) trivially like this: > > $ pandoc -f markdown -t Mediawiki foo.md -o foo.wiki > > > > > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] > > On Behalf Of El?as David Sent: Tuesday, May 06, 2014 12:57 PM To: > > Kashyap Chamarthy Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] > > Automatic resizing of root partitions in RDO Icehouse > > > > > > Hi thanks for the answers! > > > > But how is the support right now in OpenStack with centos/fedora > > images regarding the auto resizing during boot? does the disk size set > > in the flavor is respected or not, or does it work only with fedora > > and newer kernels than what CentOS uses...things like that is what I'm > > looking for On May 6, 2014 4:09 AM, "Kashyap Chamarthy" > > > wrote: On Mon, May > > 05, 2014 at 10:22:26PM -0430, El?as David wrote: > > > Hello all, > > > > > > I would like to know what's the current state of auto resizing the > > > root partition in current RDO Icehouse, more specifically, CentOS > > > and Fedora images. > > > > > > I've read many versions of the story so I'm not really sure what > > > works and what doesn't. > > > > > > For instance, I've read that currently, auto resizing of a CentOS > > > 6.5 image for would require the filesystem to be ext3 and I've also > > > read that auto resizing currently works only with kernels >= 3.8, so > > > what's really the deal with this currently? > > > > > > Also, it's as simple as having cloud-init, dracut-modules-growroot > > > and cloud-initramfs-tools installed on the image or are there any > > > other steps required for the auto resizing to work? > > > > > > I personally find[1] virt-resize (which works the same way on any > > images) very useful when I'd like to do resizing, as it works > > consistent well. > > > > I just tried on a Fedora 20 qcow2 cloud image with these below four > > commands and their complete output. > > > > 1. 
Examine the root filesystem size _inside_ the cloud image: > > > > $ virt-filesystems --long --all -h -a fedora-latest.x86_64.qcow2 > > > > Name Type VFS Label MBR Size Parent /dev/sda1 > > filesystem ext4 _/ - 1.9G - /dev/sda1 partition - > > - 83 1.9G /dev/sda /dev/sda device - - - > > 2.0G - > > > > 2. Create a new qcow2 disk of 10G: > > > > $ qemu-img create -f qcow2 -o preallocation=metadata \ > > newdisk.qcow2 10G > > > > 3. Perform the resize operation: > > > > $ virt-resize --expand /dev/sda1 fedora-latest.x86_64.qcow2 \ > > newdisk.qcow2 Examining fedora-latest.x86_64.qcow2 ... ********** > > > > Summary of changes: > > > > /dev/sda1: This partition will be resized from 1.9G to 10.0G. The > > filesystem ext4 on /dev/sda1 will be expanded using the > > 'resize2fs' method. > > > > ********** Setting up initial partition table on newdisk.qcow2 ... > > Copying /dev/sda1 ... 100% > > > ????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? > > 00:00 Expanding /dev/sda1 using the 'resize2fs' method ... > > > > Resize operation completed with no errors. Before deleting the > > old disk, carefully check that the resized disk boots and works > > correctly. > > > > 4. Examine the root file system size in the new disk (should reflect > > correctly): > > > > $ virt-filesystems --long --all -h -a newdisk.qcow2 Name > > Type VFS Label MBR Size Parent /dev/sda1 filesystem > > ext4 _/ - 10G - /dev/sda1 partition - - 83 > > 10G /dev/sda /dev/sda device - - - 10G - > > > > > > Hope that helps. > > > > > > [1] > > > > http://kashyapc.com/2013/04/13/resize-a-fedora-19-guest-with-libguestf > > s-tools/ > > > > > > > > -- /kashyap > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > -- > /kashyap > -- El?as David. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Sun May 11 05:24:52 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Sun, 11 May 2014 01:24:52 -0400 (EDT) Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: References: <20140506083901.GA12668@tesla.redhat.com> <20140508034830.GB26928@tesla.redhat.com> Message-ID: <1896853142.3092578.1399785892825.JavaMail.zimbra@redhat.com> > Hey, thanks! this method indeed worked nicely with CentOS 6.5 image in RDO > Icehouse! :D > > I didn't do the puppet part since I've no puppet server to test but it > wasn't needed, also I used virt-sparcify instead of step 13 qemu-image > convert > > I also tried the oz-install method but it failed everytime with the > following exception: > > "raise oz.OzException.OzException("No disk activity in %d seconds, failing. > %s" % (inactivity_timeout, screenshot_text))" > > No matter the install type (url or iso) and didn't matter creating this in > different machines with different specs (more ram, cpu, fast disks...) I just did a simple test to create a guest via Oz on Fedora 20 and it just works. Here's my invocation details: TDL file: $ cat f20.tdl Invoke `oz-install`: $ oz-install -d 4 f20.tdl 2>&1 | tee f20.log Once the install is done, define the libvirt XML for the guest and start it: $ virsh define f20-jeos $ virsh start f20-jeos --console > > Anyhow, thank you all for the help and tips! very appreciated ;) > > Any chance to include this method in RDO docs? "It's a wiki, be bold" :-). 
You can trivially make docs once you login with your OpenID or other mechanisms listed on RDO wiki. /kashyap From elias.moreno.tec at gmail.com Sun May 11 23:13:50 2014 From: elias.moreno.tec at gmail.com (=?UTF-8?B?RWzDrWFzIERhdmlk?=) Date: Sun, 11 May 2014 18:43:50 -0430 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: <1896853142.3092578.1399785892825.JavaMail.zimbra@redhat.com> References: <20140506083901.GA12668@tesla.redhat.com> <20140508034830.GB26928@tesla.redhat.com> <1896853142.3092578.1399785892825.JavaMail.zimbra@redhat.com> Message-ID: Oh, no problem here either if I use the tdl only, using both a kickstart (-a) and a tdl is where the problem began, every step was slow, partitioning, package install, post-install scripts..., and so on until the error showed up. I'm pretty sure it was an error on my part, more likely in the kickstart. In the end, for the fedora images I settled using appliance-creator and got an image able to autogrow using one of the fedora 20 cloud kickstarts as base (kickstarts from here: https://git.fedorahosted.org/cgit/cloud-kickstarts.git/tree/generic) I really appreciate all your help! I'll be documenting later in the RDO wiki for future reference On Sun, May 11, 2014 at 12:54 AM, Kashyap Chamarthy wrote: > > Hey, thanks! this method indeed worked nicely with CentOS 6.5 image in > RDO > > Icehouse! :D > > > > I didn't do the puppet part since I've no puppet server to test but it > > wasn't needed, also I used virt-sparcify instead of step 13 qemu-image > > convert > > > > I also tried the oz-install method but it failed everytime with the > > following exception: > > > > "raise oz.OzException.OzException("No disk activity in %d seconds, > failing. > > %s" % (inactivity_timeout, screenshot_text))" > > > > No matter the install type (url or iso) and didn't matter creating this > in > > different machines with different specs (more ram, cpu, fast disks...) > > I just did a simple test to create a guest via Oz on Fedora 20 and it > just works. Here's my invocation details: > > TDL file: > > $ cat f20.tdl > > > > Invoke `oz-install`: > > $ oz-install -d 4 f20.tdl 2>&1 | tee f20.log > > > Once the install is done, define the libvirt XML for the guest and start > it: > > $ virsh define f20-jeos > $ virsh start f20-jeos --console > > > > > Anyhow, thank you all for the help and tips! very appreciated ;) > > > > Any chance to include this method in RDO docs? > > "It's a wiki, be bold" :-). You can trivially make docs once you > login with your OpenID or other mechanisms listed on RDO wiki. > > /kashyap > -- El?as David. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mtaylor at redhat.com Mon May 12 13:08:41 2014 From: mtaylor at redhat.com (Martyn Taylor) Date: Mon, 12 May 2014 14:08:41 +0100 Subject: [Rdo-list] [OFI] Astapor, Foreman, Staypuft interaction Message-ID: <5370C7D9.5070006@redhat.com> All, I recently had some discussion about HA orchestration this morning with Petr Chalupa. Particular around the HA Controller node deployment. This particular role behaves slightly differently to the other roles in a Staypuft deployment in that it requires more than one puppet run to complete. Up to now we have worked on the assumption that once we have received a successful puppet run report in foreman, then the node associated with the role is configured and ready to go. We use this for scheduling the next list of nodes in a given deployment. 
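(For what it's worth, that "successful report" check is really just a
look at the host's most recent report in Foreman. A rough sketch of
doing the same thing by hand, assuming the standard v2 REST API -- the
host name and credentials here are only placeholders:

    # Fetch the last report Foreman has for the node and inspect its
    # status counters; a clean run shows "failed": 0 and
    # "failed_restarts": 0.
    curl -s -k -u admin:changeme -H 'Accept: application/json' \
        https://foreman.example.com/api/v2/hosts/controller1.example.com/reports/last \
        | python -m json.tool | grep -A 8 '"status"'

Of course this only says that the last run finished without failures,
not that the role as a whole is done, which is exactly the gap
described below.)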
We do have a work around for HA Controller issue described above in the astapor modules. Blocking is implemented in the subsequent puppet modules that are dependent on the HA Controller services. This means that any depdendent modules will wait until controller completes before proceeding. This results in the following behaviour. Sequence - Controller Nodes Provisioned. - First puppet run returns successful. - LVM Block Storage is provisioned. - Controller Node puppet run 2 completes - LVM Block storage puppet run completes. In this case, the LVM block storage is provisioned before the controllers are complete, but will block until the Controller puppet run 2 completes. This work around is sufficient for the time being. But really what we would like is to have Staypuft orchestrate the whole process, rather than it be partially orchestrated by the puppet modules, partially by Staypuft orchestration. The difficulty we have right now in Staypuft is that (with out knowing the specific implementation details of the puppet modules), there is no clear way to detect whether a node with role X is complete and we are able to schedule the next roles in the sequence. What we need here is a clear interface for determining status of puppet class and/or HostGroup status for the Astapor modules. I have 2 questions around this, 1. Does there currently exist anyway to consistently detect the status of a role/list of classes within Foreman for Astapor classes that we can utilize? -. If so can we do this without knowing the implementation details of the Astapor puppet modules? (We do not want to, for example, look for class specific facts in foreman, since these vary between classes and may change in Astapor)? 2. If not 1. Is is possible to add something to the puppet modules to explicitly show that a class/Hostgroup is complete? I am thinking something along the lines of reporting a "Ready" flag back to foreman. If none of the above, any other suggestions? Cheers Martyn From jhu_com at 163.com Mon May 12 13:16:17 2014 From: jhu_com at 163.com (HuJun) Date: Mon, 12 May 2014 21:16:17 +0800 Subject: [Rdo-list] keystone error when run packstack Message-ID: <5370C9A1.7080507@163.com> Hi Guys: I met a keystone error when run packstack on Fedora 20, It seems like parameters error. what will I do for escaping it? [root at cloudf ~]# cat /etc/fedora-release Fedora release 20 (Heisenbug) [root at cloudf ~]# uname -a Linux cloudf 3.14.2-200.fc20.x86_64 #1 SMP Mon Apr 28 14:40:57 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux [root at cloudf ~]# packstack --answer-file=packstack-answers-20140512-202737.txt Welcome to Installer setup utility Installing: Clean Up... [ DONE ] Setting up ssh keys... [ DONE ] Discovering hosts' details... [ DONE ] Adding pre install manifest entries... [ DONE ] Adding MySQL manifest entries... [ DONE ] Adding QPID manifest entries... [ DONE ] Adding Keystone manifest entries... [ DONE ] Adding Glance Keystone manifest entries... [ DONE ] Adding Glance manifest entries... [ DONE ] Installing dependencies for Cinder... [ DONE ] Adding Cinder Keystone manifest entries... [ DONE ] Adding Cinder manifest entries... [ DONE ] Checking if the Cinder server has a cinder-volumes vg...[ DONE ] Adding Nova API manifest entries... [ DONE ] Adding Nova Keystone manifest entries... [ DONE ] Adding Nova Cert manifest entries... [ DONE ] Adding Nova Conductor manifest entries... [ DONE ] Adding Nova Compute manifest entries... [ DONE ] Adding Nova Scheduler manifest entries... 
[ DONE ] Adding Nova VNC Proxy manifest entries... [ DONE ] Adding Nova Common manifest entries... [ DONE ] Adding Openstack Network-related Nova manifest entries...[ DONE ] Adding Neutron API manifest entries... [ DONE ] Adding Neutron Keystone manifest entries... [ DONE ] Adding Neutron L3 manifest entries... [ DONE ] Adding Neutron L2 Agent manifest entries... [ DONE ] Adding Neutron DHCP Agent manifest entries... [ DONE ] Adding Neutron LBaaS Agent manifest entries... [ DONE ] Adding Neutron Metadata Agent manifest entries... [ DONE ] Adding OpenStack Client manifest entries... [ DONE ] Adding Horizon manifest entries... [ DONE ] Adding Swift Keystone manifest entries... [ DONE ] Adding Swift builder manifest entries... [ DONE ] Adding Swift proxy manifest entries... [ DONE ] Adding Swift storage manifest entries... [ DONE ] Adding Swift common manifest entries... [ DONE ] Adding Provisioning manifest entries... [ DONE ] Adding Ceilometer manifest entries... [ DONE ] Adding Ceilometer Keystone manifest entries... [ DONE ] Adding Nagios server manifest entries... [ DONE ] Adding Nagios host manifest entries... [ DONE ] Adding post install manifest entries... [ DONE ] Preparing servers... [ DONE ] Installing Dependencies... [ DONE ] Copying Puppet modules and manifests... [ DONE ] Applying Puppet manifests... Applying 192.168.0.101_prescript.pp 192.168.0.101_prescript.pp : [ DONE ] Applying 192.168.0.101_mysql.pp Applying 192.168.0.101_qpid.pp 192.168.0.101_mysql.pp : [ DONE ] 192.168.0.101_qpid.pp : [ DONE ] Applying 192.168.0.101_keystone.pp Applying 192.168.0.101_glance.pp Applying 192.168.0.101_cinder.pp [ ERROR ] ERROR : Error appeared during Puppet run: 192.168.0.101_keystone.pp Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[_member_]: Could not evaluate: Execution of '/usr/bin/keystone --endpoint http://127.0.0.1:35357/v2.0/ role-list' returned 2: usage: keystone [--version] [--timeout ] You will find full trace in log /var/tmp/packstack/20140512-210500-q42ayh/manifests/192.168.0.101_keystone.pp.log Please check log file /var/tmp/packstack/20140512-210500-q42ayh/openstack-setup.log for more information Additional information: * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components. * Did not create a cinder volume group, one already existed * File /root/keystonerc_admin has been created on OpenStack client host 192.168.0.101. To use the command line tools you need to source the file. * To access the OpenStack Dashboard browse to http://192.168.0.101/dashboard. Please, find your login credentials stored in the keystonerc_admin in your home directory. * To use Nagios, browse to http://192.168.0.101/nagios username : nagiosadmin, password : 971d4caec4534007 -- -------------------- Jun Hu mobile:186 8035 6499 Tel :0755-8282 2635 email :jhu at novell.com jhu_com at 163.com Suse, China ---------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From jguiditt at redhat.com Mon May 12 13:30:31 2014 From: jguiditt at redhat.com (Jason Guiditta) Date: Mon, 12 May 2014 09:30:31 -0400 Subject: [Rdo-list] [OFI] Astapor, Foreman, Staypuft interaction In-Reply-To: <5370C7D9.5070006@redhat.com> References: <5370C7D9.5070006@redhat.com> Message-ID: <20140512133031.GB3693@redhat.com> On 12/05/14 14:08 +0100, Martyn Taylor wrote: >All, > >I recently had some discussion about HA orchestration this morning >with Petr Chalupa. 
Particular around the HA Controller node >deployment. This particular role behaves slightly differently to the >other roles in a Staypuft deployment in that it requires more than one >puppet run to complete. > >Up to now we have worked on the assumption that once we have received >a successful puppet run report in foreman, then the node associated >with the role is configured and ready to go. We use this for >scheduling the next list of nodes in a given deployment. > >We do have a work around for HA Controller issue described above in >the astapor modules. Blocking is implemented in the subsequent puppet >modules that are dependent on the HA Controller services. This means >that any depdendent modules will wait until controller completes >before proceeding. This results in the following behaviour. > >Sequence >- Controller Nodes Provisioned. >- First puppet run returns successful. >- LVM Block Storage is provisioned. >- Controller Node puppet run 2 completes >- LVM Block storage puppet run completes. > >In this case, the LVM block storage is provisioned before the >controllers are complete, but will block until the Controller puppet >run 2 completes. > >This work around is sufficient for the time being. But really what we >would like is to have Staypuft orchestrate the whole process, rather >than it be partially orchestrated by the puppet modules, partially by >Staypuft orchestration. > >The difficulty we have right now in Staypuft is that (with out knowing >the specific implementation details of the puppet modules), there is >no clear way to detect whether a node with role X is complete and we >are able to schedule the next roles in the sequence. > >What we need here is a clear interface for determining status of >puppet class and/or HostGroup status for the Astapor modules. > >I have 2 questions around this, > >1. Does there currently exist anyway to consistently detect the >status of a role/list of classes within Foreman for Astapor classes >that we can utilize? > -. If so can we do this without knowing the implementation details >of the Astapor puppet modules? (We do not want to, for example, look >for class specific facts in foreman, since these vary between classes >and may change in Astapor)? > >2. If not 1. Is is possible to add something to the puppet modules >to explicitly show that a class/Hostgroup is complete? I am thinking >something along the lines of reporting a "Ready" flag back to foreman. > I'll have to think about it more, but we already have a fact similar to this that we use in quickstack for determining if ha-mysql is ready, so we can decide whether to do certain other steps. Crag had some concern that we were seeing an odd behavior with puppet agent running as a service though, not sure if he and Petr looked at it friday or not. In case they did not, his theory was that the puppet facts from the node were not getting updated correctly between agent runs when the agent was not a service. It seemed that the node was reporting in and the next run still did not have the new value for the fact (so in this case, the second run should show ha_mysql_ready=true or similar). The fact was correct when puppet agent was run in the foreground for each run, so I believe the thought was that when agent ran as a service, facts were being cached and not updated. I am unsure if this has yet been either proved or disproved, just mentioning it in case it is a real issue. 
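(A quick way to prove or disprove that would be to compare what facter
reports on the node with what Foreman has stored for the same fact
between two agent runs. A rough sketch, where the host name,
credentials and the exact fact name are just placeholders:

    # On the controller node itself: the current value as puppet sees
    # it, including custom facts synced from the modules
    facter -p ha_mysql_ready

    # From anywhere: the value Foreman last recorded for that host
    curl -s -k -u admin:changeme -H 'Accept: application/json' \
        https://foreman.example.com/api/v2/hosts/controller1.example.com/facts \
        | python -m json.tool | grep -i mysql_ready

If the node says true while Foreman keeps showing the stale value after
the next report comes in, that points at caching on the agent/upload
side rather than at the fact itself.)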
Anyway, if that were _not_ an issue, it would be simple enough to add a controller_ready fact or similar to quickstack. I am still not sure if this is the best approach, but it is definitely feasible, we have all the information available to us to report back such a thing. -j >If none of the above, any other suggestions? > >Cheers >Martyn From lars at redhat.com Mon May 12 13:50:23 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Mon, 12 May 2014 09:50:23 -0400 Subject: [Rdo-list] keystone error when run packstack In-Reply-To: <5370C9A1.7080507@163.com> References: <5370C9A1.7080507@163.com> Message-ID: <20140512135023.GA4056@redhat.com> On Mon, May 12, 2014 at 09:16:17PM +0800, HuJun wrote: > I met a keystone error when run packstack on Fedora 20, It seems like > parameters error. Can you confirm which version of "openstack-packstack" and "python-keystoneclient" are installed on your system? rpm -q openstack-packstack python-keystoneclient Thanks, -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From jhu_com at 163.com Mon May 12 14:05:18 2014 From: jhu_com at 163.com (HuJun) Date: Mon, 12 May 2014 22:05:18 +0800 Subject: [Rdo-list] keystone error when run packstack In-Reply-To: <20140512135023.GA4056@redhat.com> References: <5370C9A1.7080507@163.com> <20140512135023.GA4056@redhat.com> Message-ID: <5370D51E.5010206@163.com> [root at cloudf ~]# rpm -q openstack-packstack python-keystoneclient openstack-packstack-2013.2.1-0.29.dev956.fc20.noarch python-keystoneclient-0.7.1-2.fc20.noarch On 12/05/14 21:50, Lars Kellogg-Stedman wrote: > On Mon, May 12, 2014 at 09:16:17PM +0800, HuJun wrote: >> I met a keystone error when run packstack on Fedora 20, It seems like >> parameters error. > Can you confirm which version of "openstack-packstack" and > "python-keystoneclient" are installed on your system? > > rpm -q openstack-packstack python-keystoneclient > > Thanks, > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at redhat.com Mon May 12 14:37:34 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Mon, 12 May 2014 10:37:34 -0400 Subject: [Rdo-list] keystone error when run packstack In-Reply-To: <5370D51E.5010206@163.com> References: <5370C9A1.7080507@163.com> <20140512135023.GA4056@redhat.com> <5370D51E.5010206@163.com> Message-ID: <20140512143734.GC4056@redhat.com> On Mon, May 12, 2014 at 10:05:18PM +0800, HuJun wrote: > [root at cloudf ~]# rpm -q openstack-packstack python-keystoneclient > openstack-packstack-2013.2.1-0.29.dev956.fc20.noarch > python-keystoneclient-0.7.1-2.fc20.noarch That looks like the version of packstack that's currently in F20, not the one from the RDO repositories. If you want to install RDO, start here: http://openstack.redhat.com/Quickstart There you'll find instructions for enabling the RDO repositories. The version of packstack in the current (Icehouse) RDO repository is: # rpm -q openstack-packstack openstack-packstack-2014.1.1-0.9.dev1055.fc21.noarch If you're looking to install RDO Havana...you may need to wait a bit, because it looks like the repository configs from the RDO Havana "rdo-release" package are currently broken. -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From pbrady at redhat.com Mon May 12 16:43:56 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Mon, 12 May 2014 17:43:56 +0100 Subject: [Rdo-list] keystone error when run packstack In-Reply-To: <20140512143734.GC4056@redhat.com> References: <5370C9A1.7080507@163.com> <20140512135023.GA4056@redhat.com> <5370D51E.5010206@163.com> <20140512143734.GC4056@redhat.com> Message-ID: <5370FA4C.4080109@redhat.com> On 05/12/2014 03:37 PM, Lars Kellogg-Stedman wrote: > On Mon, May 12, 2014 at 10:05:18PM +0800, HuJun wrote: >> [root at cloudf ~]# rpm -q openstack-packstack python-keystoneclient >> openstack-packstack-2013.2.1-0.29.dev956.fc20.noarch >> python-keystoneclient-0.7.1-2.fc20.noarch > > That looks like the version of packstack that's currently in F20, not > the one from the RDO repositories. If you want to install RDO, start > here: > > http://openstack.redhat.com/Quickstart > > There you'll find instructions for enabling the RDO repositories. The > version of packstack in the current (Icehouse) RDO repository is: > > # rpm -q openstack-packstack > openstack-packstack-2014.1.1-0.9.dev1055.fc21.noarch > > If you're looking to install RDO Havana...you may need to wait a bit, > because it looks like the repository configs from the RDO Havana > "rdo-release" package are currently broken. I think you're referring to the fact that you can't install the RDO Havana rdo-release.rpm on Fedora 20. This is expected and documented, as Havana for Fedora 20 should currently be consumed from the official Fedora repositories. thanks, P?draig. From bderzhavets at hotmail.com Mon May 12 17:02:56 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Mon, 12 May 2014 13:02:56 -0400 Subject: [Rdo-list] keystone error when run packstack - What's going to happen with currently running Havana on F20 ? In-Reply-To: <5370FA4C.4080109@redhat.com> References: <5370C9A1.7080507@163.com> <20140512135023.GA4056@redhat.com>,<5370D51E.5010206@163.com> <20140512143734.GC4056@redhat.com>,<5370FA4C.4080109@redhat.com> Message-ID: If currently Controller && Compute nodes are running Havana on F20 boxes should I expect future `yum update` to make painless upgrade to IceHouse release or system will be broken or frozen ? 
Current status on Controller:- [root at dfw02 boris(keystone_admin)]$ rpm -qa | grep openstack openstack-nova-api-2013.2.3-1.fc20.noarch openstack-keystone-2013.2.3-2.fc20.noarch openstack-glance-2013.2.1-1.fc20.noarch openstack-cinder-2013.2.3-1.fc20.noarch openstack-nova-compute-2013.2.3-1.fc20.noarch openstack-nova-conductor-2013.2.3-1.fc20.noarch openstack-nova-common-2013.2.3-1.fc20.noarch openstack-nova-cells-2013.2.3-1.fc20.noarch openstack-nova-2013.2.3-1.fc20.noarch openstack-nova-network-2013.2.3-1.fc20.noarch python-django-openstack-auth-1.1.5-1.fc20.noarch openstack-dashboard-theme-2014.1-1.fc20.noarch openstack-utils-2013.2-2.fc20.noarch openstack-nova-console-2013.2.3-1.fc20.noarch openstack-ceilometer-compute-2013.2.3-1.fc20.noarch openstack-nova-objectstore-2013.2.3-1.fc20.noarch openstack-nova-scheduler-2013.2.3-1.fc20.noarch openstack-neutron-openvswitch-2013.2.2-2.fc20.noarch openstack-dashboard-2014.1-1.fc20.noarch openstack-nova-cert-2013.2.3-1.fc20.noarch openstack-nova-novncproxy-2013.2.3-1.fc20.noarch openstack-ceilometer-common-2013.2.3-1.fc20.noarch openstack-neutron-2013.2.2-2.fc20.noarch >This is expected and documented, > as Havana for Fedora 20 should currently be consumed from the > official Fedora repositories. > > thanks, > P?draig. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From pbrady at redhat.com Mon May 12 18:03:33 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Mon, 12 May 2014 19:03:33 +0100 Subject: [Rdo-list] keystone error when run packstack - What's going to happen with currently running Havana on F20 ? In-Reply-To: References: <5370C9A1.7080507@163.com> <20140512135023.GA4056@redhat.com>, <5370D51E.5010206@163.com> <20140512143734.GC4056@redhat.com>, <5370FA4C.4080109@redhat.com> Message-ID: <53710CF5.9090608@redhat.com> On 05/12/2014 06:02 PM, Boris Derzhavets wrote: > If currently Controller && Compute nodes are running Havana on F20 boxes > should I expect future `yum update` to make painless upgrade to IceHouse > release or system will be broken or frozen ? > Current status on Controller:- > > [root at dfw02 boris(keystone_admin)]$ rpm -qa | grep openstack > openstack-nova-api-2013.2.3-1.fc20.noarch ... > openstack-dashboard-theme-2014.1-1.fc20.noarch > openstack-dashboard-2014.1-1.fc20.noarch I see you've backported Icehouse horizon to avoid the Firefox 29 issue. Anyway, the update between major releases of OpenStack requires an explicit action, which is to install the appropriate RDO release rpm. For details on the H -> I update process see: http://openstack.redhat.com/Upgrading_RDO_To_Icehouse thanks, P?draig. From bderzhavets at hotmail.com Mon May 12 18:26:24 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Mon, 12 May 2014 14:26:24 -0400 Subject: [Rdo-list] keystone error when run packstack - What's going to happen with currently running Havana on F20 ? 
In-Reply-To: <53710CF5.9090608@redhat.com> References: <5370C9A1.7080507@163.com> <20140512135023.GA4056@redhat.com>,<5370D51E.5010206@163.com> <20140512143734.GC4056@redhat.com>,<5370FA4C.4080109@redhat.com> , <53710CF5.9090608@redhat.com> Message-ID: Please, view this question at ask.openstack.org :- https://ask.openstack.org/en/question/29460/attempt-to-upgrade-aio-havana-to-icehouse-on-centos-65-vm-on-libvirts-subnet/ > For details on the H -> I update process see: > http://openstack.redhat.com/Upgrading_RDO_To_Icehouse > > thanks, > P?draig. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Mon May 12 18:40:51 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Mon, 12 May 2014 14:40:51 -0400 Subject: [Rdo-list] keystone error when run packstack - What's going to happen with currently running Havana on F20 ? (2) In-Reply-To: <53710CF5.9090608@redhat.com> References: <5370C9A1.7080507@163.com> <20140512135023.GA4056@redhat.com>,<5370D51E.5010206@163.com> <20140512143734.GC4056@redhat.com>,<5370FA4C.4080109@redhat.com> , <53710CF5.9090608@redhat.com> Message-ID: Sorry for repeat ( I fixed the link for easy use). Please, view this question at ask.openstack.org :- https://ask.openstack.org/en/question/29460/attempt-to-upgrade-aio-havana-to-icehouse-on-centos-65-vm-on-libvirts-subnet/ > Anyway, the update between major releases of OpenStack > requires an explicit action, which is to install the > appropriate RDO release rpm. > > For details on the H -> I update process see: > http://openstack.redhat.com/Upgrading_RDO_To_Icehouse > > thanks, > P?draig. -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.sutcliffe at gmail.com Mon May 12 20:20:40 2014 From: greg.sutcliffe at gmail.com (Greg Sutcliffe) Date: Mon, 12 May 2014 21:20:40 +0100 Subject: [Rdo-list] [foreman-dev] [OFI] Astapor, Foreman, Staypuft interaction In-Reply-To: <5370C7D9.5070006@redhat.com> References: <5370C7D9.5070006@redhat.com> Message-ID: On 12 May 2014 14:08, Martyn Taylor wrote: > 1. Does there currently exist anyway to consistently detect the status of a > role/list of classes within Foreman for Astapor classes that we can utilize? > -. If so can we do this without knowing the implementation details of the > Astapor puppet modules? (We do not want to, for example, look for class > specific facts in foreman, since these vary between classes and may change > in Astapor)? This may be a single question, but it can be interpreted two ways, and thus has two answers. If you mean "Does Puppet report the status of a class" the answer is yes. A given puppet report will either contain active changes, or report "no changes". If you have the latter, you know all your classes have been checked, and have been found to be in a consistent state. If you get the former, then you have lines of the form "Stage[main]/Class::Subclass/File[/etc/foo]: blah" containing the class name - again, if there are no lines matching your class, it must be consistent. But all of this is based on the last report - it's not live. If you mean "is there an API I can query for the state of a class" then the answer is no - you can only query the reports. > 2. If not 1. Is is possible to add something to the puppet modules to > explicitly show that a class/Hostgroup is complete? I am thinking something > along the lines of reporting a "Ready" flag back to foreman. Potentially, yes. 
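(To make the report querying from my first answer concrete, something
along these lines should work against a stock Foreman -- the host name
and credentials are placeholders, and Class::Subclass is the same
stand-in class name as above:

    # Pull the host's most recent report and look for resource lines
    # belonging to the class you care about; no matches plus
    # "failed": 0 in the status block means that class had nothing
    # left to change on the last run.
    curl -s -k -u admin:changeme -H 'Accept: application/json' \
        https://foreman.example.com/api/v2/hosts/controller1.example.com/reports/last \
        | python -m json.tool | grep 'Class::Subclass'

Bear in mind it is still based on the last report, so it is only as
fresh as the most recent agent run.)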
If the querying of reports above is not suitable, then you have three further options, as I see it. The first two involve the fact that Foreman's report API is pure JSON - reports don't have to come from Puppet. So you could either (a) have a chained Exec resource which fires a report in, or (b) you could add a custom report processor to the deployed puppetmaster that looks for specific things in the data recieved from the client and then sends them to Foreman Third, if you can't make your data fit the report format in Foreman, or just want to hit a simpler URL, then your plugin could easily add a new route to the Foreman API which either the Exec or report processort from (a) or (b) could hit instead of uploading a report. That might be simpler in the short term, but be careful of security when implementing new API endpoints. HTH, Greg From bderzhavets at hotmail.com Mon May 12 20:29:43 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Mon, 12 May 2014 16:29:43 -0400 Subject: [Rdo-list] keystone error when run packstack - What's going to happen with currently running Havana on F20 ?(3) In-Reply-To: <53710CF5.9090608@redhat.com> References: <5370C9A1.7080507@163.com> <20140512135023.GA4056@redhat.com>,<5370D51E.5010206@163.com> <20140512143734.GC4056@redhat.com>,<5370FA4C.4080109@redhat.com> , <53710CF5.9090608@redhat.com> Message-ID: https://ask.openstack.org/en/question/28659/failure-login-to-dashboard-on-f20-havana-controller-after-recent-firefox-update-up-to-290-5/ This post was downgraded 12 hr ago. I did backport F21 packages to F20, to keep Horizon alive, nothing else. > For details on the H -> I update process see: > http://openstack.redhat.com/Upgrading_RDO_To_Icehouse > > thanks, > P?draig. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jhu_com at 163.com Tue May 13 06:46:51 2014 From: jhu_com at 163.com (HuJun) Date: Tue, 13 May 2014 14:46:51 +0800 Subject: [Rdo-list] keystone error when run packstack In-Reply-To: <20140512143734.GC4056@redhat.com> References: <5370C9A1.7080507@163.com> <20140512135023.GA4056@redhat.com> <5370D51E.5010206@163.com> <20140512143734.GC4056@redhat.com> Message-ID: <5371BFDB.5080706@163.com> hi Lars: I used openstack-packstack-2014.1.1-0.9.dev1055.fc21.noarch, and found a new issue. 
[root at ostack ~]# packstack --answer-file=packstack-answers-20140513-012210.txt Welcome to Installer setup utility Installing: Clean Up [ DONE ] Setting up ssh keys [ DONE ] Discovering hosts' details [ DONE ] Adding pre install manifest entries [ DONE ] Adding MySQL manifest entries [ DONE ] Adding AMQP manifest entries [ DONE ] Adding Keystone manifest entries [ DONE ] Adding Glance Keystone manifest entries [ DONE ] Adding Glance manifest entries [ DONE ] Installing dependencies for Cinder [ DONE ] Adding Cinder Keystone manifest entries [ DONE ] Adding Cinder manifest entries [ DONE ] Checking if the Cinder server has a cinder-volumes vg[ DONE ] Adding Nova API manifest entries [ DONE ] Adding Nova Keystone manifest entries [ DONE ] Adding Nova Cert manifest entries [ DONE ] Adding Nova Conductor manifest entries [ DONE ] Adding Nova Compute manifest entries [ DONE ] Adding Nova Scheduler manifest entries [ DONE ] Adding Nova VNC Proxy manifest entries [ DONE ] Adding Nova Common manifest entries [ DONE ] Adding Openstack Network-related Nova manifest entries[ DONE ] Adding Neutron API manifest entries [ DONE ] Adding Neutron Keystone manifest entries [ DONE ] Adding Neutron L3 manifest entries [ DONE ] Adding Neutron L2 Agent manifest entries [ DONE ] Adding Neutron DHCP Agent manifest entries [ DONE ] Adding Neutron LBaaS Agent manifest entries [ DONE ] Adding Neutron Metadata Agent manifest entries [ DONE ] Adding OpenStack Client manifest entries [ DONE ] Adding Horizon manifest entries [ DONE ] Adding Provisioning manifest entries [ DONE ] Adding MongoDB manifest entries [ DONE ] Adding Ceilometer manifest entries [ DONE ] Adding Ceilometer Keystone manifest entries [ DONE ] Adding Nagios server manifest entries [ DONE ] Adding Nagios host manifest entries [ DONE ] Adding post install manifest entries [ DONE ] Preparing servers [ DONE ] Installing Dependencies [ DONE ] Copying Puppet modules and manifests [ DONE ] Applying 147.2.147.82_prescript.pp 147.2.147.82_prescript.pp: [ DONE ] Applying 147.2.147.82_mysql.pp Applying 147.2.147.82_amqp.pp 147.2.147.82_mysql.pp: [ DONE ] 147.2.147.82_amqp.pp: [ DONE ] Applying 147.2.147.82_keystone.pp Applying 147.2.147.82_glance.pp Applying 147.2.147.82_cinder.pp 147.2.147.82_keystone.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 147.2.147.82_keystone.pp Error: /Stage[main]/Neutron::Keystone::Auth/Keystone_user[neutron]: Could not evaluate: Execution of '/usr/bin/keystone --os-auth-url http://127.0.0.1:35357/v2.0/ token-get' returned 1: The request you have made requires authentication. (HTTP 401) You will find full trace in log /var/tmp/packstack/20140513-013406-rpdBVm/manifests/147.2.147.82_keystone.pp.log Please check log file /var/tmp/packstack/20140513-013406-rpdBVm/openstack-setup.log for more information Additional information: * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components. * Did not create a cinder volume group, one already existed * File /root/keystonerc_admin has been created on OpenStack client host 147.2.147.82. To use the command line tools you need to source the file. * To access the OpenStack Dashboard browse to http://147.2.147.82/dashboard . Please, find your login credentials stored in the keystonerc_admin in your home directory. 
* To use Nagios, browse to http://147.2.147.82/nagios username : nagiosadmin, password : 0c1a53e87c664791 [root at ostack ~]# yum repolist Loaded plugins: fastestmirror, langpacks, priorities, refresh-packagekit Loading mirror speeds from cached hostfile * fedora: mirrors.vinahost.vn * updates: mirrors.vinahost.vn 199 packages excluded due to repository priority protections repo id repo name status fedora/20/x86_64 Fedora 20 - x86_64 38,493+104 openstack-icehouse/20 OpenStack Icehouse Repository 709+175 puppetlabs-deps/20/x86_64 Puppet Labs Dependencies - x86_64 5 puppetlabs-products/20/x86_64 Puppet Labs Products - x86_64 61 updates/20/x86_64 Fedora 20 - x86_64 - Updates 16,329+95 repolist: 55,597 [root at ostack ~]# rpm -qa | grep openstack openstack-packstack-puppet-2014.1.1-0.9.dev1055.fc21.noarch openstack-keystone-2014.1-2.fc21.noarch openstack-glance-2014.1-2.fc21.noarch openstack-puppet-modules-2014.1-9.1.fc21.noarch openstack-utils-2014.1-1.fc21.noarch openstack-packstack-2014.1.1-0.9.dev1055.fc21.noarch On 12/05/14 22:37, Lars Kellogg-Stedman wrote: > On Mon, May 12, 2014 at 10:05:18PM +0800, HuJun wrote: >> [root at cloudf ~]# rpm -q openstack-packstack python-keystoneclient >> openstack-packstack-2013.2.1-0.29.dev956.fc20.noarch >> python-keystoneclient-0.7.1-2.fc20.noarch > That looks like the version of packstack that's currently in F20, not > the one from the RDO repositories. If you want to install RDO, start > here: > > http://openstack.redhat.com/Quickstart > > There you'll find instructions for enabling the RDO repositories. The > version of packstack in the current (Icehouse) RDO repository is: > > # rpm -q openstack-packstack > openstack-packstack-2014.1.1-0.9.dev1055.fc21.noarch > > If you're looking to install RDO Havana...you may need to wait a bit, > because it looks like the repository configs from the RDO Havana > "rdo-release" package are currently broken. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.barba at gmail.com Tue May 13 10:54:22 2014 From: victor.barba at gmail.com (Victor Barba) Date: Tue, 13 May 2014 12:54:22 +0200 Subject: [Rdo-list] enable dhcp on public network Message-ID: Hi, This is my first post. Then forgive me if this is off-topic for this list and ignore it :) I need to assign public ips directly to my instances (not using floating ips). The packstack installation out-of-the-box do not enable dhcp on the public_net and then the ips are not assigned to the instances. How could I solve this? To be clear I need this: --------- eth0 (192.168.66.1) | | (br0 - 192.168.55.1) ----------------- VM (192.168.55.2) VM (192.168.55.3) VM get ip by dhcp and gw is 192.168.55.1 eth0 and br0 have ip_forwarding enabled. Thank you in advance. Regards, Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Tue May 13 11:42:47 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 13 May 2014 17:12:47 +0530 Subject: [Rdo-list] Launching a Nova instance results in "NovaException: Unexpected vif_type=binding_failed" Message-ID: <20140513114247.GA15336@tesla.pnq.redhat.com> Setup: A 2-node install (in virtual machines w/ nested virt) with IceHouse (Neutron w/ ML2+OVS+GRE) on Fedora 20, but OpenStack IceHouse packages are from Rawhide (Version details below). Problem ------- Attempt to launch a Nova instance as a user tenant results in this trace back saying "Unexpected vif_type". 
Interesting thing is, the instance goes into ACTIVE when I launch the Nova instance with admin tenant. 2014-05-13 07:06:32.123 29455 ERROR nova.compute.manager [req-402f21c1-98ed-4600-96b9-84efdb9c823d cb68d099e78d490ab0adf4030881153b 0a6eb2259ca142e7a80541db10835e71] [instance: 950de10f-4368-4498-b46a-b1595d057e 38] Error: Unexpected vif_type=binding_failed 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] Traceback (most recent call last): 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1311, in _build_instance 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] set_access_ip=set_access_ip) 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 399, in decorated_function 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] return function(self, context, *args, **kwargs) 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1723, in _spawn 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] LOG.exception(_('Instance failed to spawn'), instance=instance) 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__ 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] six.reraise(self.type_, self.value, self.tb) 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1720, in _spawn 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] block_device_info) 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2250, in spawn 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] write_to_disk=True) 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3431, in to_xml 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] disk_info, rescue, block_device_info) 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3247, in get_guest_config 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] flavor) 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 384, in get_config 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] _("Unexpected vif_type=%s") % vif_type) 
2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] NovaException: Unexpected vif_type=binding_failed 2014-05-13 07:06:32.123 29455 TRACE nova.compute.manager [instance: 950de10f-4368-4498-b46a-b1595d057e38] 2014-05-13 07:06:32.846 29455 ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: Unexpected vif_type=binding_failed Notes/Observations/diagnostics ------------------------------ - I can reach the inter-webs from the router namespace, but not DHCP namespace. - In nova.conf, for 'libvirt_vif_driver', I tried (a) both the below options, separately, , also I tried commenting it out, an upstream Nova commit[1] from 4APR2014 marks it as deprecated. libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver - Some diagnostics are here[2] - From some debugging (and from the diagnoistics above), I guess a br-tun is missing Related ------- I see a related Neutron bug[3], but that's not the root cause of this bug. Versions -------- Nova, Neutron, libvirt, QEMU, OpenvSwitch versions: openstack-nova-compute-2014.1-2.fc21.noarch openstack-neutron-2014.1-11.fc21.noarch libvirt-daemon-kvm-1.1.3.5-1.fc20.x86_64 qemu-system-x86-1.6.2-4.fc20.x86_64 openvswitch-2.0.1-1.fc20.x86_64 [1] https://git.openstack.org/cgit/openstack/nova/commit/?id=9f6070e194504cc2ca2b7f2a2aabbf91c6b81897 [2] https://gist.github.com/kashyapc/0d4869796c7ea79bfb89 [3] https://bugs.launchpad.net/neutron/+bug/1244255 nova.conf and ml2_conf.ini -------------------------- nova.conf: $ cat /etc/nova/nova.conf | grep -v ^$ | grep -v ^# [DEFAULT] logdir = /var/log/nova state_path = /var/lib/nova lock_path = /var/lib/nova/tmp volumes_dir = /etc/nova/volumes dhcpbridge = /usr/bin/nova-dhcpbridge dhcpbridge_flagfile = /etc/nova/nova.conf force_dhcp_release = True injected_network_template = /usr/share/nova/interfaces.template libvirt_nonblocking = True libvirt_use_virtio_for_bridges=True libvirt_inject_partition = -1 #libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver #libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver #iscsi_helper = tgtadm sql_connection = mysql://nova:nova at 192.169.142.97/nova compute_driver = libvirt.LibvirtDriver libvirt_type=qemu rootwrap_config = /etc/nova/rootwrap.conf auth_strategy = keystone firewall_driver=nova.virt.firewall.NoopFirewallDriver enabled_apis = ec2,osapi_compute,metadata my_ip=192.169.142.168 network_api_class = nova.network.neutronv2.api.API neutron_url = http://192.169.142.97:9696 neutron_auth_strategy = keystone neutron_admin_tenant_name = services neutron_admin_username = neutron neutron_admin_password = fedora neutron_admin_auth_url = http://192.169.142.97:35357/v2.0 linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver firewall_driver = nova.virt.firewall.NoopFirewallDriver security_group_api = neutron rpc_backend = nova.rpc.impl_kombu rabbit_host = 192.169.142.97 rabbit_port = 5672 rabbit_userid = guest rabbit_password = fedora glance_host = 192.169.142.97 [keystone_authtoken] auth_uri = http://192.169.142.97:5000 admin_tenant_name = services admin_user = nova admin_password = fedora auth_host = 192.169.142.97 auth_port = 35357 auth_protocol = http signing_dirname = /tmp/keystone-signing-nova ml2 plugin: $ cat /etc/neutron/plugin.ini | grep -v ^$ | grep -v ^# [ml2] type_drivers = gre tenant_network_types = gre mechanism_drivers = openvswitch [ml2_type_flat] [ml2_type_vlan] [ml2_type_gre] 
tunnel_id_ranges = 1:1000 [ml2_type_vxlan] [securitygroup] firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver enable_security_group = True What am I missing? I'm still investigating by playing with these config settings enable_tunneling = True integration_bridge = br-int tunnel_bridge = br-tun bridge_mappings = ens2:br-ex in ml2_conf.ini -- /kashyap From ALLAN.L.ST.GEORGE at leidos.com Tue May 13 13:17:25 2014 From: ALLAN.L.ST.GEORGE at leidos.com (St. George, Allan L.) Date: Tue, 13 May 2014 13:17:25 +0000 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: References: <20140506083901.GA12668@tesla.redhat.com> <20140508034830.GB26928@tesla.redhat.com> Message-ID: Great, I?m glad it helped. I wanted my spawn to automatically join/report to foreman, which is why I included it on my image. I?m not familiar with RDO docs, but I wouldn?t have any problem with the document being posted. V/R, Allan From: El?as David [mailto:elias.moreno.tec at gmail.com] Sent: Saturday, May 10, 2014 12:59 PM To: St. George, Allan L. Cc: Kashyap Chamarthy; rdo-list at redhat.com Subject: Re: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse Hey, thanks! this method indeed worked nicely with CentOS 6.5 image in RDO Icehouse! :D I didn't do the puppet part since I've no puppet server to test but it wasn't needed, also I used virt-sparcify instead of step 13 qemu-image convert I also tried the oz-install method but it failed everytime with the following exception: "raise oz.OzException.OzException("No disk activity in %d seconds, failing. %s" % (inactivity_timeout, screenshot_text))" No matter the install type (url or iso) and didn't matter creating this in different machines with different specs (more ram, cpu, fast disks...) Anyhow, thank you all for the help and tips! very appreciated ;) Any chance to include this method in RDO docs? On Fri, May 9, 2014 at 8:34 AM, St. George, Allan L. > wrote: I'm sure someone could make this better, but this is what I've been using and it works well: V/R, Allan 1. Create disk image with QCOW2 format qemu-img create -f qcow2 /tmp/centos-6.5-working.qcow2 10G 2. Install CentOS; Install onto a single ext4 partition mounted to ?/? (no /boot, /swap, etc.) virt-install --virt-type {kvm or qemu} --name centos-6.5 --ram 1024 \ --cdrom=/tmp/CentOS-6.5-x86_64-minimal.iso \ --disk /tmp/centos-6.5-working.qcow2,format=qcow2 \ --network network=default \ --graphics vnc,listen=0.0.0.0 --noautoconsole \ --os-type=linux --os-variant=rhel6 3. Eject the disk and reboot the virtual machine virsh attach-disk --type cdrom --mode readonly centos-6.5 "" hdc virsh destroy centos-6.5 virsh start centos-6.5 4. After reboot, login into your new image and modify '/etc/sysconfig/network-scripts/ifcfg-eth0' to look like this DEVICE="eth0" BOOTPROTO="dhcp" NM_CONTROLLED="no" ONBOOT="yes" TYPE="Ethernet" 5. Add EPEL repository and update OS rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm rpm -ivh https://yum.puppetlabs.com/el/6/products/x86_64/puppetlabs-release-6-7.noarch.rpm 6. Update yum and install cloud-init yum -y update yum install cloud-utils cloud-init parted git cd /tmp git clone https://github.com/flegmatik/linux-rootfs-resize.git (installed in place of cloud-initramfs-tools) cd linux-rootfs-resize ./install Edit /etc/cloud/cloud.cfg Add the line: user: ec2-user Under ?cloud_init_modules?, add: - resolv-conf 7. 
Install and configure puppet yum install puppet edit /etc/hosts and add entry for foreman edit /etc/puppet/puppet.conf and add the following lines: [main] pluginsync = true [agent] runinterval=1800 server = {server.domain} chkconfig puppet on 8. Enable the instance to access the metadata service echo "NOZEROCONF=yes" >> /etc/sysconfig/network 9. Configure /etc/ssh/sshd_config Uncomment the following lines: PermitRootLogin yes PasswordAuthentication yes 10. Power down your virtual Centos machine 11. Clean up the virtual machine of MAC address, etc. virt-sysprep -d centos-6.5 12. Undefine the libvirt domain virsh undefine centos-6.5 13. Compress QCOW2 image with qemu-img convert -c /tmp/centos-6.5-working.qcow2 -O qcow2 /tmp/centos.qcow2 Image /tmp/centos-6.5.qcow2 is now ready for upload to Openstack -----Original Message----- From: Kashyap Chamarthy [mailto:kchamart at redhat.com] Sent: Wednesday, May 07, 2014 11:49 PM To: St. George, Allan L. Cc: rdo-list at redhat.com; El?as David Subject: Re: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse On Wed, May 07, 2014 at 02:31:43PM +0000, St. George, Allan L. wrote: > I haven?t had the time to work with Icehouse yet, but I have outlined > instruction that are used to create Havana CentOS images that resize > automatically upon spawning via linux-rootfs-resize. > > If interested, I?ll forward it along. That'd be useful. It'd be even better if you could make a quick RDO wiki page[1] that'll be indexed by the search engines. [1] http://openstack.redhat.com/ PS: If you're a Markdown user, you can convert Markdown -> WikiMedia (RDO uses WikiMedia for wiki) trivially like this: $ pandoc -f markdown -t Mediawiki foo.md -o foo.wiki > > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] > On Behalf Of El?as David Sent: Tuesday, May 06, 2014 12:57 PM To: > Kashyap Chamarthy Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] > Automatic resizing of root partitions in RDO Icehouse > > > Hi thanks for the answers! > > But how is the support right now in OpenStack with centos/fedora > images regarding the auto resizing during boot? does the disk size set > in the flavor is respected or not, or does it work only with fedora > and newer kernels than what CentOS uses...things like that is what I'm > looking for On May 6, 2014 4:09 AM, "Kashyap Chamarthy" > >> wrote: On Mon, May > 05, 2014 at 10:22:26PM -0430, El?as David wrote: > > Hello all, > > > > I would like to know what's the current state of auto resizing the > > root partition in current RDO Icehouse, more specifically, CentOS > > and Fedora images. > > > > I've read many versions of the story so I'm not really sure what > > works and what doesn't. > > > > For instance, I've read that currently, auto resizing of a CentOS > > 6.5 image for would require the filesystem to be ext3 and I've also > > read that auto resizing currently works only with kernels >= 3.8, so > > what's really the deal with this currently? > > > > Also, it's as simple as having cloud-init, dracut-modules-growroot > > and cloud-initramfs-tools installed on the image or are there any > > other steps required for the auto resizing to work? > > > I personally find[1] virt-resize (which works the same way on any > images) very useful when I'd like to do resizing, as it works > consistent well. > > I just tried on a Fedora 20 qcow2 cloud image with these below four > commands and their complete output. > > 1. 
Examine the root filesystem size _inside_ the cloud image: > > $ virt-filesystems --long --all -h -a fedora-latest.x86_64.qcow2 > > Name Type VFS Label MBR Size Parent /dev/sda1 > filesystem ext4 _/ - 1.9G - /dev/sda1 partition - > - 83 1.9G /dev/sda /dev/sda device - - - > 2.0G - > > 2. Create a new qcow2 disk of 10G: > > $ qemu-img create -f qcow2 -o preallocation=metadata \ > newdisk.qcow2 10G > > 3. Perform the resize operation: > > $ virt-resize --expand /dev/sda1 fedora-latest.x86_64.qcow2 \ > newdisk.qcow2 Examining fedora-latest.x86_64.qcow2 ... ********** > > Summary of changes: > > /dev/sda1: This partition will be resized from 1.9G to 10.0G. The > filesystem ext4 on /dev/sda1 will be expanded using the > 'resize2fs' method. > > ********** Setting up initial partition table on newdisk.qcow2 ... > Copying /dev/sda1 ... 100% > ????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? > 00:00 Expanding /dev/sda1 using the 'resize2fs' method ... > > Resize operation completed with no errors. Before deleting the > old disk, carefully check that the resized disk boots and works > correctly. > > 4. Examine the root file system size in the new disk (should reflect > correctly): > > $ virt-filesystems --long --all -h -a newdisk.qcow2 Name > Type VFS Label MBR Size Parent /dev/sda1 filesystem > ext4 _/ - 10G - /dev/sda1 partition - - 83 > 10G /dev/sda /dev/sda device - - - 10G - > > > Hope that helps. > > > [1] > > http://kashyapc.com/2013/04/13/resize-a-fedora-19-guest-with-libguestf > s-tools/ > > > > -- /kashyap > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -- /kashyap -- El?as David. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Tue May 13 13:26:54 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Tue, 13 May 2014 09:26:54 -0400 Subject: [Rdo-list] Launching a Nova instance results in "NovaException: Unexpected vif_type=binding_failed" In-Reply-To: <20140513114247.GA15336@tesla.pnq.redhat.com> References: <20140513114247.GA15336@tesla.pnq.redhat.com> Message-ID: Please be aware of https://bugs.launchpad.net/neutron/+bug/1303998 Samples of neutron.conf && ml2_conf.ini here https://bugs.launchpad.net/neutron/+bug/1303998/comments/2 B. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Tue May 13 14:30:26 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 13 May 2014 20:00:26 +0530 Subject: [Rdo-list] Launching a Nova instance results in "NovaException: Unexpected vif_type=binding_failed" In-Reply-To: References: <20140513114247.GA15336@tesla.pnq.redhat.com> Message-ID: <20140513143026.GB15336@tesla.pnq.redhat.com> On Tue, May 13, 2014 at 09:26:54AM -0400, Boris Derzhavets wrote: > Please be aware of > https://bugs.launchpad.net/neutron/+bug/1303998 Thanks, I just commented in there[1]. I should note that, I didn't make the typo error that Phil noted in the bug. I wonder the issue I see merits a different bug of its own. [1] https://bugs.launchpad.net/neutron/+bug/1303998/comments/6 > Samples of neutron.conf && ml2_conf.ini here > https://bugs.launchpad.net/neutron/+bug/1303998/comments/2 > > B. 
> -- /kashyap From mrunge at redhat.com Tue May 13 17:55:11 2014 From: mrunge at redhat.com (Matthias Runge) Date: Tue, 13 May 2014 19:55:11 +0200 Subject: [Rdo-list] keystone error when run packstack - What's going to happen with currently running Havana on F20 ?(3) In-Reply-To: References: <5370C9A1.7080507@163.com> <20140512135023.GA4056@redhat.com> <5370D51E.5010206@163.com> <20140512143734.GC4056@redhat.com> <5370FA4C.4080109@redhat.com> <53710CF5.9090608@redhat.com> Message-ID: <20140513175511.GA29277@turing.berg.ol> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Mon, May 12, 2014 at 04:29:43PM -0400, Boris Derzhavets wrote: > https://ask.openstack.org/en/question/28659/failure-login-to-dashboard-on-f20-havana-controller-after-recent-firefox-update-up-to-290-5/ > > This post was downgraded 12 hr ago. I did backport F21 packages > to F20, to keep Horizon alive, nothing else. > I strictly discourage you to run Horizon from Icehouse against a Havana Cloud. There is at least one change requiring an Icehouse keystone and Icehouse keystoneclient as well. Could you please elaborate, what the issue was for you? I can not see a connection with firefox (or older versiosn here at all!) Best, Matthias - -- Matthias Runge -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJTclx/AAoJEOnz8qQwcaIWkHkH/RApLK5bqX+3I0U91+jJjOcw yJlxB+Oun1uvQEXGzFE7dMV1TrCHOvJ28YEzmzLdJmPjAmXnbv+2uX1Xu1FVGuyH LSpLR4D3bSAzgp1it8PtLDHsg9Lf/qG67lyk3vDa7rw58jGCX44iYIDxu84QhV8c TM0yJIoOa6+lX7/9nGuineOpZeCLT7w9BZa2+tabyZlWNFSvEqV+fjJ/vmwEtELX F8vswgEQ265PymZe+FJ5vxHSXqCLZgTKSg+WyH2kC7boLMZv3TkxUrpnXB45Ri8m U9kFjrFeNA7UnLP2MTBvlqUYdF+qGjnqy/SouGO8LrPVyKdv9c+JEIS7QG3+0VE= =nthB -----END PGP SIGNATURE----- From bderzhavets at hotmail.com Wed May 14 02:12:14 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Tue, 13 May 2014 22:12:14 -0400 Subject: [Rdo-list] keystone error when run packstack - What's going to happen with currently running Havana on F20 ?(3) In-Reply-To: <20140513175511.GA29277@turing.berg.ol> References: <5370C9A1.7080507@163.com> <20140512135023.GA4056@redhat.com>,<5370D51E.5010206@163.com> <20140512143734.GC4056@redhat.com>, <5370FA4C.4080109@redhat.com>, , <53710CF5.9090608@redhat.com>, , <20140513175511.GA29277@turing.berg.ol> Message-ID: Just after firefox update to 29.0-5 I lost login to dashboard. Back ported to F20 :- openstack-dashboard.noarch 0:2014.1-1 python-django-horizon.noarch 0:2014.1-1 It fixed a problem. ---> Package openstack-dashboard.noarch 0:2013.2.3-1.fc20 will be updated ---> Package openstack-dashboard.noarch 0:2014.1-1.fc20 will be an update ---> Package openstack-dashboard-theme.noarch 0:2014.1-1.fc20 will be installed ---> Package python-django-horizon.noarch 0:2013.2.3-1.fc20 will be updated ---> Package python-django-horizon.noarch 0:2014.1-1.fc20 will be an update ---> Package python-django-horizon-doc.noarch 0:2014.1-1.fc20 will be installed For about 2 weeks (after upgrade) dashboard console and Havana Clusters work fine. Thanks. B. > Date: Tue, 13 May 2014 19:55:11 +0200 > From: mrunge at redhat.com > To: rdo-list at redhat.com > Subject: Re: [Rdo-list] keystone error when run packstack - What's going to happen with currently running Havana on F20 ?(3) > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On Mon, May 12, 2014 at 04:29:43PM -0400, Boris Derzhavets wrote: > > https://ask.openstack.org/en/question/28659/failure-login-to-dashboard-on-f20-havana-controller-after-recent-firefox-update-up-to-290-5/ > > > > This post was downgraded 12 hr ago. 
I did backport F21 packages > > to F20, to keep Horizon alive, nothing else. > > > > I strictly discourage you to run Horizon from Icehouse against a Havana > Cloud. There is at least one change requiring an Icehouse keystone and > Icehouse keystoneclient as well. > > Could you please elaborate, what the issue was for you? I can not see a > connection with firefox (or older versiosn here at all!) > > Best, > Matthias > > - -- > Matthias Runge > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1 > > iQEcBAEBAgAGBQJTclx/AAoJEOnz8qQwcaIWkHkH/RApLK5bqX+3I0U91+jJjOcw > yJlxB+Oun1uvQEXGzFE7dMV1TrCHOvJ28YEzmzLdJmPjAmXnbv+2uX1Xu1FVGuyH > LSpLR4D3bSAzgp1it8PtLDHsg9Lf/qG67lyk3vDa7rw58jGCX44iYIDxu84QhV8c > TM0yJIoOa6+lX7/9nGuineOpZeCLT7w9BZa2+tabyZlWNFSvEqV+fjJ/vmwEtELX > F8vswgEQ265PymZe+FJ5vxHSXqCLZgTKSg+WyH2kC7boLMZv3TkxUrpnXB45Ri8m > U9kFjrFeNA7UnLP2MTBvlqUYdF+qGjnqy/SouGO8LrPVyKdv9c+JEIS7QG3+0VE= > =nthB > -----END PGP SIGNATURE----- > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From elias.moreno.tec at gmail.com Wed May 14 02:53:18 2014 From: elias.moreno.tec at gmail.com (=?UTF-8?B?RWzDrWFzIERhdmlk?=) Date: Tue, 13 May 2014 22:23:18 -0430 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: References: <20140506083901.GA12668@tesla.redhat.com> <20140508034830.GB26928@tesla.redhat.com> Message-ID: Hey! I documented it here: http://openstack.redhat.com/Creating_CentOS_and_Fedora_images_ready_for_Openstack Odd thing though, when I was doing the previews all looked ok, upon saving the margins where off :-/ in any case, the doc is there, feel free to add anything I could've missed ;) On Tue, May 13, 2014 at 8:47 AM, St. George, Allan L. < ALLAN.L.ST.GEORGE at leidos.com> wrote: > Great, I?m glad it helped. I wanted my spawn to automatically > join/report to foreman, which is why I included it on my image. > > > > I?m not familiar with RDO docs, but I wouldn?t have any problem with the > document being posted. > > > > V/R, > > > > Allan > > > > *From:* El?as David [mailto:elias.moreno.tec at gmail.com] > *Sent:* Saturday, May 10, 2014 12:59 PM > > *To:* St. George, Allan L. > *Cc:* Kashyap Chamarthy; rdo-list at redhat.com > > *Subject:* Re: [Rdo-list] Automatic resizing of root partitions in RDO > Icehouse > > > > Hey, thanks! this method indeed worked nicely with CentOS 6.5 image in RDO > Icehouse! :D > > > > I didn't do the puppet part since I've no puppet server to test but it > wasn't needed, also I used virt-sparcify instead of step 13 qemu-image > convert > > > > I also tried the oz-install method but it failed everytime with the > following exception: > > > > "raise oz.OzException.OzException("No disk activity in %d seconds, > failing. %s" % (inactivity_timeout, screenshot_text))" > > > > No matter the install type (url or iso) and didn't matter creating this in > different machines with different specs (more ram, cpu, fast disks...) > > > > Anyhow, thank you all for the help and tips! very appreciated ;) > > > > Any chance to include this method in RDO docs? > > > > On Fri, May 9, 2014 at 8:34 AM, St. George, Allan L. < > ALLAN.L.ST.GEORGE at leidos.com> wrote: > > I'm sure someone could make this better, but this is what I've been using > and it works well: > > V/R, > > Allan > > 1. 
Create disk image with QCOW2 format > > qemu-img create -f qcow2 /tmp/centos-6.5-working.qcow2 10G > > 2. Install CentOS; Install onto a single ext4 partition mounted to ?/? (no > /boot, /swap, etc.) > > virt-install --virt-type {kvm or qemu} --name centos-6.5 --ram 1024 \ > --cdrom=/tmp/CentOS-6.5-x86_64-minimal.iso \ > --disk /tmp/centos-6.5-working.qcow2,format=qcow2 \ > --network network=default \ > --graphics vnc,listen=0.0.0.0 --noautoconsole \ > --os-type=linux --os-variant=rhel6 > > 3. Eject the disk and reboot the virtual machine > > virsh attach-disk --type cdrom --mode readonly centos-6.5 "" hdc > virsh destroy centos-6.5 > virsh start centos-6.5 > > 4. After reboot, login into your new image and modify > '/etc/sysconfig/network-scripts/ifcfg-eth0' to look like this > > DEVICE="eth0" > BOOTPROTO="dhcp" > NM_CONTROLLED="no" > ONBOOT="yes" > TYPE="Ethernet" > > 5. Add EPEL repository and update OS > > rpm -ivh > http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > rpm -ivh > https://yum.puppetlabs.com/el/6/products/x86_64/puppetlabs-release-6-7.noarch.rpm > > 6. Update yum and install cloud-init > > yum -y update > yum install cloud-utils cloud-init parted git > cd /tmp > git clone https://github.com/flegmatik/linux-rootfs-resize.git (installed > in place of cloud-initramfs-tools) > cd linux-rootfs-resize > ./install > > Edit /etc/cloud/cloud.cfg > > Add the line: > > user: ec2-user > Under ?cloud_init_modules?, add: > - resolv-conf > > 7. Install and configure puppet > > yum install puppet > edit /etc/hosts and add entry for foreman > edit /etc/puppet/puppet.conf and add the following lines: > > [main] > pluginsync = true > [agent] > runinterval=1800 > server = {server.domain} > chkconfig puppet on > > 8. Enable the instance to access the metadata service > > echo "NOZEROCONF=yes" >> /etc/sysconfig/network > > 9. Configure /etc/ssh/sshd_config > > Uncomment the following lines: > > PermitRootLogin yes > PasswordAuthentication yes > > 10. Power down your virtual Centos machine > > 11. Clean up the virtual machine of MAC address, etc. > > virt-sysprep -d centos-6.5 > > 12. Undefine the libvirt domain > > virsh undefine centos-6.5 > > 13. Compress QCOW2 image with > > qemu-img convert -c /tmp/centos-6.5-working.qcow2 -O qcow2 > /tmp/centos.qcow2 > > > Image /tmp/centos-6.5.qcow2 is now ready for upload to Openstack > > > > -----Original Message----- > From: Kashyap Chamarthy [mailto:kchamart at redhat.com] > Sent: Wednesday, May 07, 2014 11:49 PM > To: St. George, Allan L. > Cc: rdo-list at redhat.com; El?as David > Subject: Re: [Rdo-list] Automatic resizing of root partitions in RDO > Icehouse > > On Wed, May 07, 2014 at 02:31:43PM +0000, St. George, Allan L. wrote: > > I haven?t had the time to work with Icehouse yet, but I have outlined > > instruction that are used to create Havana CentOS images that resize > > automatically upon spawning via linux-rootfs-resize. > > > > If interested, I?ll forward it along. > > That'd be useful. It'd be even better if you could make a quick RDO wiki > page[1] that'll be indexed by the search engines. 
> > > [1] http://openstack.redhat.com/ > > PS: If you're a Markdown user, you can convert Markdown -> WikiMedia (RDO > uses WikiMedia for wiki) trivially like this: > > $ pandoc -f markdown -t Mediawiki foo.md -o foo.wiki > > > > > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] > > On Behalf Of El?as David Sent: Tuesday, May 06, 2014 12:57 PM To: > > Kashyap Chamarthy Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] > > Automatic resizing of root partitions in RDO Icehouse > > > > > > Hi thanks for the answers! > > > > But how is the support right now in OpenStack with centos/fedora > > images regarding the auto resizing during boot? does the disk size set > > in the flavor is respected or not, or does it work only with fedora > > and newer kernels than what CentOS uses...things like that is what I'm > > looking for On May 6, 2014 4:09 AM, "Kashyap Chamarthy" > > > wrote: On Mon, May > > 05, 2014 at 10:22:26PM -0430, El?as David wrote: > > > Hello all, > > > > > > I would like to know what's the current state of auto resizing the > > > root partition in current RDO Icehouse, more specifically, CentOS > > > and Fedora images. > > > > > > I've read many versions of the story so I'm not really sure what > > > works and what doesn't. > > > > > > For instance, I've read that currently, auto resizing of a CentOS > > > 6.5 image for would require the filesystem to be ext3 and I've also > > > read that auto resizing currently works only with kernels >= 3.8, so > > > what's really the deal with this currently? > > > > > > Also, it's as simple as having cloud-init, dracut-modules-growroot > > > and cloud-initramfs-tools installed on the image or are there any > > > other steps required for the auto resizing to work? > > > > > > I personally find[1] virt-resize (which works the same way on any > > images) very useful when I'd like to do resizing, as it works > > consistent well. > > > > I just tried on a Fedora 20 qcow2 cloud image with these below four > > commands and their complete output. > > > > 1. Examine the root filesystem size _inside_ the cloud image: > > > > $ virt-filesystems --long --all -h -a fedora-latest.x86_64.qcow2 > > > > Name Type VFS Label MBR Size Parent /dev/sda1 > > filesystem ext4 _/ - 1.9G - /dev/sda1 partition - > > - 83 1.9G /dev/sda /dev/sda device - - - > > 2.0G - > > > > 2. Create a new qcow2 disk of 10G: > > > > $ qemu-img create -f qcow2 -o preallocation=metadata \ > > newdisk.qcow2 10G > > > > 3. Perform the resize operation: > > > > $ virt-resize --expand /dev/sda1 fedora-latest.x86_64.qcow2 \ > > newdisk.qcow2 Examining fedora-latest.x86_64.qcow2 ... ********** > > > > Summary of changes: > > > > /dev/sda1: This partition will be resized from 1.9G to 10.0G. The > > filesystem ext4 on /dev/sda1 will be expanded using the > > 'resize2fs' method. > > > > ********** Setting up initial partition table on newdisk.qcow2 ... > > Copying /dev/sda1 ... 100% > > ? > ??????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? > ? > > 00:00 Expanding /dev/sda1 using the 'resize2fs' method ... > > > > Resize operation completed with no errors. Before deleting the > > old disk, carefully check that the resized disk boots and works > > correctly. > > > > 4. 
Examine the root file system size in the new disk (should reflect > > correctly): > > > > $ virt-filesystems --long --all -h -a newdisk.qcow2 Name > > Type VFS Label MBR Size Parent /dev/sda1 filesystem > > ext4 _/ - 10G - /dev/sda1 partition - - 83 > > 10G /dev/sda /dev/sda device - - - 10G - > > > > > > Hope that helps. > > > > > > [1] > > > > http://kashyapc.com/2013/04/13/resize-a-fedora-19-guest-with-libguestf > > s-tools/ > > > > > > > > -- /kashyap > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > -- > /kashyap > > > > > > -- > > El?as David. > -- El?as David. -------------- next part -------------- An HTML attachment was scrubbed... URL: From shake.chen at gmail.com Wed May 14 03:00:34 2014 From: shake.chen at gmail.com (Shake Chen) Date: Wed, 14 May 2014 11:00:34 +0800 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: References: <20140506083901.GA12668@tesla.redhat.com> <20140508034830.GB26928@tesla.redhat.com> Message-ID: Hi now no need git clone https://github.com/flegmatik/linux-rootfs-resize.git you just need install dracut-modules-growroot package, it is work. On Wed, May 14, 2014 at 10:53 AM, El?as David wrote: > Hey! > > I documented it here: > http://openstack.redhat.com/Creating_CentOS_and_Fedora_images_ready_for_Openstack > > Odd thing though, when I was doing the previews all looked ok, upon saving > the margins where off :-/ in any case, the doc is there, feel free to add > anything I could've missed ;) > > > > On Tue, May 13, 2014 at 8:47 AM, St. George, Allan L. < > ALLAN.L.ST.GEORGE at leidos.com> wrote: > >> Great, I?m glad it helped. I wanted my spawn to automatically >> join/report to foreman, which is why I included it on my image. >> >> >> >> I?m not familiar with RDO docs, but I wouldn?t have any problem with the >> document being posted. >> >> >> >> V/R, >> >> >> >> Allan >> >> >> >> *From:* El?as David [mailto:elias.moreno.tec at gmail.com] >> *Sent:* Saturday, May 10, 2014 12:59 PM >> >> *To:* St. George, Allan L. >> *Cc:* Kashyap Chamarthy; rdo-list at redhat.com >> >> *Subject:* Re: [Rdo-list] Automatic resizing of root partitions in RDO >> Icehouse >> >> >> >> Hey, thanks! this method indeed worked nicely with CentOS 6.5 image in >> RDO Icehouse! :D >> >> >> >> I didn't do the puppet part since I've no puppet server to test but it >> wasn't needed, also I used virt-sparcify instead of step 13 qemu-image >> convert >> >> >> >> I also tried the oz-install method but it failed everytime with the >> following exception: >> >> >> >> "raise oz.OzException.OzException("No disk activity in %d seconds, >> failing. %s" % (inactivity_timeout, screenshot_text))" >> >> >> >> No matter the install type (url or iso) and didn't matter creating this >> in different machines with different specs (more ram, cpu, fast disks...) >> >> >> >> Anyhow, thank you all for the help and tips! very appreciated ;) >> >> >> >> Any chance to include this method in RDO docs? >> >> >> >> On Fri, May 9, 2014 at 8:34 AM, St. George, Allan L. < >> ALLAN.L.ST.GEORGE at leidos.com> wrote: >> >> I'm sure someone could make this better, but this is what I've been using >> and it works well: >> >> V/R, >> >> Allan >> >> 1. Create disk image with QCOW2 format >> >> qemu-img create -f qcow2 /tmp/centos-6.5-working.qcow2 10G >> >> 2. Install CentOS; Install onto a single ext4 partition mounted to ?/? >> (no /boot, /swap, etc.) 
>> >> virt-install --virt-type {kvm or qemu} --name centos-6.5 --ram 1024 \ >> --cdrom=/tmp/CentOS-6.5-x86_64-minimal.iso \ >> --disk /tmp/centos-6.5-working.qcow2,format=qcow2 \ >> --network network=default \ >> --graphics vnc,listen=0.0.0.0 --noautoconsole \ >> --os-type=linux --os-variant=rhel6 >> >> 3. Eject the disk and reboot the virtual machine >> >> virsh attach-disk --type cdrom --mode readonly centos-6.5 "" hdc >> virsh destroy centos-6.5 >> virsh start centos-6.5 >> >> 4. After reboot, login into your new image and modify >> '/etc/sysconfig/network-scripts/ifcfg-eth0' to look like this >> >> DEVICE="eth0" >> BOOTPROTO="dhcp" >> NM_CONTROLLED="no" >> ONBOOT="yes" >> TYPE="Ethernet" >> >> 5. Add EPEL repository and update OS >> >> rpm -ivh >> http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm >> rpm -ivh >> https://yum.puppetlabs.com/el/6/products/x86_64/puppetlabs-release-6-7.noarch.rpm >> >> 6. Update yum and install cloud-init >> >> yum -y update >> yum install cloud-utils cloud-init parted git >> cd /tmp >> git clone https://github.com/flegmatik/linux-rootfs-resize.git(installed in place of cloud-initramfs-tools) >> cd linux-rootfs-resize >> ./install >> >> Edit /etc/cloud/cloud.cfg >> >> Add the line: >> >> user: ec2-user >> Under ?cloud_init_modules?, add: >> - resolv-conf >> >> 7. Install and configure puppet >> >> yum install puppet >> edit /etc/hosts and add entry for foreman >> edit /etc/puppet/puppet.conf and add the following lines: >> >> [main] >> pluginsync = true >> [agent] >> runinterval=1800 >> server = {server.domain} >> chkconfig puppet on >> >> 8. Enable the instance to access the metadata service >> >> echo "NOZEROCONF=yes" >> /etc/sysconfig/network >> >> 9. Configure /etc/ssh/sshd_config >> >> Uncomment the following lines: >> >> PermitRootLogin yes >> PasswordAuthentication yes >> >> 10. Power down your virtual Centos machine >> >> 11. Clean up the virtual machine of MAC address, etc. >> >> virt-sysprep -d centos-6.5 >> >> 12. Undefine the libvirt domain >> >> virsh undefine centos-6.5 >> >> 13. Compress QCOW2 image with >> >> qemu-img convert -c /tmp/centos-6.5-working.qcow2 -O qcow2 >> /tmp/centos.qcow2 >> >> >> Image /tmp/centos-6.5.qcow2 is now ready for upload to Openstack >> >> >> >> -----Original Message----- >> From: Kashyap Chamarthy [mailto:kchamart at redhat.com] >> Sent: Wednesday, May 07, 2014 11:49 PM >> To: St. George, Allan L. >> Cc: rdo-list at redhat.com; El?as David >> Subject: Re: [Rdo-list] Automatic resizing of root partitions in RDO >> Icehouse >> >> On Wed, May 07, 2014 at 02:31:43PM +0000, St. George, Allan L. wrote: >> > I haven?t had the time to work with Icehouse yet, but I have outlined >> > instruction that are used to create Havana CentOS images that resize >> > automatically upon spawning via linux-rootfs-resize. >> > >> > If interested, I?ll forward it along. >> >> That'd be useful. It'd be even better if you could make a quick RDO wiki >> page[1] that'll be indexed by the search engines. 
>> >> >> [1] http://openstack.redhat.com/ >> >> PS: If you're a Markdown user, you can convert Markdown -> WikiMedia (RDO >> uses WikiMedia for wiki) trivially like this: >> >> $ pandoc -f markdown -t Mediawiki foo.md -o foo.wiki >> >> > >> > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] >> > On Behalf Of El?as David Sent: Tuesday, May 06, 2014 12:57 PM To: >> > Kashyap Chamarthy Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] >> > Automatic resizing of root partitions in RDO Icehouse >> > >> > >> > Hi thanks for the answers! >> > >> > But how is the support right now in OpenStack with centos/fedora >> > images regarding the auto resizing during boot? does the disk size set >> > in the flavor is respected or not, or does it work only with fedora >> > and newer kernels than what CentOS uses...things like that is what I'm >> > looking for On May 6, 2014 4:09 AM, "Kashyap Chamarthy" >> > > wrote: On Mon, May >> > 05, 2014 at 10:22:26PM -0430, El?as David wrote: >> > > Hello all, >> > > >> > > I would like to know what's the current state of auto resizing the >> > > root partition in current RDO Icehouse, more specifically, CentOS >> > > and Fedora images. >> > > >> > > I've read many versions of the story so I'm not really sure what >> > > works and what doesn't. >> > > >> > > For instance, I've read that currently, auto resizing of a CentOS >> > > 6.5 image for would require the filesystem to be ext3 and I've also >> > > read that auto resizing currently works only with kernels >= 3.8, so >> > > what's really the deal with this currently? >> > > >> > > Also, it's as simple as having cloud-init, dracut-modules-growroot >> > > and cloud-initramfs-tools installed on the image or are there any >> > > other steps required for the auto resizing to work? >> > >> > >> > I personally find[1] virt-resize (which works the same way on any >> > images) very useful when I'd like to do resizing, as it works >> > consistent well. >> > >> > I just tried on a Fedora 20 qcow2 cloud image with these below four >> > commands and their complete output. >> > >> > 1. Examine the root filesystem size _inside_ the cloud image: >> > >> > $ virt-filesystems --long --all -h -a fedora-latest.x86_64.qcow2 >> > >> > Name Type VFS Label MBR Size Parent /dev/sda1 >> > filesystem ext4 _/ - 1.9G - /dev/sda1 partition - >> > - 83 1.9G /dev/sda /dev/sda device - - - >> > 2.0G - >> > >> > 2. Create a new qcow2 disk of 10G: >> > >> > $ qemu-img create -f qcow2 -o preallocation=metadata \ >> > newdisk.qcow2 10G >> > >> > 3. Perform the resize operation: >> > >> > $ virt-resize --expand /dev/sda1 fedora-latest.x86_64.qcow2 \ >> > newdisk.qcow2 Examining fedora-latest.x86_64.qcow2 ... ********** >> > >> > Summary of changes: >> > >> > /dev/sda1: This partition will be resized from 1.9G to 10.0G. The >> > filesystem ext4 on /dev/sda1 will be expanded using the >> > 'resize2fs' method. >> > >> > ********** Setting up initial partition table on newdisk.qcow2 ... >> > Copying /dev/sda1 ... 100% >> > ? >> ??????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? >> ? >> > 00:00 Expanding /dev/sda1 using the 'resize2fs' method ... >> > >> > Resize operation completed with no errors. Before deleting the >> > old disk, carefully check that the resized disk boots and works >> > correctly. >> > >> > 4. 
Examine the root file system size in the new disk (should reflect >> > correctly): >> > >> > $ virt-filesystems --long --all -h -a newdisk.qcow2 Name >> > Type VFS Label MBR Size Parent /dev/sda1 filesystem >> > ext4 _/ - 10G - /dev/sda1 partition - - 83 >> > 10G /dev/sda /dev/sda device - - - 10G - >> > >> > >> > Hope that helps. >> > >> > >> > [1] >> > >> > http://kashyapc.com/2013/04/13/resize-a-fedora-19-guest-with-libguestf >> > s-tools/ >> > >> > >> > >> > -- /kashyap >> >> > _______________________________________________ >> > Rdo-list mailing list >> > Rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> >> >> -- >> /kashyap >> >> >> >> >> >> -- >> >> El?as David. >> > > > > -- > El?as David. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -- Shake Chen -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Wed May 14 03:49:48 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 14 May 2014 09:19:48 +0530 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: References: <20140506083901.GA12668@tesla.redhat.com> <20140508034830.GB26928@tesla.redhat.com> Message-ID: <20140514034948.GA6868@tesla.pnq.redhat.com> On Tue, May 13, 2014 at 10:23:18PM -0430, El?as David wrote: > Hey! > > I documented it here: > http://openstack.redhat.com/Creating_CentOS_and_Fedora_images_ready_for_Openstack Thanks. > Odd thing though, when I was doing the previews all looked ok, upon saving > the margins where off :-/ For future reference, I usually write docs in the dead-simple Markdown, and trivially convert it into Mediawiki pages like that: $ pandoc -f markdown -t Mediawiki foo.md -o foo.wiki -- /kashyap From kchamart at redhat.com Wed May 14 05:19:49 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 14 May 2014 10:49:49 +0530 Subject: [Rdo-list] Launching a Nova instance results in "NovaException: Unexpected vif_type=binding_failed" In-Reply-To: <20140513114247.GA15336@tesla.pnq.redhat.com> References: <20140513114247.GA15336@tesla.pnq.redhat.com> Message-ID: <20140514051949.GB6868@tesla.pnq.redhat.com> To those reading the thread, this is the Nova commit[1] that introduced this change: commit 2390857d7ae625dcd18a72b2980f2d862b776623 Author: Dan Smith Date: Wed Feb 19 12:02:45 2014 -0800 Make libvirt wait for neutron to confirm plugging before boot This makes the libvirt driver use the instance event mechanism to wait for neutron to confirm that VIF plugging is complete before actually starting the VM. Here we introduce a new configuration option of vif_plugging_is_fatal which allows us to control whether a failure to hear back from neutron is something that should kill the current operation. In cases where consoles are not provided for the guest, failing is the only reasonable action, but if consoles *are* provided, it may be advantageous to still allow the guest to boot and be accessed out of band. In order to properly reflect the neutron failure in the instance's fault property, this also extends manager to catch and re-raise the VirtualInterfaceCreateException during spawn(). Also, since the oslo.messaging merge, we weren't declaring expected exceptions for run_instance() which this also fixes. 
DocImpact: This requires a neutron that is aware of these events when using nova with neutron, or a configuration that causes nova not to expect/wait for these events. Related to blueprint admin-event-callback-api [1] https://review.openstack.org/#/c/74832/26 -- /kashyap From anandts124 at gmail.com Wed May 14 10:32:23 2014 From: anandts124 at gmail.com (anand ts) Date: Wed, 14 May 2014 16:02:23 +0530 Subject: [Rdo-list] Cinder volume deleting issue Message-ID: Hi all, I have multinode setup on openstack+havana+rdo on CentOS6.5 Issue- Can't able to delete cinder volume. when try to delete through command line [root at cinder ~(keystone_admin)]# cinder list +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ | fe0fdad1-2f8a-4cce-a173-797391dbc7ad | in-use | vol2 | 10 | None | true | b998107b-e708-42a5-8790-4727fed879a3 | +--------------------------------------+--------+--------------+------+-------------+----------+------------------------------ [root at cinder ~(keystone_admin)]# cinder delete fe0fdad1-2f8a-4cce-a173-797391dbc7ad Delete for volume fe0fdad1-2f8a-4cce-a173-797391dbc7ad failed: Invalid volume: Volume status must be available or error, but current status is: in-use (HTTP 400) (Request-ID: req-d9be63f0-476a-4ecd-8655-20491336ee8b) ERROR: Unable to delete any of the specified volumes. when try to delete through dashboard, screen shot attached with the mail. This occured when a cinder volume attached instance is deleted from the database without detaching the volume. Now the volume is in use and attached to NONE. Please find the cinder logs here , http://paste.openstack.org/show/80333/ Any work around to this problem. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Untitled.png Type: image/png Size: 17578 bytes Desc: not available URL: From tshefi at redhat.com Wed May 14 13:23:21 2014 From: tshefi at redhat.com (Tzach Shefi) Date: Wed, 14 May 2014 09:23:21 -0400 (EDT) Subject: [Rdo-list] Cinder volume deleting issue In-Reply-To: References: Message-ID: <1233848453.4299267.1400073801853.JavaMail.zimbra@redhat.com> Hey Anand, How are you? Did you try force deleting it? # cinder force-delete fe0fdad1-2f8a-4cce-a173-797391dbc7ad Tzach ----- Original Message ----- From: "anand ts" To: rdo-list at redhat.com Sent: Wednesday, May 14, 2014 1:32:23 PM Subject: [Rdo-list] Cinder volume deleting issue Hi all, I have multinode setup on openstack+havana+rdo on CentOS6.5 Issue- Can't able to delete cinder volume. 
when try to delete through command line [root at cinder ~(keystone_admin)]# cinder list +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ | fe0fdad1-2f8a-4cce-a173-797391dbc7ad | in-use | vol2 | 10 | None | true | b998107b-e708-42a5-8790-4727fed879a3 | +--------------------------------------+--------+--------------+------+-------------+----------+------------------------------ [root at cinder ~(keystone_admin)]# cinder delete fe0fdad1-2f8a-4cce-a173-797391dbc7ad Delete for volume fe0fdad1-2f8a-4cce-a173-797391dbc7ad failed: Invalid volume: Volume status must be available or error, but current status is: in-use (HTTP 400) (Request-ID: req-d9be63f0-476a-4ecd-8655-20491336ee8b) ERROR: Unable to delete any of the specified volumes. when try to delete through dashboard, screen shot attached with the mail. This occured when a cinder volume attached instance is deleted from the database without detaching the volume. Now the volume is in use and attached to NONE. Please find the cinder logs here , http://paste.openstack.org/show/80333/ Any work around to this problem. _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list From mrunge at redhat.com Wed May 14 13:54:12 2014 From: mrunge at redhat.com (Matthias Runge) Date: Wed, 14 May 2014 15:54:12 +0200 Subject: [Rdo-list] keystone error when run packstack - What's going to happen with currently running Havana on F20 ?(3) In-Reply-To: References: <5370C9A1.7080507@163.com> <20140512135023.GA4056@redhat.com> <5370D51E.5010206@163.com> <20140512143734.GC4056@redhat.com> <5370FA4C.4080109@redhat.com> <53710CF5.9090608@redhat.com> <20140513175511.GA29277@turing.berg.ol> Message-ID: <20140514135412.GB23599@turing.berg.ol> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Tue, May 13, 2014 at 10:12:14PM -0400, Boris Derzhavets wrote: > Just after firefox update to 29.0-5 I lost login to dashboard. > Back ported to F20 :- > openstack-dashboard.noarch 0:2014.1-1 > python-django-horizon.noarch 0:2014.1-1 > > It fixed a problem. > > ---> Package openstack-dashboard.noarch 0:2013.2.3-1.fc20 will be updated > > ---> Package openstack-dashboard.noarch 0:2014.1-1.fc20 will be an update > > ---> Package openstack-dashboard-theme.noarch 0:2014.1-1.fc20 will be installed > > ---> Package python-django-horizon.noarch 0:2013.2.3-1.fc20 will be updated > > ---> Package python-django-horizon.noarch 0:2014.1-1.fc20 will be an update > > ---> Package python-django-horizon-doc.noarch 0:2014.1-1.fc20 will be installed > > For about 2 weeks (after upgrade) dashboard console and Havana Clusters work fine. > Well, I'd be astonished, if e.g changing your password via web ui would work. To verify your assumption, upgrading Firefox to version 29 breaks horizon you could do: yum downgrade firefox If this is true, you couldn't connect to Cloud environments running Havana at all. 
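(For completeness: a quick way to record which side actually changed is to note the exact
package versions before and after the downgrade. A minimal check only, with the package
names assumed from this thread rather than from your install:

  rpm -q firefox openstack-dashboard python-django-horizon

and then retry the dashboard login in a fresh private-browsing window, so that cached
cookies from the old session are ruled out as well.)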
- -- Matthias Runge -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJTc3WEAAoJEOnz8qQwcaIWNAIH/1L3VDlLVKyJqteS365HGBmU pBG5Yj8RhGoJWVsp2RExB6+exPRAirubFxY7kLsNP5qK8mMESFQ1zOeNxSBuL54P IpQyPC1k4f1GwEIWC3oJb8KjEgonCnKx2Oy+Sln//Wol0M5/WQBsNUUyeCJfAJYT jZlhDmBcNm2Q/DVmPMD4r8Nm5WsgSh3nZ78rdacY+C//3GcTcMtgB7/OLKXIWXZW ye5thV8Tw3hiLE9dlO4CcsKP7ybREc153taLlSHqdeHRDkJTf7Cii0V5fZG8/8sZ 9r7qvDoaJ3o2QHi6RY9yoXFN+JUIAhuL5GXwHoyHD8YmJ8KGJUsf9bR5wIjhH8M= =U1LW -----END PGP SIGNATURE----- From mrunge at redhat.com Wed May 14 13:57:06 2014 From: mrunge at redhat.com (Matthias Runge) Date: Wed, 14 May 2014 15:57:06 +0200 Subject: [Rdo-list] Launching a Nova instance results in "NovaException: Unexpected vif_type=binding_failed" In-Reply-To: <20140514051949.GB6868@tesla.pnq.redhat.com> References: <20140513114247.GA15336@tesla.pnq.redhat.com> <20140514051949.GB6868@tesla.pnq.redhat.com> Message-ID: <20140514135706.GC23599@turing.berg.ol> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Wed, May 14, 2014 at 10:49:49AM +0530, Kashyap Chamarthy wrote: > To those reading the thread, this is the Nova commit[1] that introduced > this change: > /me thinks, this is tracked in https://bugzilla.redhat.com/show_bug.cgi?id=1090605 Matthias - -- Matthias Runge -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJTc3YyAAoJEOnz8qQwcaIWSZEIAJxQg/56kKH/Ki0e8EFOBx1b rdqZNsW03XUw7MWcM0TEy37L9VaIG9GDKTYjQS3P4xhGo0XouRSXLhINvFr9tYlU pPLb9FTelHKVdo7IMsxNYxVo5q5BWnaF5bCreRIGs5samN1JAyAnDwskMEtuNs1z IPWhT3ICm8oP9vhm9jkeEPHlSBhd8vA6B0cH345caTutQN8yuog3vIV1hGCdhRMB AUjClHnvuUQvJztl6tZIO0oy/UT+LAs/jvyZKpuXPn5ixnCVV/68JLIkUXFtgrj+ 8s3JISMuc00YO2qy/2PbwFj1yZzfXRmAfbLNDcKpsl3eG0HEnSObZ8C2i+GRm6k= =/iOJ -----END PGP SIGNATURE----- From pbrady at redhat.com Wed May 14 14:44:39 2014 From: pbrady at redhat.com (=?UTF-8?B?UMOhZHJhaWcgQnJhZHk=?=) Date: Wed, 14 May 2014 15:44:39 +0100 Subject: [Rdo-list] [package announce] openstack-packstack icehouse update Message-ID: <53738157.6080709@redhat.com> Icehouse RDO packstack has been updated as follows. openstack-packstack-2014.1.1-0.12.dev1068: - Ensure all puppet modules dependencies installed on all nodes - [Nova] Setup ssh keys to support ssh-based live migration (lp#1311168) - [Nova] Support multiple sshkeys per host - [Nova] Fix vcenter parameters duplicated in answer file (rhbz#1061372, rhbz#1092008) - [Ceilometer] Install ceilometer compute agent on compute nodes (lp#1318383) - [Ceilometer] Start openstack-ceilometer-notification service (rhbz#1096268) - [Horizon] Fix help_url to point to upstream docs (rhbz#1080917) - [Horizon] Fix invalid keystone_default_role causing swift issues - [Horizon] Improved SSL configuration (rhbz#1078130) - [Neutron] Fix ML2 install (rhbz#1096510) From bderzhavets at hotmail.com Wed May 14 16:21:01 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 14 May 2014 12:21:01 -0400 Subject: [Rdo-list] keystone error when run packstack - What's going to happen with currently running Havana on F20 ?(3) In-Reply-To: <20140514135412.GB23599@turing.berg.ol> References: <5370C9A1.7080507@163.com>, <20140512135023.GA4056@redhat.com>, <5370D51E.5010206@163.com>, <20140512143734.GC4056@redhat.com>, <5370FA4C.4080109@redhat.com>, , <53710CF5.9090608@redhat.com>, , <20140513175511.GA29277@turing.berg.ol>, , <20140514135412.GB23599@turing.berg.ol> Message-ID: I sincerely apologize for not doing ` yum downgrade firefox` on working systems ( even the one at my home). I got problems right after `yum -y update` upgraded firefox. 
I might be wrong regarding firefox to bring up the problem, it might be something else on update list. I got issue fixed by back porting F21 packages to F20 on several Multi Node Havana F20 systems Controller+Compute Neutron OVS&GRE (been built in general due to Kashyap's posting @fedorapeople.org ) affected by those `yum -y update`. The packages back ported are 1. openstack-dashboard.noarch 0:2014.1-1 2. python-django-horizon.noarch 0:2014.1-1and in mean time Havana && Horizon Dashboard (2014.1) work together with no problems , at least visible for me. I remember your warning about keystone updates, which have not been obviously applied due to my not awareness of them ( actually, I just don't know how keystone should be updated to be safe on Havana ). Thank you very much for your support. B. P.S. I just don't have Havana Cluster with backported packages for experiment > To verify your assumption, upgrading Firefox to version 29 breaks > horizon you could do: > yum downgrade firefox > > If this is true, you couldn't connect to Cloud environments running > Havana at all. > - -- > Matthias Runge > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1 > > iQEcBAEBAgAGBQJTc3WEAAoJEOnz8qQwcaIWNAIH/1L3VDlLVKyJqteS365HGBmU > pBG5Yj8RhGoJWVsp2RExB6+exPRAirubFxY7kLsNP5qK8mMESFQ1zOeNxSBuL54P > IpQyPC1k4f1GwEIWC3oJb8KjEgonCnKx2Oy+Sln//Wol0M5/WQBsNUUyeCJfAJYT > jZlhDmBcNm2Q/DVmPMD4r8Nm5WsgSh3nZ78rdacY+C//3GcTcMtgB7/OLKXIWXZW > ye5thV8Tw3hiLE9dlO4CcsKP7ybREc153taLlSHqdeHRDkJTf7Cii0V5fZG8/8sZ > 9r7qvDoaJ3o2QHi6RY9yoXFN+JUIAhuL5GXwHoyHD8YmJ8KGJUsf9bR5wIjhH8M= > =U1LW > -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From dron at redhat.com Wed May 14 16:27:04 2014 From: dron at redhat.com (Dafna Ron) Date: Wed, 14 May 2014 17:27:04 +0100 Subject: [Rdo-list] Cinder volume deleting issue In-Reply-To: <1233848453.4299267.1400073801853.JavaMail.zimbra@redhat.com> References: <1233848453.4299267.1400073801853.JavaMail.zimbra@redhat.com> Message-ID: <53739958.6070103@redhat.com> you can reset the volume status to what you like by running cinder reset-state --state However, if you are sure that the volume is not attached to an instance, can you please tell me what you did so that the volume is reported as attached and yet it's not? can you please attach the nova logs? maybe we can see it there? Dafna On 05/14/2014 02:23 PM, Tzach Shefi wrote: > Hey Anand, > > How are you? > > Did you try force deleting it? > > # cinder force-delete fe0fdad1-2f8a-4cce-a173-797391dbc7ad > > Tzach > > ----- Original Message ----- > From: "anand ts" > To: rdo-list at redhat.com > Sent: Wednesday, May 14, 2014 1:32:23 PM > Subject: [Rdo-list] Cinder volume deleting issue > > Hi all, > > I have multinode setup on openstack+havana+rdo on CentOS6.5 > > Issue- Can't able to delete cinder volume. 
> > when try to delete through command line > > [root at cinder ~(keystone_admin)]# cinder list > +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ > | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | > +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ > | fe0fdad1-2f8a-4cce-a173-797391dbc7ad | in-use | vol2 | 10 | None | true | b998107b-e708-42a5-8790-4727fed879a3 | > +--------------------------------------+--------+--------------+------+-------------+----------+------------------------------ > > [root at cinder ~(keystone_admin)]# cinder delete fe0fdad1-2f8a-4cce-a173-797391dbc7ad > Delete for volume fe0fdad1-2f8a-4cce-a173-797391dbc7ad failed: Invalid volume: Volume status must be available or error, but current status is: in-use (HTTP 400) (Request-ID: req-d9be63f0-476a-4ecd-8655-20491336ee8b) > ERROR: Unable to delete any of the specified volumes. > > > when try to delete through dashboard, screen shot attached with the mail. > > This occured when a cinder volume attached instance is deleted from the database without detaching the volume. Now the volume is in use and attached to NONE. > > > Please find the cinder logs here , http://paste.openstack.org/show/80333/ > > Any work around to this problem. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -- Dafna Ron From bderzhavets at hotmail.com Wed May 14 16:37:22 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 14 May 2014 12:37:22 -0400 Subject: [Rdo-list] Launching a Nova instance results in "NovaException: Unexpected vif_type=binding_failed" In-Reply-To: <20140514135706.GC23599@turing.berg.ol> References: <20140513114247.GA15336@tesla.pnq.redhat.com>, <20140514051949.GB6868@tesla.pnq.redhat.com>, <20140514135706.GC23599@turing.berg.ol> Message-ID: Trace backs in https://www.redhat.com/archives/rdo-list/2014-May/msg00060.html and mentioned by you ( https://bugzilla.redhat.com/show_bug.cgi?id=1090605 ) seem to me to be different. Also Kashyap wrote :- Attempt to launch a Nova instance as a user tenant results in this trace back saying "Unexpected vif_type". Interesting thing is, the instance goes into ACTIVE when I launch the Nova instance with admin tenant. I also handle a thread at ask.openstack.org with person on Multi Node Neutron ML2&OVS&GRE Setup on Ubuntu 14.04, experiencing same problem as was described by Kashyap. I just cannot have him to reproduce launching as admin tenant case. B. > /me thinks, this is tracked in > https://bugzilla.redhat.com/show_bug.cgi?id=1090605 > > Matthias > - -- > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Wed May 14 16:46:29 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 14 May 2014 12:46:29 -0400 Subject: [Rdo-list] Cinder volume deleting issue In-Reply-To: References: Message-ID: You cinder volume is attached to server ( VM) with ID b998107b-e708-42a5-8790-4727fed879a3 . 
Unless you delete this server, you won't be able to delete attached volume ( it's attached on /dev/vda ) B. Date: Wed, 14 May 2014 16:02:23 +0530 From: anandts124 at gmail.com To: rdo-list at redhat.com Subject: [Rdo-list] Cinder volume deleting issue Hi all, I have multinode setup on openstack+havana+rdo on CentOS6.5 Issue- Can't able to delete cinder volume. when try to delete through command line [root at cinder ~(keystone_admin)]# cinder list+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+ | fe0fdad1-2f8a-4cce-a173-797391dbc7ad | in-use | vol2 | 10 | None | true | b998107b-e708-42a5-8790-4727fed879a3 |+--------------------------------------+--------+--------------+------+-------------+----------+------------------------------ [root at cinder ~(keystone_admin)]# cinder delete fe0fdad1-2f8a-4cce-a173-797391dbc7adDelete for volume fe0fdad1-2f8a-4cce-a173-797391dbc7ad failed: Invalid volume: Volume status must be available or error, but current status is: in-use (HTTP 400) (Request-ID: req-d9be63f0-476a-4ecd-8655-20491336ee8b) ERROR: Unable to delete any of the specified volumes. when try to delete through dashboard, screen shot attached with the mail. This occured when a cinder volume attached instance is deleted from the database without detaching the volume. Now the volume is in use and attached to NONE. Please find the cinder logs here , http://paste.openstack.org/show/80333/ Any work around to this problem. _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Wed May 14 17:45:41 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 14 May 2014 23:15:41 +0530 Subject: [Rdo-list] [RESOLVED] Re: Launching a Nova instance results in "NovaException: Unexpected vif_type=binding_failed" In-Reply-To: References: <20140513114247.GA15336@tesla.pnq.redhat.com> <20140514051949.GB6868@tesla.pnq.redhat.com> <20140514135706.GC23599@turing.berg.ol> Message-ID: <20140514174541.GC6868@tesla.pnq.redhat.com> Ater a bit (lot) of debugging (thanks to Attila Fazekas), I was able to boot a Nova instance and get it aquire DHCP lease. A few things I did that have fixed the non-booting of a guest w/ user tenant: - Update ML2 config (specifically, update [agent] and [ovs] sections): ------ $ cat plugins/ml2/ml2_conf.ini | grep -v ^$ | grep -v ^# [ml2] type_drivers = gre tenant_network_types = gre mechanism_drivers = openvswitch [ml2_type_flat] [ml2_type_vlan] [ml2_type_gre] tunnel_id_ranges = 1:1000 [ml2_type_vxlan] [securitygroup] firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver enable_security_group = True [ovs] local_ip = 192.169.142.97 [agent] tunnel_types = gre root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ------ This change brought in the 'br-tun' interface that I was previously missing. - Copy contents of `/etc/neutron/plugins/ml2/ml2_conf.ini` into `/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini`. (Thanks Attila Fazekas for this tip, he mentioned upstream Devstack does this, not sure of the rationale). 
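  (A minimal sketch of that copy step, with the paths taken from above; whether the
  ovs_neutron_plugin.ini copy or the usual /etc/neutron/plugin.ini symlink is the piece
  that actually matters is an assumption I have not verified:

    cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

  followed by a restart of neutron-server and neutron-openvswitch-agent so both re-read
  the ML2 settings.)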
- Use these config attributes in nova.conf: linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver vif_plugging_is_fatal = False vif_plugging_timeout = 0 - Add "agent_down_time = 75" to [DEFAULT] section of neutron.conf and "report_interval = 5" to [agent] section. Finally, here are my Neutron and Nova configuration details[1] for both Control and Compute nodes for reference. Hope this helps for someone who may hit similar issues. [1] http://kashyapc.fedorapeople.org/virt/openstack/rdo/IceHouse-Nova-Neutron-ML2-GRE-OVS.txt -- /kashyap From bderzhavets at hotmail.com Thu May 15 05:29:12 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 15 May 2014 01:29:12 -0400 Subject: [Rdo-list] What is a correct RDO repository to setup AIO Havana on F20 ? In-Reply-To: <53738157.6080709@redhat.com> References: <53738157.6080709@redhat.com> Message-ID: I need this setup test http://openstack.redhat.com/Upgrading_RDO_To_Icehouse Thanks. B. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Thu May 15 05:51:54 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 15 May 2014 11:21:54 +0530 Subject: [Rdo-list] What is a correct RDO repository to setup AIO Havana on F20 ? In-Reply-To: References: <53738157.6080709@redhat.com> Message-ID: <20140515055154.GE6868@tesla.pnq.redhat.com> On Thu, May 15, 2014 at 01:29:12AM -0400, Boris Derzhavets wrote: [Please start a different thread if it's a separate topic.] > I need this setup test > http://openstack.redhat.com/Upgrading_RDO_To_Icehouse Havana packages should be consumed from official Fedora repositories. For IceHouse, here's the repository for Fedora-20[1] Alternatively, to get IceHouse packages, I fetched packages from Fedora Rawhide (but be prepared to handle any conflicts that may arise): $ yum install fedora-release-rawhide -y $ yum update openstack-* --enablerepo=rawhide NOTE: IceHouse packages will be in Fedora 21's (once it is released) official repositories. [1] http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/fedora-20/ -- /kashyap From bderzhavets at hotmail.com Thu May 15 06:48:57 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 15 May 2014 02:48:57 -0400 Subject: [Rdo-list] What is a correct RDO repository to setup AIO Havana on F20 ? (2) In-Reply-To: <20140515055154.GE6868@tesla.pnq.redhat.com> References: <53738157.6080709@redhat.com>, , <20140515055154.GE6868@tesla.pnq.redhat.com> Message-ID: It sounds good, but how to manage on currently running Havana Multi Node Neutron OVS&GRE System on F20? I have already submitted to this list problems with http://openstack.redhat.com/Upgrading_RDO_To_Icehouse I got negative results attempting this upgrade on CentOS 6.5 been posted at https://ask.openstack.org/en/question/29460/attempt-to-upgrade-aio-havana-to-icehouse-on-centos-65-vm-on-libvirts-subnet/ That's why I am concerned about ability to test http://openstack.redhat.com/Upgrading_RDO_To_Icehouse before applying Upgrade Instructions to Real System. Thanks. B. > Havana packages should be consumed from official Fedora repositories. 
> > For IceHouse, here's the repository for Fedora-20[1] > > Alternatively, to get IceHouse packages, I fetched packages from Fedora > Rawhide (but be prepared to handle any conflicts that may arise): > > $ yum install fedora-release-rawhide -y > $ yum update openstack-* --enablerepo=rawhide > > NOTE: IceHouse packages will be in Fedora 21's (once it is released) > official repositories. > > > [1] > http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/fedora-20/ > > -- > /kashyap -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Thu May 15 09:54:19 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 15 May 2014 15:24:19 +0530 Subject: [Rdo-list] What is a correct RDO repository to setup AIO Havana on F20 ? (2) In-Reply-To: References: <53738157.6080709@redhat.com> <20140515055154.GE6868@tesla.pnq.redhat.com> Message-ID: <20140515095419.GF6868@tesla.pnq.redhat.com> On Thu, May 15, 2014 at 02:48:57AM -0400, Boris Derzhavets wrote: > It sounds good, but how to manage on currently running Havana Multi Node Sorry, I don't follow the above "how to manage", please rephrase. If you want access to packages I outlined the repos below. > Neutron OVS&GRE System on F20? I have already submitted to this list > problems with http://openstack.redhat.com/Upgrading_RDO_To_Icehouse > I got negative results attempting this upgrade on CentOS 6.5 been posted > at > https://ask.openstack.org/en/question/29460/attempt-to-upgrade-aio-havana-to-icehouse-on-centos-65-vm-on-libvirts-subnet/ > > That's why I am concerned about ability to test > http://openstack.redhat.com/Upgrading_RDO_To_Icehouse > before applying Upgrade Instructions to Real System. If you have snapshotting abilities, I'd suggest you take a snapshot of both your nodes, perform the upgrade. If something goes catastrophically wrong, roll back. If you're using QCOW2 images (and a reasonably new QEMU/libvirt on host), simplest is to use the QEMU internal named snapshots (these can be live/offline, but offline is more robust): $ virsh snapshot-create-as el6vm snap1 "Havana setup" $ virsh snapshot-revert el6vm snap1 > > > Havana packages should be consumed from official Fedora repositories. > > > > For IceHouse, here's the repository for Fedora-20[1] > > > > Alternatively, to get IceHouse packages, I fetched packages from Fedora > > Rawhide (but be prepared to handle any conflicts that may arise): > > > > $ yum install fedora-release-rawhide -y > > $ yum update openstack-* --enablerepo=rawhide > > > > NOTE: IceHouse packages will be in Fedora 21's (once it is released) > > official repositories. > > > > > > [1] > > http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/fedora-20/ > > > > -- > > /kashyap > -- /kashyap From kchamart at redhat.com Thu May 15 10:01:01 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 15 May 2014 15:31:01 +0530 Subject: [Rdo-list] What is a correct RDO repository to setup AIO Havana on F20 ? 
(2) In-Reply-To: <20140515095419.GF6868@tesla.pnq.redhat.com> References: <53738157.6080709@redhat.com> <20140515055154.GE6868@tesla.pnq.redhat.com> <20140515095419.GF6868@tesla.pnq.redhat.com> Message-ID: <20140515100101.GG6868@tesla.pnq.redhat.com> On Thu, May 15, 2014 at 03:24:19PM +0530, Kashyap Chamarthy wrote: > On Thu, May 15, 2014 at 02:48:57AM -0400, Boris Derzhavets wrote: > > It sounds good, but how to manage on currently running Havana Multi > > Node If you meant will it work, then only one way to figure is to try and find out. I posted some configs here[1] from a fresh IceHouse setup, you might find something useful there. [1] http://kashyapc.fedorapeople.org/virt/openstack/rdo/IceHouse-Nova-Neutron-ML2-GRE-OVS.txt -- /kashyap From PATRICK.D.DEVINE at leidos.com Thu May 15 13:54:56 2014 From: PATRICK.D.DEVINE at leidos.com (Devine, Patrick D.) Date: Thu, 15 May 2014 13:54:56 +0000 Subject: [Rdo-list] LDAP configuration Message-ID: <5374C72F.8090204@leidos.com> All, I have deployed the Havana version of Openstack via Foreman. However now I want to switch Keystone to utilize my LDAP server for authentication vs MySQL. I have followed the instructions for configuring the keystone.conf to point at my server but I haven't seen any documentation on how the LDAP should be populated. For example do I have to re-create all the user accounts for each openstack module? I get that I need to have a people, role, and project set up but there is nothing about what users are needed, how they relate to the project and roles. Has anyone got their Openstack working with LDAP and if so what does you ldap look like? Thanks -- Patrick Devine | Leidos Software Integration Engineer | Command and Intelligence Support Operation mobile: 443-562-0668 | office: 443-574-4266 | email: Patrick.D.Devine at Leidos.com Please consider the environment before printing this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dron at redhat.com Thu May 15 14:06:57 2014 From: dron at redhat.com (Dafna Ron) Date: Thu, 15 May 2014 15:06:57 +0100 Subject: [Rdo-list] LDAP configuration In-Reply-To: <5374C72F.8090204@leidos.com> References: <5374C72F.8090204@leidos.com> Message-ID: <5374CA01.3050907@redhat.com> Adding Giulio to that. On 05/15/2014 02:54 PM, Devine, Patrick D. wrote: > All, > > I have deployed the Havana version of Openstack via Foreman. However > now I want to switch Keystone to utilize my LDAP server for > authentication vs MySQL. I have followed the instructions for > configuring the keystone.conf to point at my server but I haven't seen > any documentation on how the LDAP should be populated. For example do > I have to re-create all the user accounts for each openstack module? I > get that I need to have a people, role, and project set up but there > is nothing about what users are needed, how they relate to the project > and roles. > > Has anyone got their Openstack working with LDAP and if so what does > you ldap look like? > > Thanks > -- > Patrick Devine | Leidos > > Software Integration Engineer | Command and Intelligence Support Operation > > mobile: 443-562-0668 | office: 443-574-4266 | email:Patrick.D.Devine at Leidos.com > > Please consider the environment before printing this email. 
> > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -- Dafna Ron From weiler at soe.ucsc.edu Thu May 15 16:02:56 2014 From: weiler at soe.ucsc.edu (Erich Weiler) Date: Thu, 15 May 2014 09:02:56 -0700 Subject: [Rdo-list] LDAP configuration In-Reply-To: <5374C72F.8090204@leidos.com> References: <5374C72F.8090204@leidos.com> Message-ID: <9E554D68-AD02-4082-A3D2-E873B8F1379C@soe.ucsc.edu> I second this request - I'm also extremely interested in plugging keystone into an existing LDAP DIT. I was hoping that I could use pre-existing accounts in LDAP and maybe just add some attributes or something along those lines for roles, tenants, etc... Is that how it works? > On May 15, 2014, at 6:54 AM, "Devine, Patrick D." wrote: > > All, > > I have deployed the Havana version of Openstack via Foreman. However now I want to switch Keystone to utilize my LDAP server for authentication vs MySQL. I have followed the instructions for configuring the keystone.conf to point at my server but I haven't seen any documentation on how the LDAP should be populated. For example do I have to re-create all the user accounts for each openstack module? I get that I need to have a people, role, and project set up but there is nothing about what users are needed, how they relate to the project and roles. > > Has anyone got their Openstack working with LDAP and if so what does you ldap look like? > > Thanks > -- > Patrick Devine | Leidos > > Software Integration Engineer | Command and Intelligence Support Operation > > mobile: 443-562-0668 | office: 443-574-4266 | email: Patrick.D.Devine at Leidos.com > > Please consider the environment before printing this email. > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From adam.fyfe1 at gmail.com Thu May 15 16:03:13 2014 From: adam.fyfe1 at gmail.com (Adam Fyfe) Date: Thu, 15 May 2014 17:03:13 +0100 Subject: [Rdo-list] RDO Nova error - cannot launch an instance Message-ID: Hi List I cannot launch an instance. get this in compute.log: 2014-05-15 17:02:09.225 4209 TRACE nova.compute.manager [instance: 31f99d54-1da0-4847-b8f0-d38fe1617ef9] libvirtError: Hook script execution failed: Hook script /etc/libvirt/hooks/qemu qemu failed with error code 256 Any help would be super! thanks adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From dfv at eurotux.com Thu May 15 17:05:36 2014 From: dfv at eurotux.com (Diogo Vieira) Date: Thu, 15 May 2014 18:05:36 +0100 Subject: [Rdo-list] Permission denied errors after installing a Storage Node in a Swift Cluster In-Reply-To: <53692B7A.1010903@redhat.com> References: <5368407D.5020907@redhat.com> <5368B43D.2080903@redhat.com> <53692B7A.1010903@redhat.com> Message-ID: On May 6, 2014, at 7:35 PM, P?draig Brady wrote: > On 05/06/2014 06:47 PM, Diogo Vieira wrote: >> On May 6, 2014, at 11:06 AM, P?draig Brady wrote: >> >>> On 05/06/2014 09:47 AM, Diogo Vieira wrote: >>>> On May 6, 2014, at 2:53 AM, P?draig Brady > wrote: >>>> >>>>> What version of swift are you using? >>>>> >>>>> swift-1.13.1.rc2 could have permissions errors, >>>>> while we included a patch in the RDO icehouse swift-1.13.1 release to fix >>>>> http://pad.lv/1302700 which on first glance could be related? >>>>> >>>>> thanks, >>>>> P?draig. 
>>>> >>>> I'm using 1.13.1-1.fc21 (I'm using Fedora) as you can see: >>>> >>>> # yum info openstack-swift >>>> Loaded plugins: priorities >>>> 196 packages excluded due to repository priority protections >>>> Installed Packages >>>> Name : openstack-swift >>>> Arch : noarch >>>> Version : 1.13.1 >>>> Release : 1.fc21 >>>> >>>> >>>> So the fix should already be present right? >>> >>> Yes, must be something else so. >>> >>> thanks, >>> P?draig. >> >> That's unfortunate then. One thing's for sure: these errors aren't supposed to happen right? >> >> If someone else has any idea of what could be the problem I would greatly appreciate since this is a recurring problem (even between different Openstack and Packstack versions, since it was tested in Havana and Icehouse). >> >> Thank you very much, >> Diogo Vieira >> > > Ah you see this in Havana repos also, that's NB info. > > Pete any ideas? > > thanks, > P?draig. Yes, the issue happened with an older Havana installation and with the new Icehouse version. Sorry to bring this up again, but I'm really lost and have no idea what could the problem be. Nobody has an idea for trying to resolve the issue or issues (since I don't know if they're related)? Should I file a bug report? Thank you, Diogo Vieira Programador Eurotux Inform?tica, S.A. | www.eurotux.com (t) +351 253 680 300 From bderzhavets at hotmail.com Thu May 15 17:37:16 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 15 May 2014 13:37:16 -0400 Subject: [Rdo-list] What is a correct RDO repository to setup AIO Havana on F20 ? (3) In-Reply-To: <20140515095419.GF6868@tesla.pnq.redhat.com> References: <53738157.6080709@redhat.com>, , <20140515055154.GE6868@tesla.pnq.redhat.com>, , <20140515095419.GF6868@tesla.pnq.redhat.com> Message-ID: > Sorry, I don't follow the above "how to manage", please rephrase. If you > want access to packages I outlined the repos below. Breaking Havana Support on F20, during F20 life cycle, presumes 100% percent safe upgrade of any kind Multi Node Havana Neutron OVS&GRE(VLAN) system to similar system driven already by IceHouse (2014.1). However, my attempt to apply http://openstack.redhat.com/Upgrading_RDO_To_Icehouse to AIO Havana Host on CentOS 6.5 gave extremely unexpected results:- 1. Nova was upgraded OK. 2. But, no one of Neutron packages has been upgraded Finally, I got broken system details here :- https://ask.openstack.org/en/question/29460/attempt-to-upgrade-aio-havana-to-icehouse-on-centos-65-vm-on-libvirts-subnet/ That's the reason , why I want acceptable Havana repos for F20 (or even for CentOS 6.5) to test again simple AIO Havana upgrade to IceHouse on F20 (or even on CentOS 6.5 ) , before crashing the real system. Making snapshots and rolling system back is a good suggestion , but not resolving my problem. Thank you. Boris. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Fri May 16 05:28:58 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 16 May 2014 10:58:58 +0530 Subject: [Rdo-list] RDO Nova error - cannot launch an instance In-Reply-To: References: Message-ID: <20140516052858.GI6868@tesla.pnq.redhat.com> On Thu, May 15, 2014 at 05:03:13PM +0100, Adam Fyfe wrote: > Hi List > > I cannot launch an instance. 
> > get this in compute.log: > > 2014-05-15 17:02:09.225 4209 TRACE nova.compute.manager [instance: > 31f99d54-1da0-4847-b8f0-d38fe1617ef9] libvirtError: Hook script execution > failed: Hook script /etc/libvirt/hooks/qemu qemu failed with error code 256 You seem to be using a libvirt hook script[1] which is not part of default OpenStack installations. Did you ensure that your that script is invoked correctly? (As that is executed everytime QEMU starts/stops a Nova guest. To diagnose this further, a few questions: - What host OS? - What OpenStack version (openstack-nova specifically) - Versions of libvirt, qemu on your host - Your (sanitized) nova.conf from Compute node - What hypervisor combination? I assume libvirt+QEMU+KVM. - Any other configuration alterations you did that'll occur during boot time of a guest Additionally, you can enable libvirt debug logs by setting these in your /etc/libvirt/libvirtd.conf: log_level = 1 log_outputs = 1:file:/var/tmp/libvirtd.log Restart libvirtd, openstack-nova-compute, and start your guest (Don't forget to turn off the verbose logging once your debugging is done.) [1] http://libvirt.org/hooks.html -- /kashyap From kchamart at redhat.com Fri May 16 06:13:39 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 16 May 2014 11:43:39 +0530 Subject: [Rdo-list] LDAP configuration In-Reply-To: <9E554D68-AD02-4082-A3D2-E873B8F1379C@soe.ucsc.edu> References: <5374C72F.8090204@leidos.com> <9E554D68-AD02-4082-A3D2-E873B8F1379C@soe.ucsc.edu> Message-ID: <20140516061010.GJ6868@tesla.pnq.redhat.com> [Adding Adam Young and Robert Crittenden, as they may have some suggestions.] On Thu, May 15, 2014 at 09:02:56AM -0700, Erich Weiler wrote: > I second this request - I'm also extremely interested in plugging > keystone into an existing LDAP DIT. I was hoping that I could use > pre-existing accounts in LDAP and maybe just add some attributes or > something along those lines for roles, tenants, etc... > > Is that how it works? I haven't tried LDAP w/ Keystone yet, but here are some references that might come in handy: - Configuring Keystone for LDAP backend[1] - LDAP configuration notes for Keystone from Grizzly release[2][3] - Keystone integration w/ FreeIPA project where Tenants, and Roles are managed by Keystone [1] http://docs.openstack.org/admin-guide-cloud/content/configuring-keystone-for-ldap-backend.html [2] http://docs.openstack.org/grizzly/openstack-compute/admin/content/configuring-keystone-for-ldap-backend.html [3] http://docs.openstack.org/grizzly/openstack-compute/admin/content/reference-for-ldap-config-options.html [4] http://openstack.redhat.com/Keystone_integration_with_IDM > > > On May 15, 2014, at 6:54 AM, "Devine, Patrick D." > > wrote: > > > > All, > > > > I have deployed the Havana version of Openstack via Foreman. However > > now I want to switch Keystone to utilize my LDAP server for > > authentication vs MySQL. I have followed the instructions for > > configuring the keystone.conf to point at my server but I haven't > > seen any documentation on how the LDAP should be populated. For > > example do I have to re-create all the user accounts for each > > openstack module? I get that I need to have a people, role, and > > project set up but there is nothing about what users are needed, how > > they relate to the project and roles. > > > > Has anyone got their Openstack working with LDAP and if so what does > > you ldap look like? 
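As a very rough illustration only (untested here; the DNs, attribute names and the
example.com values below are placeholders to be adapted from the docs above), the
Havana-era keystone.conf wiring looks roughly like:

    [identity]
    driver = keystone.identity.backends.ldap.Identity

    [ldap]
    url = ldap://ldap.example.com
    user = cn=Manager,dc=example,dc=com
    password = secret
    suffix = dc=example,dc=com
    user_tree_dn = ou=People,dc=example,dc=com
    user_objectclass = inetOrgPerson
    user_id_attribute = uid
    user_name_attribute = cn

i.e. existing people entries can usually be reused as-is; tenants and roles can either
live in their own subtrees (tenant_tree_dn / role_tree_dn) or be kept in SQL -- see the
references above for the exact options per release.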
> > -- /kashyap From dfv at eurotux.com Fri May 16 11:30:02 2014 From: dfv at eurotux.com (Diogo Vieira) Date: Fri, 16 May 2014 12:30:02 +0100 Subject: [Rdo-list] Permission denied errors after installing a Storage Node in a Swift Cluster In-Reply-To: References: <5368407D.5020907@redhat.com> <5368B43D.2080903@redhat.com> <53692B7A.1010903@redhat.com> Message-ID: <575D8AA9-B5E7-4F72-8FE6-BFF91081ADF2@eurotux.com> On May 15, 2014, at 6:05 PM, Diogo Vieira wrote: > On May 6, 2014, at 7:35 PM, P?draig Brady wrote: > >> On 05/06/2014 06:47 PM, Diogo Vieira wrote: >>> On May 6, 2014, at 11:06 AM, P?draig Brady wrote: >>> >>>> On 05/06/2014 09:47 AM, Diogo Vieira wrote: >>>>> On May 6, 2014, at 2:53 AM, P?draig Brady > wrote: >>>>> >>>>>> What version of swift are you using? >>>>>> >>>>>> swift-1.13.1.rc2 could have permissions errors, >>>>>> while we included a patch in the RDO icehouse swift-1.13.1 release to fix >>>>>> http://pad.lv/1302700 which on first glance could be related? >>>>>> >>>>>> thanks, >>>>>> P?draig. >>>>> >>>>> I'm using 1.13.1-1.fc21 (I'm using Fedora) as you can see: >>>>> >>>>> # yum info openstack-swift >>>>> Loaded plugins: priorities >>>>> 196 packages excluded due to repository priority protections >>>>> Installed Packages >>>>> Name : openstack-swift >>>>> Arch : noarch >>>>> Version : 1.13.1 >>>>> Release : 1.fc21 >>>>> >>>>> >>>>> So the fix should already be present right? >>>> >>>> Yes, must be something else so. >>>> >>>> thanks, >>>> P?draig. >>> >>> That's unfortunate then. One thing's for sure: these errors aren't supposed to happen right? >>> >>> If someone else has any idea of what could be the problem I would greatly appreciate since this is a recurring problem (even between different Openstack and Packstack versions, since it was tested in Havana and Icehouse). >>> >>> Thank you very much, >>> Diogo Vieira >>> >> >> Ah you see this in Havana repos also, that's NB info. >> >> Pete any ideas? >> >> thanks, >> P?draig. > > Yes, the issue happened with an older Havana installation and with the new Icehouse version. > > Sorry to bring this up again, but I'm really lost and have no idea what could the problem be. > > Nobody has an idea for trying to resolve the issue or issues (since I don't know if they're related)? Should I file a bug report? > > Thank you, > > Diogo Vieira > Programador > Eurotux Inform?tica, S.A. | www.eurotux.com > (t) +351 253 680 300 Hello again, I'm sorry for answering my own email but I found something I believe to be the problem. After some time trying to see what the problem was I ended up searching for what could cause the permission problems. 
This was one of the errors I found in the syslog: May 16 10:07:00 host-10-10-6-30 object-auditor: ERROR Trying to audit /srv/node/device3/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a: #012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/swift/obj/auditor.py", line 173, in failsafe_object_audit#012 self.object_audit(location)#012 File "/usr/lib/python2.7/site-packages/swift/obj/auditor.py", line 191, in object_audit#012 with df.open():#012 File "/usr/lib/python2.7/site-packages/swift/obj/diskfile.py", line 1029, in open#012 data_file, meta_file)#012 File "/usr/lib/python2.7/site-packages/swift/obj/diskfile.py", line 1247, in _construct_from_data_file#012 fp = open(data_file, 'rb')#012IOError: [Errno 13] Permission denied: '/srv/node/device3/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a/1399288004.47239.data' So the problems was in '/srv/node/device3/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a/1399288004.47239.data'. I tried the following set of commands after I made sure that the swift-object-auditor was run by the swift user (using 'ps faxu'): # su - swift -bash-4.2$ ls -lha /srv/node/device3/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a ls: cannot access /srv/node/device3/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a/.: Permission denied ls: cannot access /srv/node/device3/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a/..: Permission denied ls: cannot access /srv/node/device3/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a/1399288004.47239.data: Permission denied total 0 d????????? ? ? ? ? ? . d????????? ? ? ? ? ? .. ?????????? ? ? ? ? ? 1399288004.47239.data -bash-4.2$ ls -lha /srv/node/device3/objects/134106/f2a/ total 0 drwxr--r-- 3 swift swift 45 May 5 11:06 . drwxr-xr-x 3 swift swift 45 May 16 10:57 .. drw-r--r-- 2 swift swift 34 May 5 11:06 82f6a3461bb69f80918a1a508a8bdf2a So you see '82f6a3461bb69f80918a1a508a8bdf2a' doesn't have execute permissions and I believe that is the problem. My question now is, could this be caused by some misconfiguration on my part or is it a bug in swift, since it was not me that created the folder? Thank you once again, Diogo Vieira Programador Eurotux Inform?tica, S.A. | www.eurotux.com (t) +351 253 680 300 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pbrady at redhat.com Fri May 16 13:21:15 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Fri, 16 May 2014 14:21:15 +0100 Subject: [Rdo-list] Permission denied errors after installing a Storage Node in a Swift Cluster In-Reply-To: <575D8AA9-B5E7-4F72-8FE6-BFF91081ADF2@eurotux.com> References: <5368407D.5020907@redhat.com> <5368B43D.2080903@redhat.com> <53692B7A.1010903@redhat.com> <575D8AA9-B5E7-4F72-8FE6-BFF91081ADF2@eurotux.com> Message-ID: <537610CB.8090001@redhat.com> On 05/16/2014 12:30 PM, Diogo Vieira wrote: > On May 15, 2014, at 6:05 PM, Diogo Vieira > wrote: > >> On May 6, 2014, at 7:35 PM, P?draig Brady > wrote: >> >>> On 05/06/2014 06:47 PM, Diogo Vieira wrote: >>>> On May 6, 2014, at 11:06 AM, P?draig Brady > wrote: >>>> >>>>> On 05/06/2014 09:47 AM, Diogo Vieira wrote: >>>>>> On May 6, 2014, at 2:53 AM, P?draig Brady > wrote: >>>>>> >>>>>>> What version of swift are you using? >>>>>>> >>>>>>> swift-1.13.1.rc2 could have permissions errors, >>>>>>> while we included a patch in the RDO icehouse swift-1.13.1 release to fix >>>>>>> http://pad.lv/1302700 which on first glance could be related? >>>>>>> >>>>>>> thanks, >>>>>>> P?draig. 
>>>>>> >>>>>> I'm using 1.13.1-1.fc21 (I'm using Fedora) as you can see: >>>>>> >>>>>> # yum info openstack-swift >>>>>> Loaded plugins: priorities >>>>>> 196 packages excluded due to repository priority protections >>>>>> Installed Packages >>>>>> Name : openstack-swift >>>>>> Arch : noarch >>>>>> Version : 1.13.1 >>>>>> Release : 1.fc21 >>>>>> >>>>>> >>>>>> So the fix should already be present right? >>>>> >>>>> Yes, must be something else so. >>>>> >>>>> thanks, >>>>> P?draig. >>>> >>>> That's unfortunate then. One thing's for sure: these errors aren't supposed to happen right? >>>> >>>> If someone else has any idea of what could be the problem I would greatly appreciate since this is a recurring problem (even between different Openstack and Packstack versions, since it was tested in Havana and Icehouse). >>>> >>>> Thank you very much, >>>> Diogo Vieira >>>> >>> >>> Ah you see this in Havana repos also, that's NB info. >>> >>> Pete any ideas? >>> >>> thanks, >>> P?draig. >> >> Yes, the issue happened with an older Havana installation and with the new Icehouse version. >> >> Sorry to bring this up again, but I'm really lost and have no idea what could the problem be. >> >> Nobody has an idea for trying to resolve the issue or issues (since I don't know if they're related)? Should I file a bug report? >> >> Thank you, >> >> Diogo Vieira > >> Programador >> Eurotux Inform?tica, S.A. | www.eurotux.com >> (t) +351 253 680 300 > > Hello again, > > I'm sorry for answering my own email but I found something I believe to be the problem. After some time trying to see what the problem was I ended up searching for what could cause the permission problems. This was one of the errors I found in the syslog: > > May 16 10:07:00 host-10-10-6-30 object-auditor: ERROR Trying to audit /srv/node/device3/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a: #012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/swift/obj/auditor.py", line 173, in failsafe_object_audit#012 self.object_audit(location)#012 File "/usr/lib/python2.7/site-packages/swift/obj/auditor.py", line 191, in object_audit#012 with df.open():#012 File "/usr/lib/python2.7/site-packages/swift/obj/diskfile.py", line 1029, in open#012 data_file, meta_file)#012 File "/usr/lib/python2.7/site-packages/swift/obj/diskfile.py", line 1247, in _construct_from_data_file#012 fp = open(data_file, 'rb')#012IOError: [Errno 13] Permission denied: '/srv/node/device3/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a/1399288004.47239.data' > > > So the problems was in '/srv/node/device3/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a/1399288004.47239.data'. I tried the following set of commands after I made sure that the swift-object-auditor was run by the swift user (using 'ps faxu'): > > # su - swift > > -bash-4.2$ ls -lha /srv/node/device3/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a > ls: cannot access /srv/node/device3/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a/.: Permission denied > ls: cannot access /srv/node/device3/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a/..: Permission denied > ls: cannot access /srv/node/device3/objects/134106/f2a/82f6a3461bb69f80918a1a508a8bdf2a/1399288004.47239.data: Permission denied > total 0 > d????????? ? ? ? ? ? . > d????????? ? ? ? ? ? .. > ?????????? ? ? ? ? ? 1399288004.47239.data > > -bash-4.2$ ls -lha /srv/node/device3/objects/134106/f2a/ > total 0 > drwxr--r-- 3 swift swift 45 May 5 11:06 . > drwxr-xr-x 3 swift swift 45 May 16 10:57 .. 
> drw-r--r-- 2 swift swift 34 May 5 11:06 82f6a3461bb69f80918a1a508a8bdf2a > > So you see '82f6a3461bb69f80918a1a508a8bdf2a' doesn't have execute permissions and I believe that is the problem. > > My question now is, could this be caused by some misconfiguration on my part or is it a bug in swift, since it was not me that created the folder? I'm not sure. You might have changed it inadvertently afterwards with chmod, or beforehand with umask. The problematic drw-r--r-- permission above is consistent with a umask of 0133 (unlikely) or a chmod of 0644. Now there was a recent patch to chmod(0644): http://pad.lv/1302700 though I don't think that's the cause, as it should only apply to the gz _files_. Not much help, sorry. Pádraig. From bderzhavets at hotmail.com Sat May 17 02:56:48 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 16 May 2014 22:56:48 -0400 Subject: [Rdo-list] Failure to run yum update on F20 due to openstack-neutron-2013.2.3-2 dependency problem Message-ID: # yum update ---> Package python-neutron.noarch 0:2013.2.3-2.fc20 will be an update --> Processing Dependency: python-neutronclient >= 2.3.4 for package: python-neutron-2013.2.3-2.fc20.noarch --> Finished Dependency Resolution Error: Package: python-neutron-2013.2.3-2.fc20.noarch (updates) Requires: python-neutronclient >= 2.3.4 Installed: python-neutronclient-2.3.1-3.fc20.noarch (@updates) python-neutronclient = 2.3.1-3.fc20 Available: python-neutronclient-2.3.1-2.fc20.noarch (fedora) python-neutronclient = 2.3.1-2.fc20 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest # yum update --skip-broken Packages skipped because of dependency problems: openstack-neutron-2013.2.3-2.fc20.noarch from updates openstack-neutron-openvswitch-2013.2.3-2.fc20.noarch from updates python-eventlet-0.14.0-1.fc20.noarch from updates python-greenlet-0.4.2-1.fc20.x86_64 from updates python-neutron-2013.2.3-2.fc20.noarch from updates Thanks. B. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Sat May 17 05:21:19 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sat, 17 May 2014 01:21:19 -0400 Subject: [Rdo-list] How to follow RDO instructions Adding a compute node ? Message-ID: Question here :- https://ask.openstack.org/en/question/29886/how-to-follow-rdo-instruction-adding-a-compute-node/ Thanks. B. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ak at cloudssky.com Sat May 17 05:41:19 2014 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Sat, 17 May 2014 07:41:19 +0200 Subject: Re: [Rdo-list] How to follow RDO instructions Adding a compute node ? In-Reply-To: References: Message-ID: Hi, This was answered here: https://ask.openstack.org/en/question/12079/extending-rdo-installation-with-additional-compute-nodes/ HTH, -Arash On Sat, May 17, 2014 at 7:21 AM, Boris Derzhavets wrote: > Question here :- > > https://ask.openstack.org/en/question/29886/how-to-follow-rdo-instruction-adding-a-compute-node/ > > Thanks. > B. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From bderzhavets at hotmail.com Sat May 17 06:29:02 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sat, 17 May 2014 02:29:02 -0400 Subject: [Rdo-list] How to follow RDO instructions Adding a compute node ?(2) In-Reply-To: References: , Message-ID: I have just one answer-file which was generated by `packstack --allinone`, you had intend to use answer-file already tuned for Neutron OVS&VLAN setup. Thanks. B. Date: Sat, 17 May 2014 07:41:19 +0200 Subject: Re: [Rdo-list] How to follow RDO instructions Adding a compute node ? From: ak at cloudssky.com To: bderzhavets at hotmail.com CC: rdo-list at redhat.com Hi, This was answered here:https://ask.openstack.org/en/question/12079/extending-rdo-installation-with-additional-compute-nodes/ HTH,-Arash On Sat, May 17, 2014 at 7:21 AM, Boris Derzhavets wrote: Question here :- https://ask.openstack.org/en/question/29886/how-to-follow-rdo-instruction-adding-a-compute-node/ Thanks. B. _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From pbrady at redhat.com Sat May 17 12:21:36 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Sat, 17 May 2014 13:21:36 +0100 Subject: [Rdo-list] Failure to run yum update on F20 due to openstack-neutron-2013.2.3-2 dependency problem In-Reply-To: References: Message-ID: <53775450.3030703@redhat.com> On 05/17/2014 03:56 AM, Boris Derzhavets wrote: > # yum update > > ---> Package python-neutron.noarch 0:2013.2.3-2.fc20 will be an update > --> Processing Dependency: python-neutronclient >= 2.3.4 for package: python-neutron-2013.2.3-2.fc20.noarch > --> Finished Dependency Resolution > Error: Package: python-neutron-2013.2.3-2.fc20.noarch (updates) > Requires: python-neutronclient >= 2.3.4 > Installed: python-neutronclient-2.3.1-3.fc20.noarch (@updates) > python-neutronclient = 2.3.1-3.fc20 > Available: python-neutronclient-2.3.1-2.fc20.noarch (fedora) > python-neutronclient = 2.3.1-2.fc20 > You could try using --skip-broken to work around the problem > You could try running: rpm -Va --nofiles --nodigest > > # yum update --skip-broken > > Packages skipped because of dependency problems: > openstack-neutron-2013.2.3-2.fc20.noarch from updates > openstack-neutron-openvswitch-2013.2.3-2.fc20.noarch from updates > python-eventlet-0.14.0-1.fc20.noarch from updates > python-greenlet-0.4.2-1.fc20.x86_64 from updates > python-neutron-2013.2.3-2.fc20.noarch from updates > > Thanks. > B. The new neutronclient is needed for http://pad.lv/1280941 apparently. For now please install manually like: rpm -Uvh http://kojipkgs.fedoraproject.org/packages/python-neutronclient/2.3.4/1.fc20/noarch/python-neutronclient-2.3.4-1.fc20.noarch.rpm I've submitted the update now: https://admin.fedoraproject.org/updates/python-neutronclient-2.3.4-1.fc20 Please provide feedback on that to expediate the propagation. sorry for trouble. P?draig. 
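For anyone hitting the same dependency error: whichever route is taken (the manual rpm -Uvh above, or waiting for the Bodhi update to reach the repos), the result can be checked with a couple of standard commands -- an untested sketch, nothing here beyond plain yum/rpm usage:

  rpm -q python-neutronclient                      # should now report >= 2.3.4
  yum clean metadata && yum update                 # the held-back neutron packages should now apply
  # once the Bodhi update reaches updates-testing, this also works:
  yum --enablerepo=updates-testing update python-neutronclient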
From bderzhavets at hotmail.com Sat May 17 15:05:39 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sat, 17 May 2014 11:05:39 -0400 Subject: [Rdo-list] Attempt to setup Two Node IceHouse Controller+Compute Neutron OVS&VLAN on CentOS 6.5 Message-ID: Two KVMs created, each one having 2 virtual NICs (eth0, eth1) Answer file here http://textuploader.com/9a32 Openstack-status here http://textuploader.com/9a3e Looks not bad, except a dead neutron-server. Neutron.conf here http://textuploader.com/9aiy Stack trace in /var/log/neutron.log
2014-05-17 18:32:12.138 9365 INFO neutron.openstack.common.rpc.common [-] Connected to AMQP server on 192.168.122.127:5672 2014-05-17 18:32:12.138 9365 ERROR neutron.openstack.common.rpc.common [-] Returning exception 'NoneType' object is not callable to caller 2014-05-17 18:32:12.138 9365 ERROR neutron.openstack.common.rpc.common [-] ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/amqp.py", line 462, in _process_data\n **args)\n', ' File "/usr/lib/python2.6/site-packages/neutron/common/rpc.py", line 45, in dispatch\n neutron_ctxt, version, method, namespace, **kwargs)\n', ' File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/dispatcher.py", line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt, **kwargs)\n', ' File "/usr/lib/python2.6/site-packages/neutron/db/dhcp_rpc_base.py", line 92, in get_active_networks_info\n networks = self._get_active_networks(context, **kwargs)\n', ' File "/usr/lib/python2.6/site-packages/neutron/db/dhcp_rpc_base.py", line 38, in _get_active_networks\n plugin = manager.NeutronManager.get_plugin()\n', ' File "/usr/lib/python2.6/site-packages/neutron/manager.py", line 211, in get_plugin\n return cls.get_instance().plugin\n', ' File "/usr/lib/python2.6/site-packages/neutron/manager.py", line 206, in get_instance\n cls._create_instance()\n', ' File "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", line 249, in inner\n return f(*args, **kwargs)\n', ' File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__\n self.gen.throw(type, value, traceback)\n', ' File "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", line 212, in lock\n yield sem\n', ' File "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", line 249, in inner\n return f(*args, **kwargs)\n', ' File "/usr/lib/python2.6/site-packages/neutron/manager.py", line 200, in _create_instance\n cls._instance = cls()\n', ' File "/usr/lib/python2.6/site-packages/neutron/manager.py", line 110, in __init__\n LOG.info(_("Loading core plugin: %s"), plugin_provider)\n', "TypeError: 'NoneType' object is not callable\n"] -------------- next part -------------- An HTML attachment was scrubbed... URL: From rjones at redhat.com Sat May 17 16:10:47 2014 From: rjones at redhat.com (Richard W.M. Jones) Date: Sat, 17 May 2014 17:10:47 +0100 Subject: [Rdo-list] Automatic resizing of root partitions in RDO Icehouse In-Reply-To: References: <20140506083901.GA12668@tesla.redhat.com> <20140508034830.GB26928@tesla.redhat.com> Message-ID: <20140517161047.GA23233@redhat.com> On Fri, May 09, 2014 at 01:04:56PM +0000, St. George, Allan L. wrote: > I'm sure someone could make this better, but this is what I've been > using and it works well: [...] Here's a shorter and faster method. It needs virt-builder 1.26 which is in Fedora >= 20. You don't need to (and should not) run it as root. 
virt-builder centos-6 \ --size 10G \ --format qcow2 \ --root-password password:123456 \ --run-command 'rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm https://yum.puppetlabs.com/el/6/products/x86_64/puppetlabs-release-6-7.noarch.rpm' \ --update \ --install cloud-utils,cloud-init,parted,git,puppet \ --edit '/etc/cloud/cloud.cfg: s/(cloud_init_modules)/$1\n - resolv-conf/' \ --edit '/etc/ssh/sshd_config: s/.*PermitRootLogin.*/PermitRootLogin yes/; s/.*PasswordAuthentication.*/PasswordAuthentication yes/' Well there's some puppet configuration that I got bored converting, but you can use --edit or --run-command or --firstboot-command to do essentially arbitrary customization. There's much more information in the manual. I guarantee this will run at least an order of magnitude faster or your money back. Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com virt-top is 'top' for virtual machines. Tiny program with many powerful monitoring features, net stats, disk stats, logging, etc. http://people.redhat.com/~rjones/virt-top From bderzhavets at hotmail.com Sat May 17 17:45:15 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sat, 17 May 2014 13:45:15 -0400 Subject: [Rdo-list] Failure to run yum update on F20 due to openstack-neutron-2013.2.3-2 dependency problem In-Reply-To: <53775450.3030703@redhat.com> References: , <53775450.3030703@redhat.com> Message-ID: > Please provide feedback on that to expediate the propagation. Did #rpm -Uvh http://kojipkgs.fedoraproject.org/packages/python-neutronclient/2.3.4/1.fc20/noarch/python-neutronclient-2.3.4-1.fc20.noarch.rpm # yum -y update With no problems Thanks. B. > Date: Sat, 17 May 2014 13:21:36 +0100 > From: pbrady at redhat.com > To: bderzhavets at hotmail.com > CC: rdo-list at redhat.com > Subject: Re: [Rdo-list] Failure to run yum update on F20 due to openstack-neutron-2013.2.3-2 dependency problem > > On 05/17/2014 03:56 AM, Boris Derzhavets wrote: > > # yum update > > > > ---> Package python-neutron.noarch 0:2013.2.3-2.fc20 will be an update > > --> Processing Dependency: python-neutronclient >= 2.3.4 for package: python-neutron-2013.2.3-2.fc20.noarch > > --> Finished Dependency Resolution > > Error: Package: python-neutron-2013.2.3-2.fc20.noarch (updates) > > Requires: python-neutronclient >= 2.3.4 > > Installed: python-neutronclient-2.3.1-3.fc20.noarch (@updates) > > python-neutronclient = 2.3.1-3.fc20 > > Available: python-neutronclient-2.3.1-2.fc20.noarch (fedora) > > python-neutronclient = 2.3.1-2.fc20 > > You could try using --skip-broken to work around the problem > > You could try running: rpm -Va --nofiles --nodigest > > > > # yum update --skip-broken > > > > Packages skipped because of dependency problems: > > openstack-neutron-2013.2.3-2.fc20.noarch from updates > > openstack-neutron-openvswitch-2013.2.3-2.fc20.noarch from updates > > python-eventlet-0.14.0-1.fc20.noarch from updates > > python-greenlet-0.4.2-1.fc20.x86_64 from updates > > python-neutron-2013.2.3-2.fc20.noarch from updates > > > > Thanks. > > B. > > The new neutronclient is needed for http://pad.lv/1280941 apparently. 
> > For now please install manually like: > rpm -Uvh http://kojipkgs.fedoraproject.org/packages/python-neutronclient/2.3.4/1.fc20/noarch/python-neutronclient-2.3.4-1.fc20.noarch.rpm > > I've submitted the update now: > https://admin.fedoraproject.org/updates/python-neutronclient-2.3.4-1.fc20 > Please provide feedback on that to expediate the propagation. > > sorry for trouble. > > P?draig. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Sat May 17 18:20:35 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sat, 17 May 2014 14:20:35 -0400 Subject: [Rdo-list] Attempt to setup Two Node IceHouse Controller+Compute Neutron OVS&VLAN on CentOS 6.5 In-Reply-To: References: Message-ID: /var/log/neutron/server.log is attached. From: bderzhavets at hotmail.com To: rdo-list at redhat.com Date: Sat, 17 May 2014 11:05:39 -0400 Subject: [Rdo-list] Attempt to setup Two Node IceHouse Controller+Compute Neutron OVS&VLAN on CentOS 6.5 Two KVMs created , each one having 2 virtual NICs (eth0,eth1) Answer file here http://textuploader.com/9a32 Openstack-status here http://textuploader.com/9a3e Looks not bad , except dead Neutron-Server Neutron.conf here http://textuploader.com/9aiy -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: server.log.gz Type: application/x-gzip Size: 73484 bytes Desc: not available URL: From bderzhavets at hotmail.com Sat May 17 22:57:48 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sat, 17 May 2014 18:57:48 -0400 Subject: [Rdo-list] Attempt to setup Two Node IceHouse Controller+Compute Neutron OVS&VLAN on CentOS 6.5 (3) In-Reply-To: References: , Message-ID: I removed one line from neutron.conf :- service_plugins =neutron.services.loadbalancer.plugin.LoadBalancerPlugin Restarted neutron-server OK. Got working TwoNode Controller+Compute Neutron OVS&VLAN IceHouse on CentOS 6.5. Answer-file is the same as for Havana same setup Sorry for noise Boris From: bderzhavets at hotmail.com To: rdo-list at redhat.com Date: Sat, 17 May 2014 14:20:35 -0400 Subject: Re: [Rdo-list] Attempt to setup Two Node IceHouse Controller+Compute Neutron OVS&VLAN on CentOS 6.5 /var/log/neutron/server.log is attached. From: bderzhavets at hotmail.com To: rdo-list at redhat.com Date: Sat, 17 May 2014 11:05:39 -0400 Subject: [Rdo-list] Attempt to setup Two Node IceHouse Controller+Compute Neutron OVS&VLAN on CentOS 6.5 Two KVMs created , each one having 2 virtual NICs (eth0,eth1) Answer file here http://textuploader.com/9a32 Openstack-status here http://textuploader.com/9a3e Looks not bad , except dead Neutron-Server Neutron.conf here http://textuploader.com/9aiy _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Mon May 19 07:06:33 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 19 May 2014 12:36:33 +0530 Subject: [Rdo-list] enable dhcp on public network In-Reply-To: References: Message-ID: <20140519070633.GN6868@tesla.pnq.redhat.com> On Tue, May 13, 2014 at 12:54:22PM +0200, Victor Barba wrote: > Hi, > > This is my first post. 
Then forgive me if this is off-topic for this list > and ignore it :) This is the right list, assuming you're using RDO packages :-) > > I need to assign public ips directly to my instances (not using floating > ips). The packstack installation out-of-the-box do not enable dhcp on the > public_net and then the ips are not assigned to the instances. How could I > solve this? > > To be clear I need this: --------- eth0 (192.168.66.1) > | > | (br0 - 192.168.55.1) ----------------- VM (192.168.55.2) > VM > (192.168.55.3) > > VM get ip by dhcp and gw is 192.168.55.1 > > eth0 and br0 have ip_forwarding enabled. > > Thank you in advance. I haven't tried this myself, but maybe you can try Flat networking[1] in combination with a provider network? How to configure it is here[2]. [1] http://docs.openstack.org/havana/install-guide/install/apt/content/section_use-cases-single-flat.html [2] http://docs.openstack.org/havana/install-guide/install/apt/content/demo_flat_logical_network_config.html -- /kashyap From ihrachys at redhat.com Mon May 19 11:18:00 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 19 May 2014 13:18:00 +0200 Subject: Re: [Rdo-list] Attempt to setup Two Node IceHouse Controller+Compute Neutron OVS&VLAN on CentOS 6.5 In-Reply-To: References: Message-ID: <5379E868.1010103@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 You've hit a known bug in packaged configuration files that should be fixed as of openstack-neutron-2014.1-16. Cheers, /Ihar On 17/05/14 17:05, Boris Derzhavets wrote: > Two KVMs created , each one having 2 virtual NICs (eth0,eth1) > Answer file here http://textuploader.com/9a32 > > Openstack-status here http://textuploader.com/9a3e > > Looks not bad , except dead Neutron-Server Neutron.conf here > http://textuploader.com/9aiy > > Stack trace in /var/log/neutron.log
> > 2014-05-17 18:32:12.138 9365 INFO > neutron.openstack.common.rpc.common [-] Connected to AMQP server on > 192.168.122.127:5672 2014-05-17 18:32:12.138 9365 ERROR > neutron.openstack.common.rpc.common [-] Returning exception > 'NoneType' object is not callable to caller 2014-05-17 18:32:12.138 > 9365 ERROR neutron.openstack.common.rpc.common [-] ['Traceback > (most recent call last):\n', ' File > "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/amqp.py", > line 462, in _process_data\n **args)\n', ' File > "/usr/lib/python2.6/site-packages/neutron/common/rpc.py", line 45, > in dispatch\n neutron_ctxt, version, method, namespace, > **kwargs)\n', ' File > "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/dispatcher.py", > line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt, > **kwargs)\n', ' File > "/usr/lib/python2.6/site-packages/neutron/db/dhcp_rpc_base.py", > line 92, in get_active_networks_info\n networks = > self._get_active_networks(context, **kwargs)\n', ' File > "/usr/lib/python2.6/site-packages/neutron/db/dhcp_rpc_base.py", > line 38, in _get_active_networks\n plugin = > manager.NeutronManager.get_plugin()\n', ' File > "/usr/lib/python2.6/site-packages/neutron/manager.py", line 211, in > get_plugin\n return cls.get_instance().plugin\n', ' File > "/usr/lib/python2.6/site-packages/neutron/manager.py", line 206, in > get_instance\n cls._create_instance()\n', ' File > "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", > line 249, in inner\n return f(*args, **kwargs)\n', ' File > "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__\n > self.gen.throw(type, value, traceback)\n', ' File > "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", > line 212, in lock\n yield sem\n', ' File > "/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py", > line 249, in inner\n return f(*args, **kwargs)\n', ' File > "/usr/lib/python2.6/site-packages/neutron/manager.py", line 200, in > _create_instance\n cls._instance = cls()\n', ' File > "/usr/lib/python2.6/site-packages/neutron/manager.py", line 110, in > __init__\n LOG.info(_("Loading core plugin: %s"), > plugin_provider)\n', "TypeError: 'NoneType' object is not > callable\n"] > > > > _______________________________________________ Rdo-list mailing > list Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBCgAGBQJTeehoAAoJEC5aWaUY1u57u4oIAJnVYbm5gFRCb78WJAM1LduU Ss+KOuvaq/sgIzjRbxYtIchKTacJ3MBhZ4kbzymHxZiswaLSOO5n+qtK2sfGO/Q/ dKyjgu8dPzM9yMMGj3mYIhCBArRBt7V7vP6nkJh1C30rhDiB10j3Df0FQMGGwmM1 u+JCNwCdjfdckeJ2tXh3wS9ArP+asp4TLP3RM3N9bIHg8Wk5WI5w8/5912+PEei2 tapfMUJxXi9pYpQN+iP94nEfK0y2cvq4B+WcEmvf7nPlTL83pnISFzB8HQdjbbGp M9k2BKpUSFxh6OBXx/LeKYUT1Hrb14yj5zaRjbzbOA1Vgq80eNx7AngE5y9ctT4= =GF+W -----END PGP SIGNATURE----- From P at draigbrady.com Mon May 19 11:29:05 2014 From: P at draigbrady.com (=?UTF-8?B?UMOhZHJhaWcgQnJhZHk=?=) Date: Mon, 19 May 2014 12:29:05 +0100 Subject: [Rdo-list] [Openstack-operators] Installing Openstack IceHouse by RDO. Cinder database in empty. In-Reply-To: References: Message-ID: <5379EB01.8050507@draigBrady.com> On 05/19/2014 09:27 AM, ?? wrote: > I installed Openstack IceHouse followed by http://openstack.redhat.com/Quickstart. There is no error message when I run "packstack --allinone". 
But the database of cinder is empty > mysql> use cinder > Database changed > mysql> show tables; > Empty set (0.00 sec) > Could someone gives me some advice? Thanks a lot. Weird that no errors were reported. Is there anything of note in, or could you attach: /var/tmp/packstack/*/manifests/*cinder.pp.log Let's analyse to see if we can adjust anything allowing us to rerun packstack --answer-file=... to sync up the cinder DB. Hopefully we won't need this but just for completeness to init the DB outside of packstack on RDO you can do: openstack-db --service cinder --init Passwords for that can be seen with: grep 'PW' /root/keystonerc_admin I've CC'd the RDO specific mailing list. thanks, P?draig. From ayoung at redhat.com Tue May 20 03:39:16 2014 From: ayoung at redhat.com (Adam Young) Date: Mon, 19 May 2014 23:39:16 -0400 Subject: [Rdo-list] LDAP configuration In-Reply-To: <20140516061010.GJ6868@tesla.pnq.redhat.com> References: <5374C72F.8090204@leidos.com> <9E554D68-AD02-4082-A3D2-E873B8F1379C@soe.ucsc.edu> <20140516061010.GJ6868@tesla.pnq.redhat.com> Message-ID: <537ACE64.4020400@redhat.com> On 05/16/2014 02:13 AM, Kashyap Chamarthy wrote: > [Adding Adam Young and Robert Crittenden, as they may have some > suggestions.] > > On Thu, May 15, 2014 at 09:02:56AM -0700, Erich Weiler wrote: >> I second this request - I'm also extremely interested in plugging >> keystone into an existing LDAP DIT. I was hoping that I could use >> pre-existing accounts in LDAP and maybe just add some attributes or >> something along those lines for roles, tenants, etc... >> >> Is that how it works? Pretty much: LDAP should be for Users and Groups, and the rest in SQL. You do need service users, though, which can be an issue in some organizations. > I haven't tried LDAP w/ Keystone yet, but here are some references that > might come in handy: > > - Configuring Keystone for LDAP backend[1] > - LDAP configuration notes for Keystone from Grizzly release[2][3] > - Keystone integration w/ FreeIPA project where Tenants, and Roles are managed > by Keystone > > > [1] http://docs.openstack.org/admin-guide-cloud/content/configuring-keystone-for-ldap-backend.html > [2] http://docs.openstack.org/grizzly/openstack-compute/admin/content/configuring-keystone-for-ldap-backend.html > [3] http://docs.openstack.org/grizzly/openstack-compute/admin/content/reference-for-ldap-config-options.html > [4] http://openstack.redhat.com/Keystone_integration_with_IDM > >>> On May 15, 2014, at 6:54 AM, "Devine, Patrick D." >>> wrote: >>> >>> All, >>> >>> I have deployed the Havana version of Openstack via Foreman. However >>> now I want to switch Keystone to utilize my LDAP server for >>> authentication vs MySQL. I have followed the instructions for >>> configuring the keystone.conf to point at my server but I haven't >>> seen any documentation on how the LDAP should be populated. For >>> example do I have to re-create all the user accounts for each >>> openstack module? I get that I need to have a people, role, and >>> project set up but there is nothing about what users are needed, how >>> they relate to the project and roles. >>> >>> Has anyone got their Openstack working with LDAP and if so what does >>> you ldap look like? >>> > From P at draigbrady.com Tue May 20 09:56:09 2014 From: P at draigbrady.com (=?UTF-8?B?UMOhZHJhaWcgQnJhZHk=?=) Date: Tue, 20 May 2014 10:56:09 +0100 Subject: [Rdo-list] [Openstack-operators] Installing Openstack IceHouse by RDO. Cinder database in empty. 
In-Reply-To: <5379EB01.8050507@draigBrady.com> References: <5379EB01.8050507@draigBrady.com> Message-ID: <537B26B9.4060704@draigBrady.com> On 05/19/2014 12:29 PM, P?draig Brady wrote: > On 05/19/2014 09:27 AM, ?? wrote: >> I installed Openstack IceHouse followed by http://openstack.redhat.com/Quickstart. There is no error message when I run "packstack --allinone". But the database of cinder is empty >> mysql> use cinder >> Database changed >> mysql> show tables; >> Empty set (0.00 sec) >> Could someone gives me some advice? Thanks a lot. > > Weird that no errors were reported. > Is there anything of note in, or could you attach: > > /var/tmp/packstack/*/manifests/*cinder.pp.log > > Let's analyse to see if we can adjust anything allowing > us to rerun packstack --answer-file=... to sync up the cinder DB. > > Hopefully we won't need this but just for completeness > to init the DB outside of packstack on RDO you can do: > openstack-db --service cinder --init > Passwords for that can be seen with: > grep 'PW' /root/keystonerc_admin > > I've CC'd the RDO specific mailing list. This was resolved as a local issue due to incorrect setting in /etc/hosts. Cinder at least seems to be dependent on the correct DNS entry being present for the host being installed. thanks, P?draig. From bderzhavets at hotmail.com Tue May 20 15:00:36 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Tue, 20 May 2014 11:00:36 -0400 Subject: [Rdo-list] Network is slow on Compute Node of Two Node IceHouse Neutron OVS&VLAN CentOS 6.5 Cluster In-Reply-To: <537B26B9.4060704@draigBrady.com> References: , <5379EB01.8050507@draigBrady.com>, <537B26B9.4060704@draigBrady.com> Message-ID: Posted at ask.openstack.org https://ask.openstack.org/en/question/30090/network-is-slow-on-compute-node-of-two-node-icehouse-neutron-ovsvlan-centos-65-cluster/ Thanks Boris. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eberg at rubensteintech.com Tue May 20 17:17:21 2014 From: eberg at rubensteintech.com (Eric Berg) Date: Tue, 20 May 2014 13:17:21 -0400 Subject: [Rdo-list] Can't ping/ssh to new instance Message-ID: <537B8E21.90300@rubensteintech.com> I've done a fresh install of RDO using packstack on a single host like this: packstack --allinone --provision-all-in-one-ovs-bridge=n And then followed the instructions here: http://openstack.redhat.com/Neutron_with_existing_external_network I've also generally followed Lars's approach from this video with the same lack of connectivity: https://www.youtube.com/watch?v=DGf-ny25OAw My public network is 192.168.20.0/24. But I'm not able to ping or ssh from my 1902.168.0.0 network, the host running OpenStack is at 192.168.0.37. My instance is up and running with a 10.0.0.2 IP and 192.168.20.4 floating IP. I can ping 192.168.20.3, but not 192.168.20.4. I can use the net namespace approach to log into my cirros instance, but can't get to 192.168.20.0/24 hosts. This is my first OpenStack install. I'm a little confused at how a stock installation (based on packstack) could somehow not include the ability to access the VMs from the network on which the OS compute host is running. Any help troubleshooting this would be greatly appreciated. Eric -- Eric Berg Sr. 
Software Engineer Rubenstein Technology Group eberg at rubensteintech.com www.rubensteintech.com From kchamart at redhat.com Wed May 21 08:24:14 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 21 May 2014 13:54:14 +0530 Subject: [Rdo-list] Can't ping/ssh to new instance In-Reply-To: <537B8E21.90300@rubensteintech.com> References: <537B8E21.90300@rubensteintech.com> Message-ID: <20140521082414.GR6868@tesla.pnq.redhat.com> On Tue, May 20, 2014 at 01:17:21PM -0400, Eric Berg wrote: > I've done a fresh install of RDO using packstack on a single host like this: > > packstack --allinone --provision-all-in-one-ovs-bridge=n > > And then followed the instructions here: > > http://openstack.redhat.com/Neutron_with_existing_external_network > > I've also generally followed Lars's approach from this video with the same > lack of connectivity: https://www.youtube.com/watch?v=DGf-ny25OAw > > My public network is 192.168.20.0/24. > > But I'm not able to ping or ssh from my 1902.168.0.0 network, the host > running OpenStack is at 192.168.0.37. > > My instance is up and running with a 10.0.0.2 IP and 192.168.20.4 floating > IP. > > I can ping 192.168.20.3, but not 192.168.20.4. > > I can use the net namespace approach to log into my cirros instance, but > can't get to 192.168.20.0/24 hosts. That at-sounds you've got most of it right. You're not able to SSH via floating IPs. Couple of things: - You might want to check if your iptables rules are correct. i.e. when you run something like this, you should see SNAT/DNAT rules: $ ip netns exec qrouter-2c7ba7dc-0101-417a-b76d-1cae17ae654e iptables -t nat -L -nv | grep NAT 0 0 DNAT all -- * * 0.0.0.0/0 192.169.142.12 to:30.0.0.26 0 0 DNAT all -- * * 0.0.0.0/0 192.169.142.13 to:30.0.0.25 26 1704 ACCEPT all -- !qg-fb9ff0ad-56 !qg-fb9ff0ad-56 0.0.0.0/0 0.0.0.0/0 ! ctstate DNAT 0 0 DNAT all -- * * 0.0.0.0/0 192.169.142.12 to:30.0.0.26 5 324 DNAT all -- * * 0.0.0.0/0 192.169.142.13 to:30.0.0.25 0 0 SNAT all -- * * 30.0.0.26 0.0.0.0/0 to:192.169.142.12 0 0 SNAT all -- * * 30.0.0.25 0.0.0.0/0 to:192.169.142.13 0 0 SNAT all -- * * 30.0.0.0/24 0.0.0.0/0 to:192.169.142.10 - Ensure you have security group rules for SSH are set correctly (you can enumerate them by doing '$ neutron security-group-rule-list') I recently did a 2-node IceHouse install (but this is manual setup), here[1] are my configurations of Nova/Neutron and iptables rules (scroll down to bottom). > This is my first OpenStack install. I'm a little confused at how a > stock installation (based on packstack) could somehow not include the > ability to access the VMs from the network on which the OS compute > host is running. > > Any help troubleshooting this would be greatly appreciated. [1] http://kashyapc.fedorapeople.org/virt/openstack/rdo/IceHouse-Nova-Neutron-ML2-GRE-OVS.txt -- /kashyap From eberg at rubensteintech.com Wed May 21 14:25:06 2014 From: eberg at rubensteintech.com (Eric Berg) Date: Wed, 21 May 2014 10:25:06 -0400 Subject: [Rdo-list] Can't ping/ssh to new instance In-Reply-To: <20140521082414.GR6868@tesla.pnq.redhat.com> References: <537B8E21.90300@rubensteintech.com> <20140521082414.GR6868@tesla.pnq.redhat.com> Message-ID: <537CB742.4010504@rubensteintech.com> Thanks, Kashyap. I have made some progress in that I was able to connect to my cirros image from the public network, but only from the host on which openstack is installed and on which the instance is running. 
At the end of Lars's video, mentioned below, he assigns a gateway ip address to the public (192.168.20.0/24) network to the br-ex device, and then adds a rule that I translated into this command: iptables -t nat -I POSTROUTING 1 -s 192.168.20.0/24 -j MASQUERADE but this breaks the connectivity, so I removed that so that I could still ssh into the cirros instance from my physical host. Currently, I'm able to log in from the openstack physical host, but not from the rest of my 192.168.0.0 network. My networking is a little bit rusty, so I'm not sure what the next step is to allow me to log into the instances on the 192.168.20.0/24 network from existing hosts on the 192.168.0.0 network. BTW, is there a script that will provide a dump of the configurations like the one for which you provided a URL below? Thanks again. Eric On 5/21/14, 4:24 AM, Kashyap Chamarthy wrote: > On Tue, May 20, 2014 at 01:17:21PM -0400, Eric Berg wrote: >> I've done a fresh install of RDO using packstack on a single host like this: >> >> packstack --allinone --provision-all-in-one-ovs-bridge=n >> >> And then followed the instructions here: >> >> http://openstack.redhat.com/Neutron_with_existing_external_network >> >> I've also generally followed Lars's approach from this video with the same >> lack of connectivity: https://www.youtube.com/watch?v=DGf-ny25OAw >> >> My public network is 192.168.20.0/24. >> >> But I'm not able to ping or ssh from my 1902.168.0.0 network, the host >> running OpenStack is at 192.168.0.37. >> >> My instance is up and running with a 10.0.0.2 IP and 192.168.20.4 floating >> IP. >> >> I can ping 192.168.20.3, but not 192.168.20.4. >> >> I can use the net namespace approach to log into my cirros instance, but >> can't get to 192.168.20.0/24 hosts. > That at-sounds you've got most of it right. You're not able to SSH via > floating IPs. > > Couple of things: > > - You might want to check if your iptables rules are correct. i.e. when > you run something like this, you should see SNAT/DNAT rules: > > $ ip netns exec qrouter-2c7ba7dc-0101-417a-b76d-1cae17ae654e iptables -t nat -L -nv | grep NAT > 0 0 DNAT all -- * * 0.0.0.0/0 192.169.142.12 to:30.0.0.26 > 0 0 DNAT all -- * * 0.0.0.0/0 192.169.142.13 to:30.0.0.25 > 26 1704 ACCEPT all -- !qg-fb9ff0ad-56 !qg-fb9ff0ad-56 0.0.0.0/0 0.0.0.0/0 ! ctstate DNAT > 0 0 DNAT all -- * * 0.0.0.0/0 192.169.142.12 to:30.0.0.26 > 5 324 DNAT all -- * * 0.0.0.0/0 192.169.142.13 to:30.0.0.25 > 0 0 SNAT all -- * * 30.0.0.26 0.0.0.0/0 to:192.169.142.12 > 0 0 SNAT all -- * * 30.0.0.25 0.0.0.0/0 to:192.169.142.13 > 0 0 SNAT all -- * * 30.0.0.0/24 0.0.0.0/0 to:192.169.142.10 > > > - Ensure you have security group rules for SSH are set correctly (you > can enumerate them by doing '$ neutron security-group-rule-list') > > I recently did a 2-node IceHouse install (but this is manual setup), > here[1] are my configurations of Nova/Neutron and iptables rules (scroll > down to bottom). > > >> This is my first OpenStack install. I'm a little confused at how a >> stock installation (based on packstack) could somehow not include the >> ability to access the VMs from the network on which the OS compute >> host is running. >> >> Any help troubleshooting this would be greatly appreciated. > > [1] http://kashyapc.fedorapeople.org/virt/openstack/rdo/IceHouse-Nova-Neutron-ML2-GRE-OVS.txt > -- Eric Berg Sr. 
Software Engineer Rubenstein Technology Group 55 Broad Street, 14th Floor New York, NY 10004-2501 (212) 518-6400 (212) 518-6467 fax eberg at rubensteintech.com www.rubensteintech.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Wed May 21 17:31:00 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 21 May 2014 13:31:00 -0400 (EDT) Subject: [Rdo-list] Can't ping/ssh to new instance In-Reply-To: <537CB742.4010504@rubensteintech.com> References: <537B8E21.90300@rubensteintech.com> <20140521082414.GR6868@tesla.pnq.redhat.com> <537CB742.4010504@rubensteintech.com> Message-ID: <1312910672.7838414.1400693460854.JavaMail.zimbra@redhat.com> > Thanks, Kashyap. > > I have made some progress in that I was able to connect to my cirros > image from the public network, but only from the host on which openstack > is installed and on which the instance is running. > > At the end of Lars's video, mentioned below, he assigns a gateway ip > address to the public (192.168.20.0/24) network to the br-ex device, and > then adds a rule that I translated into this command: > > iptables -t nat -I POSTROUTING 1 -s 192.168.20.0/24 -j MASQUERADE > > > but this breaks the connectivity, so I removed that so that I could > still ssh into the cirros instance from my physical host. > > Currently, I'm able to log in from the openstack physical host, but not > from the rest of my 192.168.0.0 network. > > My networking is a little bit rusty, so I'm not sure what the next step > is to allow me to log into the instances on the 192.168.20.0/24 network > from existing hosts on the 192.168.0.0 network. I'm only just shooting in the dark -- it could possibly be some routing issues as you're using default libvirt network for Floating IP. FWIW, I usually create a non-default libvirt network like that[1] for OpenStack setups (as my Controller/Compute nodes themselves are virtual machines). [1] http://kashyapc.fedorapeople.org/virt/create-a-new-libvirt-bridge.txt > > BTW, is there a script that will provide a dump of the configurations > like the one for which you provided a URL below? No, I just generated it manually and indented it a bit for readability. But a trivial shell script can be written to that effect. /kashyap > > On 5/21/14, 4:24 AM, Kashyap Chamarthy wrote: > > On Tue, May 20, 2014 at 01:17:21PM -0400, Eric Berg wrote: > >> I've done a fresh install of RDO using packstack on a single host like > >> this: > >> > >> packstack --allinone --provision-all-in-one-ovs-bridge=n > >> > >> And then followed the instructions here: > >> > >> http://openstack.redhat.com/Neutron_with_existing_external_network > >> > >> I've also generally followed Lars's approach from this video with the same > >> lack of connectivity: https://www.youtube.com/watch?v=DGf-ny25OAw > >> > >> My public network is 192.168.20.0/24. > >> > >> But I'm not able to ping or ssh from my 1902.168.0.0 network, the host > >> running OpenStack is at 192.168.0.37. > >> > >> My instance is up and running with a 10.0.0.2 IP and 192.168.20.4 floating > >> IP. > >> > >> I can ping 192.168.20.3, but not 192.168.20.4. > >> > >> I can use the net namespace approach to log into my cirros instance, but > >> can't get to 192.168.20.0/24 hosts. > > That at-sounds you've got most of it right. You're not able to SSH via > > floating IPs. > > > > Couple of things: > > > > - You might want to check if your iptables rules are correct. i.e. 
when > > you run something like this, you should see SNAT/DNAT rules: > > > > $ ip netns exec qrouter-2c7ba7dc-0101-417a-b76d-1cae17ae654e iptables > > -t nat -L -nv | grep NAT > > 0 0 DNAT all -- * * 0.0.0.0/0 > > 192.169.142.12 to:30.0.0.26 > > 0 0 DNAT all -- * * 0.0.0.0/0 > > 192.169.142.13 to:30.0.0.25 > > 26 1704 ACCEPT all -- !qg-fb9ff0ad-56 !qg-fb9ff0ad-56 > > 0.0.0.0/0 0.0.0.0/0 ! ctstate DNAT > > 0 0 DNAT all -- * * 0.0.0.0/0 > > 192.169.142.12 to:30.0.0.26 > > 5 324 DNAT all -- * * 0.0.0.0/0 > > 192.169.142.13 to:30.0.0.25 > > 0 0 SNAT all -- * * 30.0.0.26 > > 0.0.0.0/0 to:192.169.142.12 > > 0 0 SNAT all -- * * 30.0.0.25 > > 0.0.0.0/0 to:192.169.142.13 > > 0 0 SNAT all -- * * 30.0.0.0/24 > > 0.0.0.0/0 to:192.169.142.10 > > > > > > - Ensure you have security group rules for SSH are set correctly (you > > can enumerate them by doing '$ neutron security-group-rule-list') > > > > I recently did a 2-node IceHouse install (but this is manual setup), > > here[1] are my configurations of Nova/Neutron and iptables rules (scroll > > down to bottom). > > > > > >> This is my first OpenStack install. I'm a little confused at how a > >> stock installation (based on packstack) could somehow not include the > >> ability to access the VMs from the network on which the OS compute > >> host is running. > >> > >> Any help troubleshooting this would be greatly appreciated. > > > > [1] > > http://kashyapc.fedorapeople.org/virt/openstack/rdo/IceHouse-Nova-Neutron-ML2-GRE-OVS.txt > > > > -- > Eric Berg > Sr. Software Engineer > Rubenstein Technology Group > 55 Broad Street, 14th Floor > New York, NY 10004-2501 > > (212) 518-6400 > (212) 518-6467 fax > eberg at rubensteintech.com > www.rubensteintech.com > > From daniel at speichert.pl Wed May 21 17:42:14 2014 From: daniel at speichert.pl (Daniel Speichert) Date: Wed, 21 May 2014 13:42:14 -0400 Subject: [Rdo-list] Creating nova instance without backing image Message-ID: <537CE576.4000600@speichert.pl> Hi, In our OpenStack installation (Havana, RDO), we use Ceph as storage as Nova, Glance and Cinder backend. On compute nodes, instance's disks are kept on Ceph but base images are copied from Glance (Ceph) to local disk. This works well with small images of OS's. However, if we upload a big image (e.g. migrating bare hardware system to the cloud) that is only used by one instance, the image becomes the backing file stored locally on the compute node. Here, the compute node hit disk space limit because we expect all the data to reside on Ceph. Is there any way to tell Nova to not use backing image for an instance? Maybe there exists a special image property to set that tells Nova to copy the whole image, which would put it entirely in Ceph without a backing file? I hope this use case makes sense for and I'd appreciate if you had any suggestions how to resolve this issue. 
Best, Daniel From lars at redhat.com Wed May 21 17:54:41 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 21 May 2014 13:54:41 -0400 Subject: [Rdo-list] neutron-openvswitch-agent resetting MAC address on my bridges Message-ID: <20140521175441.GC11456@redhat.com> Someone reported some problems running through: http://openstack.redhat.com/Neutron_with_existing_external_network I thought I would walk through it myself first to make sure that (a) it was correct and (b) that I remembered all the steps, but I've run into a puzzling problem: My target system is itself an OpenStack instance, which means that once br-ex is configured it really need to have the MAC address that was previously exposed by eth0, because otherwise traffic will be blocked by the MAC filtering rules attached to the instance's tap device: -A neutron-openvswi-s55439d7d-a -s 10.0.0.8/32 -m mac --mac-source FA:16:3E:EF:91:EC -j RETURN -A neutron-openvswi-s55439d7d-a -j DROP I have set MACADDR in br-ex, which works just fine until I restart neutron-openvswitch-agent (or, you know, reboot the instance), at which point the MAC address on br-ex changes any everything stops working. I've been poking through the code for a bit and I can't find either the source or an explanation for this behavior. It would be great if a wiser set of eyes could shed some light on this. Cheers, -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From lars at redhat.com Wed May 21 18:44:22 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 21 May 2014 14:44:22 -0400 Subject: [Rdo-list] Creating nova instance without backing image In-Reply-To: <537CE576.4000600@speichert.pl> References: <537CE576.4000600@speichert.pl> Message-ID: <20140521184422.GE11456@redhat.com> On Wed, May 21, 2014 at 01:42:14PM -0400, Daniel Speichert wrote: > However, if we upload a big image (e.g. migrating bare hardware system > to the cloud) that is only used by one instance, the image becomes the > backing file stored locally on the compute node. Here, the compute node > hit disk space limit because we expect all the data to reside on Ceph. > > Is there any way to tell Nova to not use backing image for an instance? I don't know the answer to that question...but since you're already using Ceph, couldn't you simply put your instance storage on Ceph (or some other remote/cluster filesystem) as well? This would still mean that for "single use" images you're using twice the space, but it would remove the space limitations of the local filesystem. -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From pbrady at redhat.com Wed May 21 20:11:19 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Wed, 21 May 2014 21:11:19 +0100 Subject: [Rdo-list] Creating nova instance without backing image In-Reply-To: <537CE576.4000600@speichert.pl> References: <537CE576.4000600@speichert.pl> Message-ID: <537D0867.3000102@redhat.com> On 05/21/2014 06:42 PM, Daniel Speichert wrote: > Hi, > > In our OpenStack installation (Havana, RDO), we use Ceph as storage as > Nova, Glance and Cinder backend. 
> > On compute nodes, instance's disks are kept on Ceph but base images are > copied from Glance (Ceph) to local disk. This works well with small > images of OS's. > > However, if we upload a big image (e.g. migrating bare hardware system > to the cloud) that is only used by one instance, the image becomes the > backing file stored locally on the compute node. Here, the compute node > hit disk space limit because we expect all the data to reside on Ceph. > > Is there any way to tell Nova to not use backing image for an instance? > Maybe there exists a special image property to set that tells Nova to > copy the whole image, which would put it entirely in Ceph without a > backing file? > > I hope this use case makes sense for and I'd appreciate if you had any > suggestions how to resolve this issue. > > Best, > Daniel That's an interesting though unusual case. So the traditional case is where one stores ideally small image templates in glance. Is there any way to use something like virt-sparsify to create a small(er) cow2 image in glance? With the default nova config that should result in sparse files on the compute node (details at http://www.pixelbeat.org/docs/openstack_libvirt_images/) There has been consideration for more efficient interaction between glance and nova at: https://blueprints.launchpad.net/glance/+spec/multiple-image-locations https://blueprints.launchpad.net/nova/+spec/nova-image-zero-copy That would allow glance to essentially just send an image location to nova, and then nova (plugins) could process as desired. However it seems that functionality is not yet completed. thanks, P?draig. From iida.koji at lab.ntt.co.jp Thu May 22 07:37:03 2014 From: iida.koji at lab.ntt.co.jp (Koji IIDA) Date: Thu, 22 May 2014 16:37:03 +0900 Subject: [Rdo-list] python-kombu 1.1.3 is not recommended version of Icehouse Message-ID: <537DA91F.9000706@lab.ntt.co.jp> Hi, I'm just installed RDO Icehouse on my CentOS 6.5 box using packstack. I found that the version of python-kombu does not satisfy the requirement of Icehouse. python-kombu installed by packstack: python-kombu.noarch 1.1.3-2.el6 @openstack-icehouse nova (Icehouse ver) requirement.txt: kombu>=2.4.8 (*) So, I want to update kombu to version 2.4.8 or above. Any suggestions? (*1) https://github.com/openstack/nova/blob/stable/icehouse/requirements.txt#L9 --- Koji IIDA From ihrachys at redhat.com Thu May 22 08:29:15 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Thu, 22 May 2014 10:29:15 +0200 Subject: [Rdo-list] python-kombu 1.1.3 is not recommended version of Icehouse In-Reply-To: <537DA91F.9000706@lab.ntt.co.jp> References: <537DA91F.9000706@lab.ntt.co.jp> Message-ID: <537DB55B.1060302@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 22/05/14 09:37, Koji IIDA wrote: > Hi, > > I'm just installed RDO Icehouse on my CentOS 6.5 box using > packstack. I found that the version of python-kombu does not > satisfy the requirement of Icehouse. > > > python-kombu installed by packstack: python-kombu.noarch > 1.1.3-2.el6 @openstack-icehouse > > nova (Icehouse ver) requirement.txt: kombu>=2.4.8 (*) > > So, I want to update kombu to version 2.4.8 or above. > Do you see any issues with CentOS 6.5 version of the module? > Any suggestions? 
> You can try to install one of RPMs from Koji: http://koji.fedoraproject.org/koji/packageinfo?packageID=12243 > > > (*1) > https://github.com/openstack/nova/blob/stable/icehouse/requirements.txt#L9 > > > > > --- Koji IIDA > > _______________________________________________ Rdo-list mailing > list Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBCgAGBQJTfbVbAAoJEC5aWaUY1u57cNsIAMS9ch/8aGONAo8jL0dkP6w1 B1R16WVC8bslt2C/ADowyLWCpBC+8qeLn7PHEtxlsWo5L4wth8+rhdC0Y2ctCzNI +thxa+qyMipAFj9TBnav7Xl0z8NutBRV33wz9DfQc3b5twWwx7L4NJVHXonPmLnZ no+G7ZM279WyV4TR+v1akgWp0tOtfgWYxJKWwwymqfdBWLP4io9zqaKVPrcIjGMs unMUkMCgNCM9LC5m4fzaxocbjDr+Zgr8zgEeioJXzd/jAXZ24hjBXNwKb5bxq77G Zm1iOuJBj8NVvtrL+WWdbreUa8z069HMZ1YUOVF3FWeES6eYiUHLjc+Cj8nF98Q= =acqI -----END PGP SIGNATURE----- From bderzhavets at hotmail.com Thu May 22 09:31:53 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 22 May 2014 05:31:53 -0400 Subject: [Rdo-list] Network is slow on Compute Node of Two Node IceHouse Neutron OVS&VLAN CentOS 6.5 Cluster In-Reply-To: References: , , <5379EB01.8050507@draigBrady.com>, , <537B26B9.4060704@draigBrady.com>, Message-ID: Running on Compute Node :- # /sbin/ethtool --offload eth1 tx off Fixed a problem for me. I was able launch Ubuntu 14.04 VM on Compute, then run within VM # apt-get update # apt-get -y install links and surf the Net via links ----------------------------------------------------------------------------------------- From: bderzhavets at hotmail.com To: p at draigbrady.com Date: Tue, 20 May 2014 11:00:36 -0400 CC: rdo-list at redhat.com Subject: [Rdo-list] Network is slow on Compute Node of Two Node IceHouse Neutron OVS&VLAN CentOS 6.5 Cluster Posted at ask.openstack.org https://ask.openstack.org/en/question/30090/network-is-slow-on-compute-node-of-two-node-icehouse-neutron-ovsvlan-centos-65-cluster/ Thanks Boris. _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndipanov at redhat.com Thu May 22 10:09:59 2014 From: ndipanov at redhat.com (=?UTF-8?B?Tmlrb2xhIMSQaXBhbm92?=) Date: Thu, 22 May 2014 12:09:59 +0200 Subject: [Rdo-list] [Openstack] rdo havana to icehouse: instances stuck in 'resized or migrated' In-Reply-To: <537B8BAF.3070704@redhat.com> References: <537686F6.5030407@bmrb.wisc.edu> <5379E177.1050304@redhat.com> <537A2E05.7060000@redhat.com> <537A3BE0.7040509@bmrb.wisc.edu> <537B1989.8080701@redhat.com> <537B7DF9.6000806@bmrb.wisc.edu> <537B8BAF.3070704@redhat.com> Message-ID: <537DCCF7.7040802@redhat.com> On 05/20/2014 07:06 PM, Julie Pichon wrote: > On 20/05/14 17:08, Dimitri Maziuk wrote: >> On 05/20/2014 03:59 AM, Julie Pichon wrote: >>> On 19/05/14 18:14, Dimitri Maziuk wrote: >>>> On 05/19/2014 11:15 AM, Julie Pichon wrote: >>>>> >>>>> I had a chat with a Nova developer who pointed me to the following patch >>>>> at https://review.openstack.org/#/c/84755/ , recently merged in Havana >>>>> and included in the latest RDO Havana packages. Resize specifically is >>>>> one of the actions affected by this bug, you might want to check that >>>>> you're running the latest packages on your Havana node(s) and see if >>>>> this might help to resolve the problem? >>>> >>>> [root at irukandji ~]# yum up >>>> ... 
>>>> No Packages marked for Update >>>> >>>> So -- no, that's not it, unless the patch hasn't made it into the rpms yet. >>> >>> Could you provide the version numbers for the openstack-nova packages? >> >> Icehouse node: >> >> [root at squid ~]# rpm -q -a | grep openstack >> openstack-selinux-0.1.3-2.el6ost.noarch >> openstack-nova-api-2014.1-2.el6.noarch >> openstack-utils-2014.1-1.el6.noarch >> openstack-nova-compute-2014.1-2.el6.noarch >> openstack-nova-network-2014.1-2.el6.noarch >> openstack-nova-common-2014.1-2.el6.noarch >> >> Havana node: >> >> [root at irukandji ~]# rpm -q -a | grep openstack >> openstack-nova-common-2013.2.3-1.el6.noarch >> openstack-nova-network-2013.2.3-1.el6.noarch >> openstack-nova-api-2013.2.3-1.el6.noarch >> openstack-selinux-0.1.3-2.el6ost.noarch >> openstack-nova-compute-2013.2.3-1.el6.noarch >> openstack-utils-2013.2-2.el6.noarch >> >> Both nodes are >> >> [root at squid ~]# cat /etc/redhat-release >> CentOS release 6.5 (Final) >> > > Thanks for the information. I'm adding my colleague Nikola to this > thread, he's familiar with Nova and should be better able to help. > > Julie > Hi Dimitri, So for this kind of upgrade to work, you will need to set an RPC version cap so that all your icehouse computes know they are talking to possibly lower version nodes and downgrade their messages. This can be done (as described in much more detail on [1]) using the [upgrade_levels] section of your nova.conf file. Please let me know if this works and if you need further assistance. Thanks, Nikola ?ipanov, SSE - OpenStack @ Red Hat [1] http://openstack.redhat.com/Upgrading_RDO_To_Icehouse From pbrady at redhat.com Thu May 22 13:27:26 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Thu, 22 May 2014 14:27:26 +0100 Subject: [Rdo-list] python-kombu 1.1.3 is not recommended version of Icehouse In-Reply-To: <537DA91F.9000706@lab.ntt.co.jp> References: <537DA91F.9000706@lab.ntt.co.jp> Message-ID: <537DFB3E.6070100@redhat.com> On 05/22/2014 08:37 AM, Koji IIDA wrote: > Hi, > > I'm just installed RDO Icehouse on my CentOS 6.5 box using packstack. > I found that the version of python-kombu does not satisfy the > requirement of Icehouse. > > > python-kombu installed by packstack: > python-kombu.noarch 1.1.3-2.el6 @openstack-icehouse > > nova (Icehouse ver) requirement.txt: > kombu>=2.4.8 (*) > > So, I want to update kombu to version 2.4.8 or above. > > Any suggestions? Note the upstream requirements are often not the same as distro requirements due to varying version combinations and backports. The particular reason kombu requirement was updated upstream was to cater for newer msgpack modules. However the EPEL version of python-msgpack should not have this issue with python-kombu. Details in https://pad.lv/1134575 So does this cause another issue for you? Note EPEL7 is at v2.5.16 (and rawhide is at v3.0.15) so in case it's useful to you I've just rebuilt but not tested v2.5.16 for el6 by doing: fedpkg clone python-kombu git checkout epel7 fedpkg --dist=el6 scratch-build Result is at (for a while): http://koji.fedoraproject.org/koji/taskinfo?taskID=6875634 thanks, P?draig. 
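For reference, the RPC version cap Nikola recommends in the resize/migrate thread above is just a nova.conf setting on the already-upgraded Icehouse nodes. A minimal sketch, using the icehouse-compat value from the RDO upgrade guide (check the guide for the exact value for your release):

    $ sudo openstack-config --set /etc/nova/nova.conf upgrade_levels compute icehouse-compat
    # which leaves the following in /etc/nova/nova.conf:
    #   [upgrade_levels]
    #   compute = icehouse-compat
    $ for svc in api conductor scheduler; do sudo service openstack-nova-$svc restart; done

Once the last Havana compute node has been upgraded the cap can be dropped and the services restarted again.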
From dmaziuk at bmrb.wisc.edu Thu May 22 14:20:57 2014 From: dmaziuk at bmrb.wisc.edu (Dmitri Maziuk) Date: Thu, 22 May 2014 09:20:57 -0500 Subject: [Rdo-list] [Openstack] rdo havana to icehouse: instances stuck in 'resized or migrated' In-Reply-To: <537DCCF7.7040802@redhat.com> References: <537686F6.5030407@bmrb.wisc.edu> <5379E177.1050304@redhat.com> <537A2E05.7060000@redhat.com> <537A3BE0.7040509@bmrb.wisc.edu> <537B1989.8080701@redhat.com> <537B7DF9.6000806@bmrb.wisc.edu> <537B8BAF.3070704@redhat.com> <537DCCF7.7040802@redhat.com> Message-ID: <537E07C9.5000804@bmrb.wisc.edu> On 5/22/2014 5:09 AM, Nikola ?ipanov wrote: > So for this kind of upgrade to work, you will need to set an RPC version > cap so that all your icehouse computes know they are talking to possibly > lower version nodes and downgrade their messages. My original e-mail (which I guess you haven't seen) stated "as per fine manual", so yes, I put "compute=icehouse-compat" in there before restarting the daemons. > Please let me know if this works and if you need further assistance. In another e-mail upthread I mentioned I'm moving my VMs back to plain virsh because for the amount of time (times my pay) I already spent on this we could've had a fully supported rollout of vmware -- and it's still not working as well as virsh. So no, I don't think I'll need any further assistance in the foreseeable future, thank you. Dimitri From afazekas at redhat.com Thu May 22 21:41:19 2014 From: afazekas at redhat.com (Attila Fazekas) Date: Thu, 22 May 2014 17:41:19 -0400 (EDT) Subject: [Rdo-list] Can't ping/ssh to new instance In-Reply-To: <537CB742.4010504@rubensteintech.com> References: <537B8E21.90300@rubensteintech.com> <20140521082414.GR6868@tesla.pnq.redhat.com> <537CB742.4010504@rubensteintech.com> Message-ID: <205237177.13012229.1400794879501.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Eric Berg" > To: "Kashyap Chamarthy" > Cc: rdo-list at redhat.com > Sent: Wednesday, May 21, 2014 4:25:06 PM > Subject: Re: [Rdo-list] Can't ping/ssh to new instance > > Thanks, Kashyap. > > I have made some progress in that I was able to connect to my cirros image > from the public network, but only from the host on which openstack is > installed and on which the instance is running. > > At the end of Lars's video, mentioned below, he assigns a gateway ip address > to the public (192.168.20.0/24) network to the br-ex device, and then adds a > rule that I translated into this command: > > iptables -t nat -I POSTROUTING 1 -s 192.168.20.0/24 -j MASQUERADE > Probably you also need to define a route entry for 192.168.20.0/24 in your switch. In my SOHO switch it can be done on the web interface as 'Advanced Routing' -> 'Static Routing List' Destination Network: 192.168.20.0 Subnet Mask: 255.255.255.0 Default Gateway: For example: 192.168.0.42 Also consider adding an outgoing interface -o dev to your MASQUERADE rule. > but this breaks the connectivity, so I removed that so that I could still ssh > into the cirros instance from my physical host. > > Currently, I'm able to log in from the openstack physical host, but not from > the rest of my 192.168.0.0 network. > > My networking is a little bit rusty, so I'm not sure what the next step is to > allow me to log into the instances on the 192.168.20.0/24 network from > existing hosts on the 192.168.0.0 network. > > BTW, is there a script that will provide a dump of the configurations like > the one for which you provided a URL below? > > Thanks again. 
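Putting Attila's two suggestions together for the addressing used in this thread (floating range 192.168.20.0/24, office LAN 192.168.0.0/24, OpenStack host at 192.168.0.37), the usual shape is roughly the following; em1 is a placeholder for whatever NIC on the OpenStack host faces the LAN:

    # outbound, instance -> LAN/Internet: NAT the floating range only when the
    # traffic actually leaves via the LAN-facing NIC
    $ iptables -t nat -I POSTROUTING 1 -s 192.168.20.0/24 -o em1 -j MASQUERADE

    # inbound, LAN host -> floating IP: the LAN router (or each client) needs to
    # know where 192.168.20.0/24 lives, i.e. a static route such as
    #   192.168.20.0/24 via 192.168.0.37

With the static route in place the other machines on 192.168.0.0/24 can reach the floating IPs directly instead of only from the OpenStack host itself.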
> > Eric > > On 5/21/14, 4:24 AM, Kashyap Chamarthy wrote: > > > > On Tue, May 20, 2014 at 01:17:21PM -0400, Eric Berg wrote: > > > > I've done a fresh install of RDO using packstack on a single host like this: > > packstack --allinone --provision-all-in-one-ovs-bridge=n > > And then followed the instructions here: > http://openstack.redhat.com/Neutron_with_existing_external_network I've also > generally followed Lars's approach from this video with the same > lack of connectivity: https://www.youtube.com/watch?v=DGf-ny25OAw My public > network is 192.168.20.0/24. > > But I'm not able to ping or ssh from my 1902.168.0.0 network, the host > running OpenStack is at 192.168.0.37. > > My instance is up and running with a 10.0.0.2 IP and 192.168.20.4 floating > IP. > > I can ping 192.168.20.3, but not 192.168.20.4. > > I can use the net namespace approach to log into my cirros instance, but > can't get to 192.168.20.0/24 hosts. > That at-sounds you've got most of it right. You're not able to SSH via > floating IPs. > > Couple of things: > > - You might want to check if your iptables rules are correct. i.e. when > you run something like this, you should see SNAT/DNAT rules: > > $ ip netns exec qrouter-2c7ba7dc-0101-417a-b76d-1cae17ae654e iptables -t > nat -L -nv | grep NAT > 0 0 DNAT all -- * * 0.0.0.0/0 > 192.169.142.12 to:30.0.0.26 > 0 0 DNAT all -- * * 0.0.0.0/0 > 192.169.142.13 to:30.0.0.25 > 26 1704 ACCEPT all -- !qg-fb9ff0ad-56 !qg-fb9ff0ad-56 > 0.0.0.0/0 0.0.0.0/0 ! ctstate DNAT > 0 0 DNAT all -- * * 0.0.0.0/0 > 192.169.142.12 to:30.0.0.26 > 5 324 DNAT all -- * * 0.0.0.0/0 > 192.169.142.13 to:30.0.0.25 > 0 0 SNAT all -- * * 30.0.0.26 > 0.0.0.0/0 to:192.169.142.12 > 0 0 SNAT all -- * * 30.0.0.25 > 0.0.0.0/0 to:192.169.142.13 > 0 0 SNAT all -- * * 30.0.0.0/24 > 0.0.0.0/0 to:192.169.142.10 > > > - Ensure you have security group rules for SSH are set correctly (you > can enumerate them by doing '$ neutron security-group-rule-list') > > I recently did a 2-node IceHouse install (but this is manual setup), > here[1] are my configurations of Nova/Neutron and iptables rules (scroll > down to bottom). > > > > This is my first OpenStack install. I'm a little confused at how a > stock installation (based on packstack) could somehow not include the > ability to access the VMs from the network on which the OS compute > host is running. > > Any help troubleshooting this would be greatly appreciated. > [1] > http://kashyapc.fedorapeople.org/virt/openstack/rdo/IceHouse-Nova-Neutron-ML2-GRE-OVS.txt > > -- > Eric Berg > Sr. Software Engineer > Rubenstein Technology Group > 55 Broad Street, 14th Floor > New York, NY 10004-2501 > > (212) 518-6400 > (212) 518-6467 fax eberg at rubensteintech.com www.rubensteintech.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > From iida.koji at lab.ntt.co.jp Fri May 23 03:52:13 2014 From: iida.koji at lab.ntt.co.jp (Koji IIDA) Date: Fri, 23 May 2014 12:52:13 +0900 Subject: [Rdo-list] python-kombu 1.1.3 is not recommended version of Icehouse In-Reply-To: <537DFB3E.6070100@redhat.com> References: <537DA91F.9000706@lab.ntt.co.jp> <537DFB3E.6070100@redhat.com> Message-ID: <537EC5ED.8030507@lab.ntt.co.jp> Hi, Thank you for good suggestions. > The particular reason kombu requirement was updated upstream > was to cater for newer msgpack modules. However the EPEL version > of python-msgpack should not have this issue with python-kombu. 
> Details in https://pad.lv/1134575 > > So does this cause another issue for you? There is benchmark tool for oslo.messaging introduced at last Atlanta summit. Etherpad: https://etherpad.openstack.org/p/juno-oslo.messaging-amqp-1.0 Code: https://github.com/grs/ombt When I tried this benchmarking tool on my RDO box (kombu 1.1.3-2), I got a poor throughtput (7.7 request/sec). But just updating kombu to 2.5.8, I got a better throughput (91 reuest/sec). I don't know why kombu 1.1.3-2 results such a poor performance. thanks, Koji ---- ### kombu 1.1.3-2 $ ./ombt.py --calls 100 --conf rpc_backend rabbit --conf rabbit_host 192.168.101.33 --conf rabbit_password guest INFO:oslo.messaging._drivers.impl_rabbit:Connected to AMQP server on 192.168.101.33:5672 INFO:oslo.messaging._drivers.impl_rabbit:Connected to AMQP server on 192.168.101.33:5672 INFO:oslo.messaging._drivers.impl_rabbit:Connected to AMQP server on 192.168.101.33:5672 INFO:oslo.messaging._drivers.impl_rabbit:Connected to AMQP server on 192.168.101.33:5672 Call 10 of 100 completed Call 20 of 100 completed Call 30 of 100 completed Call 40 of 100 completed Call 50 of 100 completed Call 60 of 100 completed Call 70 of 100 completed Call 80 of 100 completed Call 90 of 100 completed Call 100 of 100 completed {'latency': {'count': 100, 'min': 123.36111068725586, 'max': 258.3320140838623, 'average': 129.59874868392944, '_sum_of_squares': 1699774.5017536662, 'std_deviation': 14.209481239522995, 'total': 12959.874868392944}, 'throughput': 7.7150386788072778, 'calls': 100} ---- ---- ### kombu-2.5.8-1.el6 $ ./ombt.py --calls 100 --conf rpc_backend rabbit --conf rabbit_host 192.168.101.33 --conf rabbit_password guest INFO:oslo.messaging._drivers.impl_rabbit:Connected to AMQP server on 192.168.101.33:5672 INFO:oslo.messaging._drivers.impl_rabbit:Connected to AMQP server on 192.168.101.33:5672 /usr/lib/python2.6/site-packages/amqp/channel.py:608: DeprecationWarning: auto_delete exchanges has been deprecated 'auto_delete exchanges has been deprecated')) INFO:oslo.messaging._drivers.impl_rabbit:Connected to AMQP server on 192.168.101.33:5672 INFO:oslo.messaging._drivers.impl_rabbit:Connected to AMQP server on 192.168.101.33:5672 Call 10 of 100 completed Call 20 of 100 completed Call 30 of 100 completed Call 40 of 100 completed Call 50 of 100 completed Call 60 of 100 completed Call 70 of 100 completed Call 80 of 100 completed Call 90 of 100 completed Call 100 of 100 completed {'latency': {'count': 100, 'min': 6.4032077789306641, 'max': 111.25898361206055, 'average': 10.875468254089355, '_sum_of_squares': 22480.778639419441, 'std_deviation': 10.321432877681714, 'total': 1087.5468254089355}, 'throughput': 91.755911089409807, 'calls': 100} ---- (2014/05/22 22:27), P?draig Brady wrote: > On 05/22/2014 08:37 AM, Koji IIDA wrote: >> Hi, >> >> I'm just installed RDO Icehouse on my CentOS 6.5 box using packstack. >> I found that the version of python-kombu does not satisfy the >> requirement of Icehouse. >> >> >> python-kombu installed by packstack: >> python-kombu.noarch 1.1.3-2.el6 @openstack-icehouse >> >> nova (Icehouse ver) requirement.txt: >> kombu>=2.4.8 (*) >> >> So, I want to update kombu to version 2.4.8 or above. >> >> Any suggestions? > > Note the upstream requirements are often not the same as distro requirements > due to varying version combinations and backports. > > The particular reason kombu requirement was updated upstream > was to cater for newer msgpack modules. 
However the EPEL version > of python-msgpack should not have this issue with python-kombu. > Details in https://pad.lv/1134575 > > So does this cause another issue for you? > > Note EPEL7 is at v2.5.16 (and rawhide is at v3.0.15) > so in case it's useful to you I've just rebuilt but not tested v2.5.16 > for el6 by doing: > > fedpkg clone python-kombu > git checkout epel7 > fedpkg --dist=el6 scratch-build > > Result is at (for a while): > > http://koji.fedoraproject.org/koji/taskinfo?taskID=6875634 > > thanks, > P?draig. > > From ben42ml at gmail.com Fri May 23 08:34:20 2014 From: ben42ml at gmail.com (Benoit ML) Date: Fri, 23 May 2014 10:34:20 +0200 Subject: [Rdo-list] RDO Foreman Installation error nova controller Message-ID: Hello, I'd like to install an openstack on a multinode using pre-existing work on puppetclass. Si I decided to use RDO like documented here, for icehouse : http://openstack.redhat.com/Deploying_RDO_using_Foreman So i have : 1/ actived some repository : rdo, epel, Centos SCL. 2/ Installed on the foreman node openstack-foreman-node and dhcpd 3/ Follow the documentation to add the host to the group openstack nova controlleur neutron. 4/ Launch puppet agent -tv on the designated host And i have some errors about a puppet class for ceilometer : Error: Could not apply complete catalog: Found 1 dependency cycle: (Ceilometer_config[database/connection] => Class[Ceilometer::Db] => Class[Ceilometer::Api] => Package[ceilometer-api] => Ceilometer_config[database/connection]) Try the '--graph' option and opening the resulting '.dot' file in OmniGraffle or GraphViz I have looking in foreman smart parameters about something to configure but without success .... Can ou hep me please ? Thank you in advance. Regards, -- -- Benoit -------------- next part -------------- An HTML attachment was scrubbed... URL: From pbrady at redhat.com Fri May 23 14:42:11 2014 From: pbrady at redhat.com (=?ISO-8859-1?Q?P=E1draig_Brady?=) Date: Fri, 23 May 2014 15:42:11 +0100 Subject: [Rdo-list] python-kombu 1.1.3 is not recommended version of Icehouse In-Reply-To: <537EC5ED.8030507@lab.ntt.co.jp> References: <537DA91F.9000706@lab.ntt.co.jp> <537DFB3E.6070100@redhat.com> <537EC5ED.8030507@lab.ntt.co.jp> Message-ID: <537F5E43.8060607@redhat.com> On 05/23/2014 04:52 AM, Koji IIDA wrote: > Hi, > > Thank you for good suggestions. > >> The particular reason kombu requirement was updated upstream >> was to cater for newer msgpack modules. However the EPEL version >> of python-msgpack should not have this issue with python-kombu. >> Details in https://pad.lv/1134575 >> >> So does this cause another issue for you? > > There is benchmark tool for oslo.messaging introduced at last Atlanta > summit. > > Etherpad: > https://etherpad.openstack.org/p/juno-oslo.messaging-amqp-1.0 > > Code: > https://github.com/grs/ombt > > > When I tried this benchmarking tool on my RDO box (kombu 1.1.3-2), I got > a poor throughtput (7.7 request/sec). But just updating kombu to 2.5.8, > I got a better throughput (91 reuest/sec). > > I don't know why kombu 1.1.3-2 results such a poor performance. There didn't seem to be anything significant related to performance in the ChangeLogs at least What version of python-amqp are you using with that (kombu >= 2.5 uses the python-amqp fork) This is a significant change, so we'll look at updating EPEL or RDO with that for el6. thanks! P?draig. 
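A quick way to answer Pádraig's question about which AMQP backend the benchmark box is actually using (kombu 1.x still sits on amqplib, kombu 2.5 and later on the py-amqp fork):

    $ rpm -q python-kombu python-amqplib python-amqp
    $ python -c 'import kombu; print kombu.__version__'
    $ python -c 'import amqp; print amqp.__version__'    # only importable once the py-amqp fork is installed

Rerunning ombt.py after swapping packages, as Koji did, then shows whether the throughput difference follows kombu itself or the underlying amqp library.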
From moreira.belmiro.email.lists at gmail.com Sun May 25 09:19:42 2014 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Sun, 25 May 2014 11:19:42 +0200 Subject: [Rdo-list] python-kombu 1.1.3 is not recommended version of Icehouse In-Reply-To: <537F5E43.8060607@redhat.com> References: <537DA91F.9000706@lab.ntt.co.jp> <537DFB3E.6070100@redhat.com> <537EC5ED.8030507@lab.ntt.co.jp> <537F5E43.8060607@redhat.com> Message-ID: Hi, in our setup we are using Rabbitmq clustered with mirrored queues. Is this the recommended kombu version for this configuration? thanks, Belmiro On Fri, May 23, 2014 at 4:42 PM, P?draig Brady wrote: > On 05/23/2014 04:52 AM, Koji IIDA wrote: > > Hi, > > > > Thank you for good suggestions. > > > >> The particular reason kombu requirement was updated upstream > >> was to cater for newer msgpack modules. However the EPEL version > >> of python-msgpack should not have this issue with python-kombu. > >> Details in https://pad.lv/1134575 > >> > >> So does this cause another issue for you? > > > > There is benchmark tool for oslo.messaging introduced at last Atlanta > > summit. > > > > Etherpad: > > https://etherpad.openstack.org/p/juno-oslo.messaging-amqp-1.0 > > > > Code: > > https://github.com/grs/ombt > > > > > > When I tried this benchmarking tool on my RDO box (kombu 1.1.3-2), I got > > a poor throughtput (7.7 request/sec). But just updating kombu to 2.5.8, > > I got a better throughput (91 reuest/sec). > > > > I don't know why kombu 1.1.3-2 results such a poor performance. > > There didn't seem to be anything significant related to performance in the > ChangeLogs at least > What version of python-amqp are you using with that (kombu >= 2.5 uses the > python-amqp fork) > > This is a significant change, so we'll look at updating EPEL or RDO with > that for el6. > > thanks! > P?draig. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pbrady at redhat.com Mon May 26 12:13:13 2014 From: pbrady at redhat.com (=?UTF-8?B?UMOhZHJhaWcgQnJhZHk=?=) Date: Mon, 26 May 2014 13:13:13 +0100 Subject: [Rdo-list] python-kombu 1.1.3 is not recommended version of Icehouse In-Reply-To: References: <537DA91F.9000706@lab.ntt.co.jp> <537DFB3E.6070100@redhat.com> <537EC5ED.8030507@lab.ntt.co.jp> <537F5E43.8060607@redhat.com> Message-ID: <53832FD9.2010107@redhat.com> On 05/25/2014 10:19 AM, Belmiro Moreira wrote: > Hi, > in our setup we are using Rabbitmq clustered with mirrored queues. > Is this the recommended kombu version for this configuration? I presume by "this" you mean v2.5.16 I suggested syncing up between el6 and el7. There doesn't seem to be any specific mention of "clustering", "HA" or "mirrored" in the kombu or amqp changelogs, though I did notice this issue that was worked around in oslo.messaging: https://review.openstack.org/#/c/76686/ Possibly related to that is improved support for heartbeat in python-amqp-1.4.0 Anyway the OpenStack upstream requirements for both Havana and Icehouse are: $:requirements (stable/icehouse)$ grep -E "(amqp|kombu)" global-requirements.txt amqplib>=0.6.1 kombu>=2.4.8 So bumping to 2.5.16 shouldn't negatively impact anything I think. thanks, P?draig. 
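For context, the sort of setup Belmiro describes is usually wired up along these lines; the host names and policy name are illustrative only, and none of it depends on a particular kombu version (the newer kombu/py-amqp mainly matters for the reconnect and heartbeat behaviour mentioned above):

    # RabbitMQ side: mirror every non-exclusive queue across the cluster members
    $ rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'

    # OpenStack side (nova.conf, neutron.conf, ...): list all cluster members
    # and ask oslo for HA queues
    rabbit_hosts = rabbit1:5672,rabbit2:5672         # hypothetical host names
    rabbit_ha_queues = True

So the v2.5.16 rebuild above should be usable with mirrored queues as-is, but as with any messaging change it is worth soak-testing failover before rolling it out.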
From sgordon at redhat.com Mon May 26 15:04:25 2014 From: sgordon at redhat.com (Steve Gordon) Date: Mon, 26 May 2014 11:04:25 -0400 (EDT) Subject: [Rdo-list] python-*client packaging In-Reply-To: <1341742459.19047176.1401116212740.JavaMail.zimbra@redhat.com> Message-ID: <988705872.19052788.1401116665554.JavaMail.zimbra@redhat.com> Hi all, I was answering a question on ask.o.o [1] over the weekend that caused me to ponder the way we're packaging the python-*clients in RDO. As the clients don't really follow the formal integrated release cycle no release tag was created at the point of the Icehouse release for python-novaclient and instead the most recent tag is 2.17.0 created around 3 months ago. This is what we package and means we're missing functionality that was merged between this tag being created and the Icehouse GA, most notably *all* of the server group commands - the API for which was a fairly important (but late - via a feature freeze exception) addition to Icehouse for some users. I am wondering whether, given tag creation is basically on the whim of the individual maintainer upstream, we should be rebasing the clients from master more regularly instead of relying on the tags? The bug I filed for this specific issue with python-novaclient is https://bugzilla.redhat.com/show_bug.cgi?id=1101014 but I imagine we experience similar issues with the other clients from time to time. Thanks, -- Steve Gordon, RHCE Product Manager, Red Hat Enterprise Linux OpenStack Platform Red Hat Canada (Toronto, Ontario) [1] https://ask.openstack.org/en/question/30433/why-are-nova-server-group-apis-missing-in-rdo-icehouse-installation/?answer=30492#post-id-30492 From jruzicka at redhat.com Mon May 26 16:24:26 2014 From: jruzicka at redhat.com (Jakub Ruzicka) Date: Mon, 26 May 2014 18:24:26 +0200 Subject: [Rdo-list] python-*client packaging In-Reply-To: <53835DFF.9000505@redhat.com> References: <988705872.19052788.1401116665554.JavaMail.zimbra@redhat.com> <53835DFF.9000505@redhat.com> Message-ID: <53836ABA.3050209@redhat.com> On 26.5.2014 17:30, P?draig Brady wrote: > -------- Original Message -------- > Subject: python-*client packaging > Date: Mon, 26 May 2014 11:04:25 -0400 (EDT) > From: Steve Gordon > To: rdo-list at redhat.com > CC: Padraig Brady , Russell Bryant > > Hi all, > > I was answering a question on ask.o.o [1] over the weekend that caused me to ponder the way we're packaging the python-*clients in RDO. As the clients don't really follow the formal integrated release cycle no release tag was created at the point of the Icehouse release for python-novaclient and instead the most recent tag is 2.17.0 created around 3 months ago. I wrote basic overview on RDO wiki: http://openstack.redhat.com/Clients Rebases to latest version are required quite often. > This is what we package and means we're missing functionality that was merged between this tag being created and the Icehouse GA, most notably *all* of the server group commands - the API for which was a fairly important (but late - via a feature freeze exception) addition to Icehouse for some users. I am wondering whether, given tag creation is basically on the whim of the individual maintainer upstream, we should be rebasing the clients from master more regularly instead of relying on the tags? Important patches are backported on demand. I'm not strictly against including upstream patches in packages and in fact, it was done like that in the past. 
I stopped including upstream patches because I found it quite confusing - version says 0.6.0 but there are SOME bugs/features from 0.7.0... So I'm rather working with assumption that *client devs know best when to release a new version. > The bug I filed for this specific issue with python-novaclient is https://bugzilla.redhat.com/show_bug.cgi?id=1101014 but I imagine we experience similar issues with the other clients from time to time. That's a perfectly valid reason for a selective backport but as you mentioned in the bug, it would be best to release new version which includes this and rebase to it in order to stay somehow consistent with the rest of the world. So, Russel, do you plan to release new novaclient anytime soon or shall I backport? Cheers Jakub From kchamart at redhat.com Tue May 27 03:23:27 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 27 May 2014 08:53:27 +0530 Subject: [Rdo-list] RDO Foreman Installation error nova controller In-Reply-To: References: Message-ID: <20140527032327.GA3762@tesla.redhat.com> On Fri, May 23, 2014 at 10:34:20AM +0200, Benoit ML wrote: > Hello, > > I'd like to install an openstack on a multinode using pre-existing work on > puppetclass. Si I decided to use RDO like documented here, for icehouse : > http://openstack.redhat.com/Deploying_RDO_using_Foreman > > > So i have : > 1/ actived some repository : rdo, epel, Centos SCL. > 2/ Installed on the foreman node openstack-foreman-node and dhcpd > 3/ Follow the documentation to add the host to the group openstack nova > controlleur neutron. > 4/ Launch puppet agent -tv on the designated host > > And i have some errors about a puppet class for ceilometer : > > Error: Could not apply complete catalog: Found 1 dependency cycle: > (Ceilometer_config[database/connection] => Class[Ceilometer::Db] => > Class[Ceilometer::Api] => Package[ceilometer-api] => > Ceilometer_config[database/connection]) Seems like you're hitting this -- https://bugzilla.redhat.com/show_bug.cgi?id=1092073 Maybe someone on this list who has Ceilometer setups may know of a workaround. > Try the '--graph' option and opening the resulting '.dot' file in > OmniGraffle or GraphViz > > I have looking in foreman smart parameters about something to configure > but without success .... > > Can ou hep me please ? > > Thank you in advance. > > Regards, > > -- > -- > Benoit > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -- /kashyap From rbowen at redhat.com Tue May 27 14:32:14 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 27 May 2014 10:32:14 -0400 Subject: [Rdo-list] OpenStack Egypt, June 21 Message-ID: <5384A1EE.9070007@redhat.com> The Cairo OpenStack user group is planning an event on June 21, and want an RDO presence there if possible, both as speakers and attendees. If you are anywhere in the area and could possibly attend, reach out to 'Egyptian' on the #rdo channel on Freenode IRC, and let him know that you're available. 
--Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From lars at redhat.com Tue May 27 20:04:24 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Tue, 27 May 2014 16:04:24 -0400 Subject: [Rdo-list] Video: RDO Icehouse: Configuring the external bridge Message-ID: <20140527200424.GD28903@redhat.com> Hello all, I put together a short video in which I walk through the process of configuring an external bridge for OpenStack on a single-interface system. This is in general the process described here... http://openstack.redhat.com/Neutron_with_existing_external_network ...but this shows how to use DHCP on the bridge, including cloning the MAC address from eth0. Cheers, -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From ben42ml at gmail.com Wed May 28 08:06:06 2014 From: ben42ml at gmail.com (Benoit ML) Date: Wed, 28 May 2014 10:06:06 +0200 Subject: [Rdo-list] RDO Foreman Installation error nova controller In-Reply-To: <20140527032327.GA3762@tesla.redhat.com> References: <20140527032327.GA3762@tesla.redhat.com> Message-ID: Hello, Finally I use github packstack to do a first setup/deploy of icehouse, plus some handmade modification. Thank you. Regards, 2014-05-27 5:23 GMT+02:00 Kashyap Chamarthy : > On Fri, May 23, 2014 at 10:34:20AM +0200, Benoit ML wrote: > > Hello, > > > > I'd like to install an openstack on a multinode using pre-existing work > on > > puppetclass. Si I decided to use RDO like documented here, for icehouse : > > http://openstack.redhat.com/Deploying_RDO_using_Foreman > > > > > > So i have : > > 1/ actived some repository : rdo, epel, Centos SCL. > > 2/ Installed on the foreman node openstack-foreman-node and dhcpd > > 3/ Follow the documentation to add the host to the group openstack nova > > controlleur neutron. > > 4/ Launch puppet agent -tv on the designated host > > > > And i have some errors about a puppet class for ceilometer : > > > > Error: Could not apply complete catalog: Found 1 dependency cycle: > > (Ceilometer_config[database/connection] => Class[Ceilometer::Db] => > > Class[Ceilometer::Api] => Package[ceilometer-api] => > > Ceilometer_config[database/connection]) > > Seems like you're hitting this -- > https://bugzilla.redhat.com/show_bug.cgi?id=1092073 > > Maybe someone on this list who has Ceilometer setups may know of a > workaround. > > > Try the '--graph' option and opening the resulting '.dot' file in > > OmniGraffle or GraphViz > > > > I have looking in foreman smart parameters about something to configure > > but without success .... > > > > Can ou hep me please ? > > > > Thank you in advance. > > > > Regards, > > > > -- > > -- > > Benoit > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > -- > /kashyap > -- -- Benoit -------------- next part -------------- An HTML attachment was scrubbed... 
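In ifcfg terms, the single-interface setup the video walks through ends up looking roughly like this (keyword support varies a little with the openvswitch initscripts version, and the MAC below is a placeholder for eth0's real address):

    # /etc/sysconfig/network-scripts/ifcfg-br-ex: the bridge runs DHCP and
    # carries a cloned MAC so upstream MAC filtering still matches
    DEVICE=br-ex
    DEVICETYPE=ovs
    TYPE=OVSBridge
    OVSBOOTPROTO=dhcp
    OVSDHCPINTERFACES=eth0
    MACADDR=fa:16:3e:ef:91:ec     # hypothetical, copy the real MAC from eth0
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth0: the physical NIC becomes a port
    DEVICE=eth0
    DEVICETYPE=ovs
    TYPE=OVSPort
    OVS_BRIDGE=br-ex
    ONBOOT=yes

See the video for the full procedure; the files above are only the end state, and the MAC handling ties back to the neutron-openvswitch-agent thread from earlier in the month.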
URL: From ihrachys at redhat.com Wed May 28 11:09:23 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 28 May 2014 13:09:23 +0200 Subject: [Rdo-list] Video: RDO Icehouse: Configuring the external bridge In-Reply-To: <20140527200424.GD28903@redhat.com> References: <20140527200424.GD28903@redhat.com> Message-ID: <5385C3E3.8020208@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On 27/05/14 22:04, Lars Kellogg-Stedman wrote: > Hello all, > > I put together a short video in which I walk through the process > of configuring an external bridge for OpenStack on a > single-interface system. This is in general the process described > here... I don't see the video. Where is it? > > http://openstack.redhat.com/Neutron_with_existing_external_network > > ...but this shows how to use DHCP on the bridge, including cloning > the MAC address from eth0. > > Cheers, > > > > _______________________________________________ Rdo-list mailing > list Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBCgAGBQJThcPjAAoJEC5aWaUY1u57mjQH/RsCNxi1mPIUNgqR5/7mLZos xAikJNeH5xdfMdKDwQ/Q1qa5V7frLkpRtX7BxxhAl7oXK9Xh38lxTEAT35LhFZll KnV6JC5XfGSpGwBlrysFMduzdg5UJ9NKy7324VK5ll01KIq48T43xcPqVn2WL2SL XOtvmgyAk+sQ//siG1Stx/OQlOgw3nC3/b29sKgk3LbobplmQOQkWDYeEHjtKgCL IkSBeWAZ49HvipQOKh8qFqYRp7ABlFnYQ1bc6CnBzu/Q95u3biiK+Oddqy3dS6pf Lh4L9iwZaEYWN2zAFMVTn7QVKdKV6kpFIqcjWfC9+t+SZdetThNYSWqiSSZPzbI= =dsiK -----END PGP SIGNATURE----- From kchamart at redhat.com Wed May 28 12:25:19 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 28 May 2014 17:55:19 +0530 Subject: [Rdo-list] Video: RDO Icehouse: Configuring the external bridge In-Reply-To: <5385C3E3.8020208@redhat.com> References: <20140527200424.GD28903@redhat.com> <5385C3E3.8020208@redhat.com> Message-ID: <20140528122519.GC29853@tesla.pnq.redhat.com> On Wed, May 28, 2014 at 01:09:23PM +0200, Ihar Hrachyshka wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > On 27/05/14 22:04, Lars Kellogg-Stedman wrote: > > Hello all, > > > > I put together a short video in which I walk through the process > > of configuring an external bridge for OpenStack on a > > single-interface system. This is in general the process described > > here... > > I don't see the video. Where is it? I think he inadvertantly missed to post the URL -- https://www.youtube.com/watch?v=8zFQG5mKwPk > > > > > http://openstack.redhat.com/Neutron_with_existing_external_network > > > > ...but this shows how to use DHCP on the bridge, including cloning > > the MAC address from eth0. > > > > Cheers, > > > -- /kashyap From lars at redhat.com Wed May 28 12:56:34 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 28 May 2014 08:56:34 -0400 Subject: [Rdo-list] Video: RDO Icehouse: Configuring the external bridge In-Reply-To: <20140528122519.GC29853@tesla.pnq.redhat.com> References: <20140527200424.GD28903@redhat.com> <5385C3E3.8020208@redhat.com> <20140528122519.GC29853@tesla.pnq.redhat.com> Message-ID: <20140528125634.GA7023@redhat.com> On Wed, May 28, 2014 at 05:55:19PM +0530, Kashyap Chamarthy wrote: > > I don't see the video. Where is it? > > I think he inadvertantly missed to post the URL -- > https://www.youtube.com/watch?v=8zFQG5mKwPk Ah, yes. Always there is one more thing... :) Thanks, Kashyap! 
-- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From kchamart at redhat.com Thu May 29 05:20:24 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 29 May 2014 10:50:24 +0530 Subject: [Rdo-list] FYI - Fedora.next calls for early testing (and some details on how) Message-ID: <20140529052024.GG29853@tesla.pnq.redhat.com> For those of you that live on the bleeding-edge testing RDO on Fedora, Fedora project is going through a lot of changes (dubbed as 'Fedora.next') in its upcoming release cycle. And, the project is looking for early testers/tinkerers to make the release a success. If you have spare cycles, please spend some time testing (heads-up: it'll be very disruptive) Fedora Rawhide (that'll be 21) for your regular work-flows w/ RDO, etc. You can trivially test[1] Fedora's latest bits (Rawhide) in a virtual machine, like that (below is a way I prefer): Create a Fedora 20-VM with a 40GB disk image, update, install Rawhide packages: $ virt-builder fedora-20 -o rawhide.qcow2 --format qcow2 \ --update --selinux-relabel --size 40G\ --install "fedora-release-rawhide yum-utils" Import the disk image into libvirt: $ virt-install --name rawhide --ram 4096 --disk \ path=/home/kashyapc/rawhide.qcow2,format=qcow2,cache=none \ --import Login via serial console into the guest, upgrade to Rawhide: $ yum-config-manager --disable fedora updates updates-testing $ yum-config-manager --enable rawhide $ yum update yum $ yum --releasever=rawhide distro-sync --nogpgcheck $ reboot Optionally, you can take a snapshot so you can revert to a known sane state: $ virsh snapshot-create-as rawhide snap1 \ "Clean Rawhide" [1] https://fedoraproject.org/wiki/Releases/Rawhide#Using_Rawhide -- /kashyap From mmosesohn at mirantis.com Thu May 29 15:20:21 2014 From: mmosesohn at mirantis.com (Matthew Mosesohn) Date: Thu, 29 May 2014 19:20:21 +0400 Subject: [Rdo-list] Minimal erlang for RabbitMQ Message-ID: Hi RDO people! I remember having a conversation with a few folks at RH during OpenStack Summit about plans to break up erlang into smaller packages to reduce the package size and # of dependencies required to install RabbitMQ. I can't find anything on koji.fedoraproject.org or on the RDO repositories. Does anyone have any more information about this effort? Best Regards, Matthew Mosesohn -------------- next part -------------- An HTML attachment was scrubbed... URL: From rohara at redhat.com Thu May 29 15:44:12 2014 From: rohara at redhat.com (Ryan O'Hara) Date: Thu, 29 May 2014 10:44:12 -0500 Subject: [Rdo-list] Minimal erlang for RabbitMQ In-Reply-To: References: Message-ID: <20140529154412.GF10183@redhat.com> On Thu, May 29, 2014 at 07:20:21PM +0400, Matthew Mosesohn wrote: > Hi RDO people! > > I remember having a conversation with a few folks at RH during OpenStack > Summit about plans to break up erlang into smaller packages to reduce the > package size and # of dependencies required to install RabbitMQ. I can't > find anything on koji.fedoraproject.org or on the RDO repositories. Does > anyone have any more information about this effort? My understanding is that this is planned for Fedora 21. 
https://fedoraproject.org/wiki/Changes/BetterErlangSupport Ryan From mmosesohn at mirantis.com Thu May 29 15:50:20 2014 From: mmosesohn at mirantis.com (Matthew Mosesohn) Date: Thu, 29 May 2014 19:50:20 +0400 Subject: [Rdo-list] Minimal erlang for RabbitMQ In-Reply-To: <20140529154412.GF10183@redhat.com> References: <20140529154412.GF10183@redhat.com> Message-ID: That's great news! I'll follow this closely and see if I can contribute to this effort. On Thu, May 29, 2014 at 7:44 PM, Ryan O'Hara wrote: > On Thu, May 29, 2014 at 07:20:21PM +0400, Matthew Mosesohn wrote: > > Hi RDO people! > > > > I remember having a conversation with a few folks at RH during OpenStack > > Summit about plans to break up erlang into smaller packages to reduce the > > package size and # of dependencies required to install RabbitMQ. I can't > > find anything on koji.fedoraproject.org or on the RDO repositories. Does > > anyone have any more information about this effort? > > My understanding is that this is planned for Fedora 21. > > https://fedoraproject.org/wiki/Changes/BetterErlangSupport > > Ryan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmosesohn at mirantis.com Thu May 29 15:59:44 2014 From: mmosesohn at mirantis.com (Matthew Mosesohn) Date: Thu, 29 May 2014 19:59:44 +0400 Subject: [Rdo-list] Minimal erlang for RabbitMQ In-Reply-To: References: <20140529154412.GF10183@redhat.com> Message-ID: Here's the relevant bug about RabbitMQ in particular: https://bugzilla.redhat.com/show_bug.cgi?id=1083637 Looks like it can be fixed without waiting on the erlang redux, and could be reachable much sooner than Fedora 21 (October). On Thu, May 29, 2014 at 7:50 PM, Matthew Mosesohn wrote: > That's great news! I'll follow this closely and see if I can contribute to > this effort. > > > On Thu, May 29, 2014 at 7:44 PM, Ryan O'Hara wrote: > >> On Thu, May 29, 2014 at 07:20:21PM +0400, Matthew Mosesohn wrote: >> > Hi RDO people! >> > >> > I remember having a conversation with a few folks at RH during OpenStack >> > Summit about plans to break up erlang into smaller packages to reduce >> the >> > package size and # of dependencies required to install RabbitMQ. I can't >> > find anything on koji.fedoraproject.org or on the RDO repositories. >> Does >> > anyone have any more information about this effort? >> >> My understanding is that this is planned for Fedora 21. >> >> https://fedoraproject.org/wiki/Changes/BetterErlangSupport >> >> Ryan >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eberg at rubensteintech.com Thu May 29 19:31:09 2014 From: eberg at rubensteintech.com (Eric Berg) Date: Thu, 29 May 2014 15:31:09 -0400 Subject: [Rdo-list] Simplest Icehouse Implementation Architecture Message-ID: <53878AFD.4010909@rubensteintech.com> I'm working toward implementation of a small RDO cloud which should be quite minimal. We will be running < 25 VMs with very low utilization, which will not change very often and which will basically just run to be available. I have two hosts with plenty of RAM and disk, and can get others as required. The initial expectation as communicated to me was to just use two hosts that we already have. I'm not sure if that's a realistic architecture, but it seems from my reading that I might want at least a separate control box if not also have the network box be separate if not on the same as the control host. So, are either of the following architectures sufficient for a development environment? Option 1. 
- Two hosts to handle the entire cloud Option 2. - Two compute hosts - One control host Thanks as always for your input. Eric -- Eric Berg Sr. Software Engineer Rubenstein Technology Group From lars at redhat.com Thu May 29 19:39:12 2014 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 29 May 2014 15:39:12 -0400 Subject: [Rdo-list] Simplest Icehouse Implementation Architecture In-Reply-To: <53878AFD.4010909@rubensteintech.com> References: <53878AFD.4010909@rubensteintech.com> Message-ID: <20140529193912.GC7137@redhat.com> On Thu, May 29, 2014 at 03:31:09PM -0400, Eric Berg wrote: > So, are either of the following architectures sufficient for a development > environment? Depending on your definition of "development environment", a *single* host may be sufficient. It really depends on how many instances you expect to support, of what size, and what sort of workloads you'll be hosting. Having a seperate "control" node makes for nice logical separation of roles, which I find helpful in diagnosing problems. Having more than one compute node lets you experiment with things like instance migration, etc, which may be useful if you eventually plan to move to a production configuration. -- Lars Kellogg-Stedman | larsks @ irc Cloud Engineering / OpenStack | " " @ twitter -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From eberg at rubensteintech.com Thu May 29 20:04:00 2014 From: eberg at rubensteintech.com (Eric Berg) Date: Thu, 29 May 2014 16:04:00 -0400 Subject: [Rdo-list] Simplest Icehouse Implementation Architecture In-Reply-To: <20140529193912.GC7137@redhat.com> References: <53878AFD.4010909@rubensteintech.com> <20140529193912.GC7137@redhat.com> Message-ID: <538792B0.8050500@rubensteintech.com> Thanks as always, Lars. By "development environment", I mean several things: 1) Developers work on these hosts. We're a web shop, and one or more developers will spin up dev web servers on these hosts 2) Ideally, I'd also want to validate our production cloud environment so that when we deploy it in production, we have validated the configuration. For the time being, however, #2 is a nice-to-have and does not at all seem to fit in with the fairly aggressive goal of implementing a new RDO deployment in 1-3 days (way over that already as you might well imagine). So, basically, I want to migrate from the current set of physical hosts on which developers now work to a cloud environment which will host no more than 25 VMs. Since we have two fairly well-endowed hosts targeted for use as compute hosts, would it be realistic to use one as the controller, while still using it as a compute host? On a related note, what happens if I lose the controller box in this two-compute-hosts-one-as-controller-host scenario? I believe that I'm out of business until I can remedy that, and if I wanted to set up the two hosts as both compute hosts as well as putting some kind of HA in place so that control could pass from one to the other of these boxes, would that be possible? Recommended? Must the control host be separate in order to do (live) migrations? Is it a requirement that the control host be separate if I want to deploy 2 compute hosts? And, if I choose the two-host solution, how does the network host (through which my understanding is that all network access to the instances must pass) play into this? 
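For the two-box case, one concrete starting point with packstack looks roughly like this; the addresses below are placeholders, and whether the controller also runs nova-compute can be adjusted in the generated answer file:

    # quick form: the first address becomes the controller/network node, the
    # rest become compute nodes
    $ packstack --install-hosts=192.168.0.37,192.168.0.38

    # more explicit form: generate an answer file, edit the host parameters
    # (e.g. CONFIG_NOVA_COMPUTE_HOSTS; exact names vary a little per release),
    # then apply it
    $ packstack --gen-answer-file=two-node.txt
    $ packstack --answer-file=two-node.txt

On the controller-loss question: in a layout like this the running instances generally keep running on the compute nodes, but the API, dashboard and scheduling are unavailable until the controller is back, so anything resembling controller HA means more than two boxes (or accepting that window).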
Eric On 5/29/14, 3:39 PM, Lars Kellogg-Stedman wrote: > On Thu, May 29, 2014 at 03:31:09PM -0400, Eric Berg wrote: >> So, are either of the following architectures sufficient for a development >> environment? > Depending on your definition of "development environment", a *single* > host may be sufficient. It really depends on how many instances you > expect to support, of what size, and what sort of workloads you'll be > hosting. > > Having a seperate "control" node makes for nice logical separation of > roles, which I find helpful in diagnosing problems. > > Having more than one compute node lets you experiment with things like > instance migration, etc, which may be useful if you eventually plan to > move to a production configuration. > -- Eric Berg Sr. Software Engineer Rubenstein Technology Group 55 Broad Street, 14th Floor New York, NY 10004-2501 (212) 518-6400 (212) 518-6467 fax eberg at rubensteintech.com www.rubensteintech.com From mattdm at fedoraproject.org Thu May 29 20:07:16 2014 From: mattdm at fedoraproject.org (Matthew Miller) Date: Thu, 29 May 2014 16:07:16 -0400 Subject: [Rdo-list] FYI - Fedora.next calls for early testing (and some details on how) In-Reply-To: <20140529052024.GG29853@tesla.pnq.redhat.com> References: <20140529052024.GG29853@tesla.pnq.redhat.com> Message-ID: <20140529200716.GA9106@mattdm.org> On Thu, May 29, 2014 at 10:50:24AM +0530, Kashyap Chamarthy wrote: > If you have spare cycles, please spend some time testing (heads-up: > it'll be very disruptive) Fedora Rawhide (that'll be 21) for your > regular work-flows w/ RDO, etc. Thanks Kashyap -- yes, this testing would be _very_ appreciated. We're working on getting the nightly qcow builds going -- there's a problem in koji that the release engineering people are figuring out. Once we have that fixed, we'll have both a more-traditional Fedora Cloud Base image plus a new Fedora Atomic image (meant for running Docker containers). -- Matthew Miller -- Fedora Project -- "Tepid change for the somewhat better!" From sgordon at redhat.com Thu May 29 22:04:44 2014 From: sgordon at redhat.com (Steve Gordon) Date: Thu, 29 May 2014 18:04:44 -0400 (EDT) Subject: [Rdo-list] python-*client packaging In-Reply-To: <53836ABA.3050209@redhat.com> References: <988705872.19052788.1401116665554.JavaMail.zimbra@redhat.com> <53835DFF.9000505@redhat.com> <53836ABA.3050209@redhat.com> Message-ID: <1710698042.22367004.1401401084367.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Jakub Ruzicka" > To: rdo-list at redhat.com > > On 26.5.2014 17:30, P?draig Brady wrote: > > -------- Original Message -------- > > From: Steve Gordon > > To: rdo-list at redhat.com > > > > Hi all, > > > > I was answering a question on ask.o.o [1] over the weekend that caused me > > to ponder the way we're packaging the python-*clients in RDO. As the > > clients don't really follow the formal integrated release cycle no release > > tag was created at the point of the Icehouse release for python-novaclient > > and instead the most recent tag is 2.17.0 created around 3 months ago. > > I wrote basic overview on RDO wiki: > > http://openstack.redhat.com/Clients > > Rebases to latest version are required quite often. > > > > This is what we package and means we're missing functionality that was > > merged between this tag being created and the Icehouse GA, most notably > > *all* of the server group commands - the API for which was a fairly > > important (but late - via a feature freeze exception) addition to Icehouse > > for some users. 
I am wondering whether, given tag creation is basically on > > the whim of the individual maintainer upstream, we should be rebasing the > > clients from master more regularly instead of relying on the tags? > > Important patches are backported on demand. I'm not strictly against > including upstream patches in packages and in fact, it was done like > that in the past. > > I stopped including upstream patches because I found it quite confusing > - version says 0.6.0 but there are SOME bugs/features from 0.7.0... So > I'm rather working with assumption that *client devs know best when to > release a new version. > > > > The bug I filed for this specific issue with python-novaclient is > > https://bugzilla.redhat.com/show_bug.cgi?id=1101014 but I imagine we > > experience similar issues with the other clients from time to time. > > That's a perfectly valid reason for a selective backport but as you > mentioned in the bug, it would be best to release new version which > includes this and rebase to it in order to stay somehow consistent with > the rest of the world. > > So, Russel, do you plan to release new novaclient anytime soon or shall > I backport? I guess what I am driving at is that the process of creating a tag in the OpenStack client projects occurs at pretty arbitrary points in time based on the needs of other OpenStack projects that want to set requirements on them rather than anything relating to the needs of downstream distributions such as RDO or RHELOSP. Because no other OpenStack project needed the particular functionality (and fixes) added to python-novaclient in the last three months no new tag was requested nor created. In this case it means we're missing some 84 odd commits made since the 2.17.0 tag was created. Given this what I'm wondering is if there is any reason we shouldn't move to a model where we rebase the python-*client packages to the latest git commit at each milestone (J-1, J-2, J-3, RC, GA), regardless of the existence of a tag, to ensure we are always picking up the latest changes? -- Steve Gordon, RHCE Product Manager, Red Hat Enterprise Linux OpenStack Platform Red Hat Canada (Toronto, Ontario) From dansmith at redhat.com Thu May 29 22:08:00 2014 From: dansmith at redhat.com (Dan Smith) Date: Thu, 29 May 2014 15:08:00 -0700 Subject: [Rdo-list] python-*client packaging In-Reply-To: <1710698042.22367004.1401401084367.JavaMail.zimbra@redhat.com> References: <988705872.19052788.1401116665554.JavaMail.zimbra@redhat.com> <53835DFF.9000505@redhat.com> <53836ABA.3050209@redhat.com> <1710698042.22367004.1401401084367.JavaMail.zimbra@redhat.com> Message-ID: <5387AFC0.40009@redhat.com> > I guess what I am driving at is that the process of creating a tag in > the OpenStack client projects occurs at pretty arbitrary points in > time based on the needs of other OpenStack projects that want to set > requirements on them rather than anything relating to the needs of > downstream distributions such as RDO or RHELOSP. Because no other > OpenStack project needed the particular functionality (and fixes) > added to python-novaclient in the last three months no new tag was > requested nor created. In this case it means we're missing some 84 > odd commits made since the 2.17.0 tag was created. Yup, exactly. We've had features that were cross-project that had client changes required. We'd push the change into nova, then push into novaclient, tag the novaclient, update requirements for the other project, push that change, etc. 
On the other hand, we also have "huh, we haven't done a client release in a while" moments. For examples like nova events, instance groups, etc, I think it makes plenty of sense to stay current on the client packages. > Given this what I'm wondering is if there is any reason we shouldn't > move to a model where we rebase the python-*client packages to the > latest git commit at each milestone (J-1, J-2, J-3, RC, GA), > regardless of the existence of a tag, to ensure we are always picking > up the latest changes? Assuming proper testing of course, +1 from me. --Dan From kchamart at redhat.com Fri May 30 03:59:45 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 30 May 2014 09:29:45 +0530 Subject: [Rdo-list] FYI - Fedora.next calls for early testing (and some details on how) In-Reply-To: <20140529200716.GA9106@mattdm.org> References: <20140529052024.GG29853@tesla.pnq.redhat.com> <20140529200716.GA9106@mattdm.org> Message-ID: <20140530035945.GI29853@tesla.pnq.redhat.com> On Thu, May 29, 2014 at 04:07:16PM -0400, Matthew Miller wrote: > On Thu, May 29, 2014 at 10:50:24AM +0530, Kashyap Chamarthy wrote: > > If you have spare cycles, please spend some time testing (heads-up: > > it'll be very disruptive) Fedora Rawhide (that'll be 21) for your > > regular work-flows w/ RDO, etc. > > Thanks Kashyap -- yes, this testing would be _very_ appreciated. > > We're working on getting the nightly qcow builds going -- there's a > problem in koji that the release engineering people are figuring out. Hmm, as I type this I see some on-going in IRC to related to Koji image building. Maybe that'll be resolved soon. > Once we have that fixed, we'll have both a more-traditional Fedora > Cloud Base image plus a new Fedora Atomic image (meant for running > Docker containers). Nice. Maybe you can notify a URL here once you know the builds are going. Thanks. -- /kashyap From kchamart at redhat.com Fri May 30 04:23:30 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 30 May 2014 09:53:30 +0530 Subject: [Rdo-list] FYI - Fedora.next calls for early testing (and some details on how) In-Reply-To: <20140530035945.GI29853@tesla.pnq.redhat.com> References: <20140529052024.GG29853@tesla.pnq.redhat.com> <20140529200716.GA9106@mattdm.org> <20140530035945.GI29853@tesla.pnq.redhat.com> Message-ID: <20140530042330.GJ29853@tesla.pnq.redhat.com> On Fri, May 30, 2014 at 09:29:45AM +0530, Kashyap Chamarthy wrote: > On Thu, May 29, 2014 at 04:07:16PM -0400, Matthew Miller wrote: > > On Thu, May 29, 2014 at 10:50:24AM +0530, Kashyap Chamarthy wrote: > > > If you have spare cycles, please spend some time testing (heads-up: > > > it'll be very disruptive) Fedora Rawhide (that'll be 21) for your > > > regular work-flows w/ RDO, etc. > > > > Thanks Kashyap -- yes, this testing would be _very_ appreciated. > > > > We're working on getting the nightly qcow builds going -- there's a > > problem in koji that the release engineering people are figuring out. > > Hmm, as I type this I see some on-going in IRC to related to Koji image > building. Maybe that'll be resolved soon. > > > Once we have that fixed, we'll have both a more-traditional Fedora > > Cloud Base image plus a new Fedora Atomic image (meant for running > > Docker containers). > > Nice. Maybe you can notify a URL here once you know the builds are > going. Nightly Rawhide cloud image builds are here if folks want to test them. 
http://koji.fedoraproject.org/koji/taskinfo?taskID=6909806 (Thanks Dennis Gilmore, Fedora Rel Eng) -- /kashyap From kchamart at redhat.com Fri May 30 04:35:06 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 30 May 2014 10:05:06 +0530 Subject: [Rdo-list] FYI - Fedora.next calls for early testing (and some details on how) In-Reply-To: <20140530042330.GJ29853@tesla.pnq.redhat.com> References: <20140529052024.GG29853@tesla.pnq.redhat.com> <20140529200716.GA9106@mattdm.org> <20140530035945.GI29853@tesla.pnq.redhat.com> <20140530042330.GJ29853@tesla.pnq.redhat.com> Message-ID: <20140530043506.GK29853@tesla.pnq.redhat.com> On Fri, May 30, 2014 at 09:53:30AM +0530, Kashyap Chamarthy wrote: [. . .] > Nightly Rawhide cloud image builds are here if folks want to test them. > > http://koji.fedoraproject.org/koji/taskinfo?taskID=6909806 The above URL changes every day. Here's the high level build task, that'll track all images as they're built http://koji.fedoraproject.org/koji/tasks?state=all&view=tree&method=image&order=-id -- /kashyap From shardy at redhat.com Fri May 30 09:06:49 2014 From: shardy at redhat.com (Steven Hardy) Date: Fri, 30 May 2014 10:06:49 +0100 Subject: [Rdo-list] python-*client packaging In-Reply-To: <1710698042.22367004.1401401084367.JavaMail.zimbra@redhat.com> References: <988705872.19052788.1401116665554.JavaMail.zimbra@redhat.com> <53835DFF.9000505@redhat.com> <53836ABA.3050209@redhat.com> <1710698042.22367004.1401401084367.JavaMail.zimbra@redhat.com> Message-ID: <20140530090648.GA5897@t430slt.redhat.com> On Thu, May 29, 2014 at 06:04:44PM -0400, Steve Gordon wrote: > I guess what I am driving at is that the process of creating a tag in the OpenStack client projects occurs at pretty arbitrary points in time based on the needs of other OpenStack projects that want to set requirements on them rather than anything relating to the needs of downstream distributions such as RDO or RHELOSP. Because no other OpenStack project needed the particular functionality (and fixes) added to python-novaclient in the last three months no new tag was requested nor created. In this case it means we're missing some 84 odd commits made since the 2.17.0 tag was created. > > Given this what I'm wondering is if there is any reason we shouldn't move to a model where we rebase the python-*client packages to the latest git commit at each milestone (J-1, J-2, J-3, RC, GA), regardless of the existence of a tag, to ensure we are always picking up the latest changes? The main reason not to do this IMO is that all of the upstream CI testing is done against the latest released version from pypi, not the latest client code from trunk. My experience working on Heat (which uses pretty much all of the python-*clients) is that regressions can and quite frequently do happen when a new client version is tagged, which implies you're taking a significant risk by taking a random bunch of git snapshots instead of a release which has been verified by hundreds/thousands of CI runs upstream. Here's one example which happened recently and is still not resolved: https://bugs.launchpad.net/python-novaclient/+bug/1322183 Currently if you install latest novaclient from trunk, several heat unit tests break, whereas the latest release works fine.
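To make the tagged-release-versus-trunk distinction concrete, a minimal sketch (the virtualenv path is illustrative; 2.17.0 is the tag discussed in this thread, and the GitHub URL is the upstream mirror) of installing each into a throwaway environment:

$ virtualenv /tmp/novaclient-env && source /tmp/novaclient-env/bin/activate
# either the last tagged release, i.e. what upstream CI exercises:
$ pip install python-novaclient==2.17.0
# or a snapshot of current trunk, i.e. what a milestone rebase would pick up:
$ pip install git+https://github.com/openstack/python-novaclient.git

Running a consumer's test suite (e.g. heat's unit tests) against each install is one way to catch the kind of regression referenced in the bug above.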
Part of the value of working with upstream devs to request that a new release is tagged is that they will hopefully have better visibility of potential blocker issues, so the tag should only be applied when there are no known major bugs and fixes/features are fully merged (as opposed to cutting a package which contains a partially implemented feature or bugfix, e.g. a subset of a series of patches has been merged and some are still under review). I'd suggest the solution to this problem is just communication with the upstream core devs and PTLs - IME most will be quite responsive if you ask for a new client release for a valid reason. Steve From jruzicka at redhat.com Fri May 30 12:11:22 2014 From: jruzicka at redhat.com (Jakub Ruzicka) Date: Fri, 30 May 2014 14:11:22 +0200 Subject: [Rdo-list] python-*client packaging In-Reply-To: <20140530090648.GA5897@t430slt.redhat.com> References: <988705872.19052788.1401116665554.JavaMail.zimbra@redhat.com> <53835DFF.9000505@redhat.com> <53836ABA.3050209@redhat.com> <1710698042.22367004.1401401084367.JavaMail.zimbra@redhat.com> <20140530090648.GA5897@t430slt.redhat.com> Message-ID: <5388756A.3050307@redhat.com> On 30.5.2014 11:06, Steven Hardy wrote: > On Thu, May 29, 2014 at 06:04:44PM -0400, Steve Gordon wrote: >> I guess what I am driving at is that the process of creating a tag in the OpenStack client projects occurs at pretty arbitrary points in time based on the needs of other OpenStack projects that want to set requirements on them rather than anything relating to the needs of downstream distributions such as RDO or RHELOSP. Because no other OpenStack project needed the particular functionality (and fixes) added to python-novaclient in the last three months no new tag was requested nor created. In this case it means we're missing some 84 odd commits made since the 2.17.0 tag was created. >> >> Given this what I'm wondering is if there is any reason we shouldn't move to a model where we rebase the python-*client packages to the latest git commit at each milestone (J-1, J-2, J-3, RC, GA), regardless of the existence of a tag, to ensure we are always picking up the latest changes? > > The main reason not to do this IMO is that all of the upstream CI testing > is done against the latest released version from pypi, not the latest > client code from trunk. +1 > My experience working on Heat (which uses pretty much all of the > python-*clients) is that regressions can and quite frequently do happen > when a new client version is tagged, which implies you're taking a > significant risk by taking a random bunch of git snapshots instead of a > release which has been verified by hundreds/thousands of CI runs upstream. Not only CI but also folks who actually use them. Other distros package tagged versions as well. IMHO using tagged versions is much safer than a random commit with random new bugs. It's not unusual for clients to release a bugfix release very shortly after a new release due to bugs discovered as people start using it. If we chase master, I'd expect *much* more breakage. > Here's one example which happened recently and is still not resolved: > > https://bugs.launchpad.net/python-novaclient/+bug/1322183 > > Currently if you install latest novaclient from trunk, several heat unit > tests break, whereas the latest release works fine.
> > Part of the value of working with upstream devs to request a new release is > tagged, is they will hopefully have better visibility of potential blocker > issues, so the tag should only be applied when there are no known major > bugs and fixes/features are fully merged (as opposed to cutting a package > which contains a paritally implemented feature or bugfix, e.g a subset of a > series of patches have been merged and some are still under review). > > I'd suggest the solution to this problem is just communication with the > upstream core devs and PTL's - IME most will be quite responsive if you > ask for a new client release for a valid reason. Well said, I fully agree with this. If there is something missing in the latest tagged build, then poke the dev/PTL to release a new version. If that's not possible for some reason, I can easily do a backport - it's a matter of opening a bug. Cheers Jakub From dallan at redhat.com Fri May 30 13:54:17 2014 From: dallan at redhat.com (Dave Allan) Date: Fri, 30 May 2014 09:54:17 -0400 Subject: [Rdo-list] python-*client packaging In-Reply-To: <20140530090648.GA5897@t430slt.redhat.com> References: <988705872.19052788.1401116665554.JavaMail.zimbra@redhat.com> <53835DFF.9000505@redhat.com> <53836ABA.3050209@redhat.com> <1710698042.22367004.1401401084367.JavaMail.zimbra@redhat.com> <20140530090648.GA5897@t430slt.redhat.com> Message-ID: <20140530135417.GO15278@redhat.com> On Fri, May 30, 2014 at 10:06:49AM +0100, Steven Hardy wrote: > On Thu, May 29, 2014 at 06:04:44PM -0400, Steve Gordon wrote: > > I guess what I am driving at is that the process of creating a tag in the OpenStack client projects occurs at pretty arbitrary points in time based on the needs of other OpenStack projects that want to set requirements on them rather than anything relating to the needs of downstream distributions such as RDO or RHELOSP. Because no other OpenStack project needed the particular functionality (and fixes) added to python-novaclient in the last three months no new tag was requested nor created. In this case it means we're missing some 84 odd commits made since the 2.17.0 tag was created. > > > > Given this what I'm wondering is if there is any reason we shouldn't move to a model where we rebase the python-*client packages to the latest git commit at each milestone (J-1, J-2, J-3, RC, GA), regardless of the existence of a tag, to ensure we are always picking up the latest changes? > > The main reason not to do this IMO is that all of the upstream CI testing > is done against the latest released version from pypi, not the latest > client code from trunk. > > My experience working on Heat (which uses pretty much all of the > python-*clients) is that regressions can and quite frequently do happen > when a new client version is tagged, which implies you're taking a > significant risk by taking a random bunch of git snapshots instead of a > release which has been verified by hundreds/thousands of CI runs upstream. This is the concern I voiced on the Nova call yesterday. If the tagged versions are being used in CI, then I feel quite strongly that we take advantage of that testing and use the tagged versions, not the current git head at the time we want to package. Dave > Here's one example which happened recently and is still not resolved: > > https://bugs.launchpad.net/python-novaclient/+bug/1322183 > > Currently if you install latest novaclient from trunk, several heat unit > tests break, whereas the latest release works fine.
> > Part of the value of working with upstream devs to request a new release is > tagged, is they will hopefully have better visibility of potential blocker > issues, so the tag should only be applied when there are no known major > bugs and fixes/features are fully merged (as opposed to cutting a package > which contains a paritally implemented feature or bugfix, e.g a subset of a > series of patches have been merged and some are still under review). > > I'd suggest the solution to this problem is just communication with the > upstream core devs and PTL's - IME most will be quite responsive if you > ask for a new client release for a valid reason. > > Steve From mattdm at fedoraproject.org Fri May 30 13:57:23 2014 From: mattdm at fedoraproject.org (Matthew Miller) Date: Fri, 30 May 2014 09:57:23 -0400 Subject: [Rdo-list] FYI - Fedora.next calls for early testing (and some details on how) In-Reply-To: <20140530043506.GK29853@tesla.pnq.redhat.com> References: <20140529052024.GG29853@tesla.pnq.redhat.com> <20140529200716.GA9106@mattdm.org> <20140530035945.GI29853@tesla.pnq.redhat.com> <20140530042330.GJ29853@tesla.pnq.redhat.com> <20140530043506.GK29853@tesla.pnq.redhat.com> Message-ID: <20140530135723.GA22013@mattdm.org> On Fri, May 30, 2014 at 10:05:06AM +0530, Kashyap Chamarthy wrote: > > http://koji.fedoraproject.org/koji/taskinfo?taskID=6909806 > The above URL changes every day. Here's the high level build task, > that'll track all images as they're built > http://koji.fedoraproject.org/koji/tasks?state=all&view=tree&method=image&order=-id There's also http://alt.fedoraproject.org/pub/alt/nightly-composes/, but it looks like it needs to be updated for the new process. -- Matthew Miller -- Fedora Project -- "Tepid change for the somewhat better!" From eberg at rubensteintech.com Fri May 30 17:31:49 2014 From: eberg at rubensteintech.com (Eric Berg) Date: Fri, 30 May 2014 13:31:49 -0400 Subject: [Rdo-list] Simplest Icehouse Implementation Architecture In-Reply-To: <538792B0.8050500@rubensteintech.com> References: <53878AFD.4010909@rubensteintech.com> <20140529193912.GC7137@redhat.com> <538792B0.8050500@rubensteintech.com> Message-ID: <5388C085.7080805@rubensteintech.com> Thoughts, anyone? I'm moving forward with the following: packstack --install-hosts=192.168.0.37,192.168.0.39 and will add another compute host in the future. Still thinking about what the network should look like, but I'm probably overthinking it for a change. On 5/29/14, 4:04 PM, Eric Berg wrote: > Thanks as always, Lars. > > By "development environment", I mean several things: > > 1) Developers work on these hosts. We're a web shop, and one or more > developers will spin up dev web servers on these hosts > 2) Ideally, I'd also want to validate our production cloud environment > so that when we deploy it in production, we have validated the > configuration. > > For the time being, however, #2 is a nice-to-have and does not at all > seem to fit in with the fairly aggressive goal of implementing a new > RDO deployment in 1-3 days (way over that already as you might well > imagine). > > So, basically, I want to migrate from the current set of physical > hosts on which developers now work to a cloud environment which will > host no more than 25 VMs. > > Since we have two fairly well-endowed hosts targeted for use as > compute hosts, would it be realistic to use one as the controller, > while still using it as a compute host?
> > On a related note, what happens if I lose the controller box in this > two-compute-hosts-one-as-controller-host scenario? I believe that I'm > out of business until I can remedy that, and if I wanted to set up the > two hosts as both compute hosts as well as putting some kind of HA in > place so that control could pass from one to the other of these boxes, > would that be possible? Recommended? > > Must the control host be separate in order to do (live) migrations? > > Is it a requirement that the control host be separate if I want to > deploy 2 compute hosts? > > And, if I choose the two-host solution, how does the network host > (through which my understanding is that all network access to the > instances must pass) play into this? > > Eric > > On 5/29/14, 3:39 PM, Lars Kellogg-Stedman wrote: >> On Thu, May 29, 2014 at 03:31:09PM -0400, Eric Berg wrote: >>> So, are either of the following architectures sufficient for a >>> development >>> environment? >> Depending on your definition of "development environment", a *single* >> host may be sufficient. It really depends on how many instances you >> expect to support, of what size, and what sort of workloads you'll be >> hosting. >> >> Having a seperate "control" node makes for nice logical separation of >> roles, which I find helpful in diagnosing problems. >> >> Having more than one compute node lets you experiment with things like >> instance migration, etc, which may be useful if you eventually plan to >> move to a production configuration. >> > -- Eric Berg Sr. Software Engineer Rubenstein Technology Group 55 Broad Street, 14th Floor New York, NY 10004-2501 (212) 518-6400 (212) 518-6467 fax eberg at rubensteintech.com www.rubensteintech.com
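For a two-host layout like the one described above, one possible refinement (a sketch only; the IPs are the ones from Eric's message, the answer-file name is arbitrary, and the exact parameter names vary between packstack releases, so check the generated file) is to drive packstack from an answer file instead of --install-hosts, which makes adding the third compute host later a matter of editing one value and re-running:

$ packstack --gen-answer-file=rdo-answers.txt
# edit rdo-answers.txt and set the compute host list, e.g.:
#   CONFIG_NOVA_COMPUTE_HOSTS=192.168.0.37,192.168.0.39
$ packstack --answer-file=rdo-answers.txt
# later, append the new node's IP to the same parameter and re-run;
# many guides also suggest listing the already-deployed hosts in EXCLUDE_SERVERS
# so they are not reconfigured:
$ packstack --answer-file=rdo-answers.txt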