From rbowen at redhat.com Sun Feb 1 08:13:15 2015 From: rbowen at redhat.com (Rich Bowen) Date: Sun, 01 Feb 2015 09:13:15 +0100 Subject: [Rdo-list] RDO meetup at FOSDEM? In-Reply-To: <5491E3A0.3090304@redhat.com> References: <549059AC.4080807@redhat.com> <20141216162528.GC9012@tesla.redhat.com> <20141217160536.GB4155@turing.berg.ol> <20141217195740.GA3420@tesla.redhat.com> <5491E3A0.3090304@redhat.com> Message-ID: <54CDE01B.2010607@redhat.com> On 12/17/2014 09:12 PM, Rich Bowen wrote: > > I will gladly defer to people who have more FOSDEM experience than I as > to when/where it's better to meet. > > For what it's worth, I asked on #fosdem, and they said that the hacker > rooms are first come, first serve, register at the info desk onsite. So > sounds pretty iffy. It's day two at FOSDEM, and I have again been told that there are no rooms available. Please do drop by and say hello at the CentOS booth. We have a few RDO stickers left, but not much else. Also, Haikel will be giving a talk on OpenStack on Fedora and CentOS at 13:25 in H.1302 (Depage). 
--Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From hguemar at fedoraproject.org Mon Feb 2 15:00:02 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 2 Feb 2015 15:00:02 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting Message-ID: <20150202150002.B5CCA60A958B@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO packaging meeting on 2015-02-04 from 15:00:00 to 16:00:00 UTC The meeting will be about: RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging)) Every week on #rdo on freenode Source: https://apps.fedoraproject.org/calendar//meeting/2022/ From whayutin at redhat.com Mon Feb 2 16:15:56 2015 From: whayutin at redhat.com (whayutin) Date: Mon, 02 Feb 2015 11:15:56 -0500 Subject: [Rdo-list] rdopkg overview In-Reply-To: <54CB8CB7.6030004@redhat.com> References: <20150129214843.GG24719@redhat.com> <54CB8CB7.6030004@redhat.com> Message-ID: <1422893756.3116.11.camel@redhat.com> On Fri, 2015-01-30 at 14:52 +0100, Jakub Ruzicka wrote: > Very nice overview Steve, thanks for writing this down! > > My random thoughts on the matter inline. > > On 29.1.2015 22:48, Steve Linabery wrote: > > I have been struggling with the amount of information to convey and what level of detail to include. Since I can't seem to get it perfect to my own satisfaction, here is the imperfect (and long, sorry) version to begin discussion. > > > > This is an overview of where things stand (rdopkg CI 'v0.1'). > > For some time, I'm wondering if we should really call it rdopkg CI since > it's not really tied to rdopkg but to RDO. You can use most of rdopkg on > any distgit. I reckon we should simply call it RDO CI to avoid > confusion. I for one don't underestimate the impact of naming stuff ;) > > > Terminology: > > 'Release' refers to an OpenStack release (e.g. 
havana,icehouse,juno) > > 'Dist' refers to a distro supported by RDO (e.g. fedora-20, epel-6, epel-7) > > 'phase1' is the initial smoketest for an update submitted via `rdopkg update` > > 'phase2' is a full-provision test for accumulated updates that have passed phase1 > > 'snapshot' means an OpenStack snapshot of a running instance, i.e. a disk image created from a running OS instance. > > > > The very broad strokes: > > ----------------------- > > > > rdopkg ci is triggered when a packager uses `rdopkg update`. > > > > When a review lands in the rdo-update gerrit project, a 'phase1' smoketest is initiated via jenkins for each Release/Dist combination present in the update (e.g. if the update contains builds for icehouse/fedora-20 and icehouse/epel-6, each set of RPMs from each build will be smoketested on an instance running the associated Release/Dist). If *all* supported builds from the update pass phase1, then the update is merged into rdo-update. Updates that pass phase1 accumulate in the updates/ directory in the rdo-update project. > > > > Periodically, a packager may run 'phase2'. This takes everything in updates/ and uses those RPMs + RDO production repo to provision a set of base images with packstack aio. Again, a simple tempest test is run against the packstack aio instances. If all pass, then phase2 passes, and the `rdopkg update` yaml files are moved from updates/ to ready/. > > > > At that point, someone with the keys to the stage repos will push the builds in ready/ to the stage repo. If CI against stage repo passes, stage is rsynced to production. > > > > Complexity, Part 1: > > ------------------- > > > > Rdopkg CI v0.1 was designed around the use of OpenStack VM disk snapshots. On a periodic basis, we provision two nodes for each supported combination in [Releases] X [Dists] (e.g. "icehouse, fedora-20" "juno, epel-7" etc). One node is a packstack aio instance built against RDO production repos, and the other is a node running tempest. 
After a simple tempest test passes for all the packstack aio nodes, we would snapshot the set of instances. Then when we want to do a 'phase1' test for e.g. "icehouse, fedora-20", we can spin up the instances previously snapshotted and save the time of re-running packstack aio. > > > > Using snapshots saves approximately 30 min of wait time per test run by skipping provisioning. Using snapshots imposes a few substantial costs/complexities though. First and most significant, snapshots need to be reinstantiated using the same IP addresses that were present when packstack and tempest were run during the provisioning. This means we have to have concurrency control around running only one phase1 run at a time; otherwise an instance might fail to provision because its 'static' IP address is already in use by another run. The second cost is that in practice, a) our OpenStack infrastructure has been unreliable, b) not all Release/Dist combinations reliably provision. So it becomes hard to create a full set of snapshots reliably. > > > > Additionally, some updates (e.g. when an update comes in for openstack-puppet-modules) prevent the use of a previously-provisioned packstack instance. Continuing with the o-p-m example: that package is used for provisioning. So simply updating the RPM for that package after running packstack aio doesn't tell us anything about the package sanity (other than perhaps if a new, unsatisfied RPM dependency was introduced). > > > > Another source of complexity comes from the nature of the rdopkg update 'unit'. Each yaml file created by `rdopkg update` can contain multiple builds for different Release,Dist combinations. So there must be a way to 'collate' the results of each smoketest for each Release,Dist and pass phase1 only if all updates pass. Furthermore, some combinations of Release,Dist are known (at times, for various ad hoc reasons) to fail testing, and those combinations sometimes need to be 'disabled'. 
For example, if we know that icehouse/f20 is 'red' on a given day, we might want an update containing icehouse/fedora-20,icehouse/epel-6 to test only the icehouse/epel-6 combination and pass if that passes.
> >
> > Finally, pursuant to the previous point, there need to be 'control' structure jobs for provision/snapshot, phase1, and phase2 runs that pass (and perform some action upon passing) only when all their 'child' jobs have passed.
> >
> > The way we have managed this complexity to date is through the use of the jenkins BuildFlow plugin. Here's some ASCII art (courtesy of 'tree') to show how the jobs are structured now (these are descriptive job names, not the actual jenkins job names). BuildFlow jobs are indicated by (bf).
> >
> > .
> > `-- rdopkg_master_flow (bf)
> >     |-- provision_and_snapshot (bf)
> >     |   |-- provision_and_snapshot_icehouse_epel6
> >     |   |-- provision_and_snapshot_icehouse_f20
> >     |   |-- provision_and_snapshot_juno_epel7
> >     |   `-- provision_and_snapshot_juno_f21
> >     |-- phase1_flow (bf)
> >     |   |-- phase1_test_icehouse_f20
> >     |   `-- phase1_test_juno_f21
> >     `-- phase2_flow (bf)
> >         |-- phase2_test_icehouse_epel6
> >         |-- phase2_test_icehouse_f20
> >         |-- phase2_test_juno_epel7
> >         `-- phase2_test_juno_f21
>
> As a consumer of CI results, my main problem with this is it takes about
> 7 clicks to get to the actual error.

+1111
> >
> > When a change comes in from `rdopkg update`, the rdopkg_master_flow job is triggered. It's the only job that gets triggered from gerrit, so it kicks off phase1_flow. phase1_flow runs 'child' jobs (normal jenkins jobs, not buildflow) for each Release,Dist combination present in the update.
> >
> > provision_and_snapshot is run by manually setting a build parameter (BUILD_SNAPS) in the rdopkg_master_flow job, and triggering the build of rdopkg_master_flow. 
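The collation rule described above — phase1 passes only when every enabled Release/Dist build in the update passes, with known-red combinations skippable — can be sketched roughly as follows. The function and combination names are illustrative, not the actual jenkins job logic:

```python
# Rough sketch of the phase1 collation rule: an update passes only if
# every enabled (release, dist) build passed its smoketest. Combinations
# known to be 'red' can be disabled and are skipped entirely.
# Names here are illustrative, not the real jenkins job definitions.

def collate_phase1(results, disabled=frozenset()):
    """results maps (release, dist) -> smoketest outcome (True/False)."""
    enabled = {combo: ok for combo, ok in results.items()
               if combo not in disabled}
    # If every combination is disabled, nothing was verified;
    # refuse to merge rather than pass an untested update.
    if not enabled:
        return False
    return all(enabled.values())

update = {("icehouse", "fedora-20"): False,
          ("icehouse", "epel-6"): True}

# With icehouse/fedora-20 known red and disabled, only epel-6 counts:
print(collate_phase1(update, disabled={("icehouse", "fedora-20")}))  # True
print(collate_phase1(update))  # False
```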
> > > > phase2 is invoked similar to the provision_and_snapshot build, by checking 'RUN_PHASE2' in the rdopkg_master_flow build parameters before executing a build thereof. > > > > Concurrency control is a side effect of requiring the user or gerrit to execute rdopkg_master_flow for every action. There can be only one rdopkg_master_flow build executing at any given time. > > > > Complexity, Part 2: > > ------------------- > > > > In addition to the nasty complexity of using nested BuildFlow type jobs, each 'worker' job (i.e. the non-buildflow type jobs) has some built in complexity that is reflected in the amount of logic in each job's bash script definition. > > > > Some of this has been alluded to in previous points. For instance, each job in the phase1 flow needs to determine, for each update, if the update contains a package that requires full packstack aio provisioning from a base image (e.g. openstack-puppet-modules). This 'must provision' list needs to be stored somewhere that all jobs can read it, and it needs to be dynamic enough to add to it as requirements dictate. > > > > But additionally, for package sets not requiring provisioning a base image, phase1 job needs to query the backing OpenStack instance to see if there exists a 'known good' snapshot, get the images' UUIDs from OpenStack, and spin up the instances using the snapshot images. > > > > This baked-in complexity in the 'worker' jenkins jobs has made it difficult to maintain the job definitions, and more importantly difficult to run using jjb or in other more 'orthodox' CI-type ways. The rdopkg CI stuff is a bastard child of a fork. It lives in its own mutant gene pool. > > lolololol > > > > A Way Forward...? > > ---------------- > > > > Wes Hayutin had a good idea that might help reduce some of the complexity here as we contemplate a) making rdopkg CI public, b) moving toward rdopkg CI 0.2. 
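The snapshot-or-full-provision decision the phase1 worker jobs make can be sketched like this. `MUST_PROVISION` stands in for the shared, editable 'must provision' list, and the `snapshots` dict stands in for querying the backing OpenStack for known-good image UUIDs — both are assumptions for illustration, not the actual job scripts:

```python
# Illustrative sketch of the per-update decision described above: if the
# update touches a package used for provisioning itself (e.g.
# openstack-puppet-modules), a fresh packstack-aio run from a base image
# is required; otherwise reuse a 'known good' snapshot when one exists
# for the (release, dist) combination.

MUST_PROVISION = {"openstack-puppet-modules"}  # shared, editable list

def choose_image(update_packages, snapshots, combo):
    """Return ('base', None) for a full provision, or ('snapshot', uuid)
    when a known-good snapshot exists for this (release, dist) combo."""
    if any(pkg in MUST_PROVISION for pkg in update_packages):
        return ("base", None)
    uuid = snapshots.get(combo)  # stand-in for the OpenStack image query
    return ("snapshot", uuid) if uuid else ("base", None)

snaps = {("icehouse", "fedora-20"): "c0ffee-uuid"}
print(choose_image(["openstack-nova"], snaps, ("icehouse", "fedora-20")))
# -> ('snapshot', 'c0ffee-uuid')
print(choose_image(["openstack-puppet-modules"], snaps, ("icehouse", "fedora-20")))
# -> ('base', None)
```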
> > > > His idea was a) stop using snapshots since the per-test-run savings don't seem to justify the burden they create, b) do away with BuildFlow by including the 'this update contains builds for (Release1,Dist2),...,(ReleaseN,DistM)' information in the gerrit change topic. > > It's easy to modify `rdopkg update` to include this information. > However, it's redundant so you can (in theory) submit an update where > this summary won't match the actual YAML data. That's probably > completely irrelevant, but I'm mentioning it nonetheless :) > I was hoping that we may be able to wrap the submission such that the topic is generated automatically from the YAML data. > > > I think that's a great idea, but I have a superstitious gut feeling that we may lose some 'transaction'y-ness from the current setup. For example, what happens if phase1 and phase2 overlap their execution? It's not that I have evidence that this will be a problem; it's more that we had these issues worked out fairly well with rdopkg CI 0.1, and I think the change warrants some scrutiny/thought (which clearly I have not done!). > > Worth a try if you ask me. I'll gladly help with any scripts/helpers > needed, so just let me know. > > > > We'd still need to work out a way to execute phase2, though. There would be no `rdopkg update` event to trigger phase2 runs. I'm not sure how we'd do that without a BuildFlow. BuildFlow jobs also allow parallelization of the child jobs, and I'm not sure how we could replicate that without using that type of job. > > There can be an rdopkg action (i.e. rdopkg update --phase2) to do whatever > you need if that helps. So I suppose Phase 1 would be reduced to a basic packstack job that runs through with the update from the devel's submission. Phase 2 addresses what happens when a group of submissions are pooled together: does one submission break another submission in a way that could not have been detected in the individual submissions themselves? I see two options here: A. 
Let the issue get sorted out in the stage yum repo. B. Create a temporary yum repo with a collection of submissions, with a job hitting RDO production and the temporary repo. Thoughts? > > > > Whew. I hope this was helpful. I'm saving a copy of this text to http://slinabery.fedorapeople.org/rdopkg-overview.txt > > Sure is, thanks! > > > Cheers, > Jakub > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From hguemar at fedoraproject.org Mon Feb 2 17:54:43 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Mon, 2 Feb 2015 18:54:43 +0100 Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting In-Reply-To: <20150202150002.B5CCA60A958B@fedocal02.phx2.fedoraproject.org> References: <20150202150002.B5CCA60A958B@fedocal02.phx2.fedoraproject.org> Message-ID: People interested in EL6 packages are kindly invited to participate. If the time isn't working for you, let us know. H. From mohammed.arafa at gmail.com Mon Feb 2 19:18:51 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Mon, 2 Feb 2015 14:18:51 -0500 Subject: [Rdo-list] staypuft on centos 6 In-Reply-To: References: Message-ID: Hello, I have been made to understand that staypuft is not (yet) intended for CentOS and is currently targeted at RHEL (6?). I have no objections to QA'ing staypuft on CentOS 6 (or RHEL, if someone wants to give me access :D ! ) On Mon, Jan 26, 2015 at 6:56 AM, Mohammed Arafa wrote: > Hi, > I am trying to install staypuft on CentOS 6 and it is failing. It also > tells me to look in the logs for "ERROR" messages, which I can't find. I would > appreciate some help. 
> > the steps i took to reach the failure > > yum -y update > yum -y install centos-release-SCL.x86_64 > ftp://ftp.muug.mb.ca/mirror/fedora/epel/6/i386/epel-release-6-8.noarch.rpm > http://yum.theforeman.org/releases/latest/el6/x86_64/foreman-release.rpm > > yum -y update > hostname -f > #verify hostname is in fqdn format in /etc/sysconfig/network and > /etc/hosts if there is no dns > > yum -y install foreman-installer-staypuft > staypuft-installer --foreman-plugin-discovery-install-images=true > > the error output is > > 1 > Starting networking setup > Networking setup has finished > Preparing installation Done > Something went wrong! Check the log for ERROR-level output > * Foreman is running at https://staypuft.marafa.vm > Initial credentials are admin / ru4jvHvkewz4Lfdz > * Foreman Proxy is running at https://staypuft.marafa.vm:8443 > * Puppetmaster is running at port 8140 > The full log is at /var/log/foreman-installer/foreman-installer.log > Not running provisioning configuration since installation encountered > errors, exit code was 1 > Something went wrong! Check the log for ERROR-level output > The full log is at /var/log/foreman-installer/foreman-installer.log > [root at staypuft ~]# cat /var/log/foreman-installer/foreman-installer.log > |grep -i error > [root at staypuft ~]# netstat -tupane |grep -E "8443|8140" > > if it is useful to know this is of course a VM > also .. nothing was installed so nothing is running on ports 8443 nor > 8140. which means the output is 100% misleading > > your help and input is much appreciated. > > thank you > -- > > > > > *805010942448935* > > > *GR750055912MA* > > > *Link to me on LinkedIn * > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From apevec at gmail.com Mon Feb 2 23:12:21 2015 From: apevec at gmail.com (Alan Pevec) Date: Tue, 3 Feb 2015 00:12:21 +0100 Subject: [Rdo-list] rdopkg overview In-Reply-To: <1422893756.3116.11.camel@redhat.com> References: <20150129214843.GG24719@redhat.com> <54CB8CB7.6030004@redhat.com> <1422893756.3116.11.camel@redhat.com> Message-ID: 2015-02-02 17:15 GMT+01:00 whayutin : > On Fri, 2015-01-30 at 14:52 +0100, Jakub Ruzicka wrote: >> For some time, I'm wondering if we should really call it rdopkg CI since >> it's not really tied to rdopkg but to RDO. You can use most of rdopkg on >> any distgit. I reckon we should simply call it RDO CI to avoid >> confusion. I for one don't underestimate the impact of naming stuff ;) I would call this particular job "RDO update CI" > So I suppose Phase 1 would be a reduced to a basic packstack job that > runs through w/ the update from the devels submission. > > Phase 2 addressed what happens when a group of submissions are pooled > together. Does one submission break another submission that could not > have been detected in the individual submissions themselves. I see two > options here.. > > A. Let the issue get sorted out in the stage yum repo. > B. Create a temporary yum repo with collection of submissions with a job > hitting rdo production and the temporary repo. Applying KISS I'd take A. Also let's rename stage to testing, to be more like Fedora/EPEL updates process. After RDO update passes Phase1 it will be pushed to the testing repo and once stage/testing CI job passes, repo maintainer pushes all updates live. 
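The proposed flow — phase1-passing updates accumulate in a testing repo, and a passing CI run against testing lets the maintainer push everything live — can be sketched as two promotion steps. The repo lists, update names, and result flags below are illustrative assumptions, not an existing tool:

```python
# Sketch of the proposed update flow (option A): updates that pass
# phase1 accumulate in a 'testing' repo; when the CI job against the
# whole testing repo passes, the maintainer pushes everything live.
# Lists stand in for repos; names and flags are illustrative only.

def push_to_testing(testing, update, phase1_passed):
    """Add an update to the testing repo only if its phase1 run passed."""
    if phase1_passed:
        testing.append(update)
    return testing

def push_live(testing, live, testing_ci_passed):
    """Promote every accumulated testing update at once, then empty
    testing — mirroring 'maintainer pushes all updates live'."""
    if testing_ci_passed and testing:
        live.extend(testing)
        testing.clear()
    return live

testing, live = [], []
push_to_testing(testing, "openstack-nova-2014.2.2", phase1_passed=True)
push_to_testing(testing, "openstack-glance-2014.2.2", phase1_passed=False)
push_live(testing, live, testing_ci_passed=True)
print(live)     # ['openstack-nova-2014.2.2']
print(testing)  # []
```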
Cheers, Alan From hguemar at fedoraproject.org Tue Feb 3 16:25:54 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Tue, 3 Feb 2015 17:25:54 +0100 Subject: [Rdo-list] rdopkg overview In-Reply-To: References: <20150129214843.GG24719@redhat.com> <54CB8CB7.6030004@redhat.com> <1422893756.3116.11.camel@redhat.com> Message-ID: 2015-02-03 0:12 GMT+01:00 Alan Pevec : > 2015-02-02 17:15 GMT+01:00 whayutin : >> On Fri, 2015-01-30 at 14:52 +0100, Jakub Ruzicka wrote: >>> For some time, I'm wondering if we should really call it rdopkg CI since >>> it's not really tied to rdopkg but to RDO. You can use most of rdopkg on >>> any distgit. I reckon we should simply call it RDO CI to avoid >>> confusion. I for one don't underestimate the impact of naming stuff ;) > > I would call this particular job "RDO update CI" > >> So I suppose Phase 1 would be a reduced to a basic packstack job that >> runs through w/ the update from the devels submission. >> >> Phase 2 addressed what happens when a group of submissions are pooled >> together. Does one submission break another submission that could not >> have been detected in the individual submissions themselves. I see two >> options here.. >> >> A. Let the issue get sorted out in the stage yum repo. >> B. Create a temporary yum repo with collection of submissions with a job >> hitting rdo production and the temporary repo. > > Applying KISS I'd take A. Also let's rename stage to testing, to be > more like Fedora/EPEL updates process. > After RDO update passes Phase1 it will be pushed to the testing repo > and once stage/testing CI job passes, repo maintainer pushes all > updates live. > +1 for testing repo, it simplifies the workflow. H. 
> > Cheers, > Alan > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From rbowen at redhat.com Tue Feb 3 22:09:21 2015 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 03 Feb 2015 17:09:21 -0500 Subject: [Rdo-list] Fwd: [OpenStack Marketing] OpenStack Contributors: Interested in speaking at Cloud Expo Europe? In-Reply-To: <1422986663.64978283@mail.openstack.org> References: <1422986663.64978283@mail.openstack.org> Message-ID: <54D14711.4050109@redhat.com> FYI - If you're planning to be at Cloud Expo in March, and you're interested in giving a 25 minute presentation on OpenStack-related topics, get in touch with Kathy at the address below. -------- Forwarded Message -------- Subject: [OpenStack Marketing] OpenStack Contributors: Interested in speaking at Cloud Expo Europe? Date: Tue, 3 Feb 2015 12:04:23 -0600 (CST) From: Kathy Cacciatore To: community at lists.openstack.org CC: marketing at lists.openstack.org We have a 25 minute presentation opportunity in the in-booth theater in the Developer and Open Cloud Park booth of Cloud Expo Europe. It's either going to be on March 11 or 12. We'd like a contributor to speak or join me in presenting the OpenStack for Technical Audiences presentation . Note, the facts, course list, etc. will be updated just prior to the event. Please reply to me directly if interested (kathyc at openstack.org ). There may be another spot on a Hybrid Cloud panel. For this one, I'll be looking for users with private clouds accessing public clouds for extra resources on demand. Thank you, as always, for your support! 
-- Regards, Kathy Cacciatore Consulting Marketing Manager OpenStack Foundation 1-512-970-2807 (mobile) Part time: Monday - Thursday, 9am - 2pm US CT kathyc at openstack.org -------------- next part -------------- _______________________________________________ Marketing mailing list Marketing at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/marketing From vedsarkushwaha at gmail.com Wed Feb 4 13:12:08 2015 From: vedsarkushwaha at gmail.com (Vedsar Kushwaha) Date: Wed, 4 Feb 2015 18:42:08 +0530 Subject: [Rdo-list] problem with yum rdo-relese.rpm Message-ID: help needed.. yum install -y https://rdo.fedorapeople.org/rdo-release.rpm Loaded plugins: fastestmirror, langpacks Cannot open: https://rdo.fedorapeople.org/rdo-release.rpm. Skipping. Error: Nothing to do wget https://rdo.fedorapeople.org/rdo-release.rpm is downloading but then after installation, yum list shows error http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/repodata/repomd.xml: [Errno 14] HTTPS Error 302 - Found -- Vedsar Kushwaha M.Tech-Computational Science Indian Institute of Science -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian at berendt.io Wed Feb 4 14:00:32 2015 From: christian at berendt.io (Christian Berendt) Date: Wed, 04 Feb 2015 15:00:32 +0100 Subject: [Rdo-list] problem with yum rdo-relese.rpm In-Reply-To: References: Message-ID: <54D22600.5080001@berendt.io> On 02/04/2015 02:12 PM, Vedsar Kushwaha wrote: > yum install -y https://rdo.fedorapeople.org/rdo-release.rpm > Loaded plugins: fastestmirror, langpacks > Cannot open: https://rdo.fedorapeople.org/rdo-release.rpm. Skipping. > Error: Nothing to do This is a linked file. Maybe try it directly with https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm. If yum install with the mentioned URL is not working like expected first download the RPM package with wget, curl, .. and install it with yum localinstall. 
Christian. From apevec at gmail.com Wed Feb 4 16:02:39 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 4 Feb 2015 17:02:39 +0100 Subject: [Rdo-list] [meeting] RDO Packaging meeting minutes (2015-02-04) Message-ID: ======================================== #rdo: RDO packaging meeting (2015-02-04) ======================================== Meeting started by number80 at 15:06:05 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2015-02-04/rdo.2015-02-04-15.06.log.html . Meeting summary --------------- * roll call (number80, 15:06:11) * agenda at https://etherpad.openstack.org/p/RDO-Packaging (apevec, 15:06:37) * 2014.2.2 Juno updates (number80, 15:09:10) * ACTION: RDO packages maintainers are invited to rebase their packages on Friday/Monday (number80, 15:10:55) * LINK: http://tarballs.openstack.org/keystone/keystone-stable-juno.tar.gz (apevec, 15:13:00) * pkg owners to submit only Fedora builds with rdopkg update (apevec, 15:23:48) * CBS builds for EL7 by hguemar/apevec in a separate rdopkg update (apevec, 15:24:01) * ACTION: jruzicka to create CBS bsource very quickly (apevec, 15:24:49) * ACTION: hguemar/apevec to do CBS builds for EL7 and submit a separate rdopkg update (apevec, 15:26:32) * ACTION: All package owners to submit only Fedora builds with rdopkg update (apevec, 15:26:46) * Juno/EL6 builds (number80, 15:27:02) * EL6 Juno will be developed in github/openstack-packages/ juno branch (to be created) (apevec, 15:40:03) * with %dist conditionals in spec (apevec, 15:40:19) * lookup in fedora openstack-nova git history for examples (apevec, 15:40:36) * ACTION: alphacc will come back with a wishlist in order of packages to build for EL6 Juno (apevec, 15:41:10) * nova ceilometer for start, may need neutron for the migration (apevec, 15:41:42) * ACTION: alphacc will start this week with nova (apevec, 15:42:01) * ACTION: jruzicka will help number80 with clients in order to test rdopkg working with EL6 builds (apevec, 15:42:21) * alphacc's 
legal approved FPCA! (apevec, 15:43:28) * ACTION: alphacc will make all interested parties work too (apevec, 15:44:10) * ACTION: apevec to create openstack-packages juno branch from f22/master (apevec, 15:45:03) * Follow-up delorean repo test (apevec, 15:46:55) * Delorean (trunk Kilo RPMs) repo is http://209.132.178.33/repos/current/ (apevec, 15:50:32) * warning may eat kittens! (apevec, 15:50:44) * gchamoul is debugging Packstack / puppet-modules in Delorean repo (apevec, 15:51:09) * ACTION: apevec to check with derekh about Delorean refreshing itself (apevec, 15:52:54) * ACTION: rbowen will ask again about DNS A record for trunk.rdoproject.org to point to current Delorean IP 209.132.178.33 (apevec, 15:53:58) * ACTION: gchamoul will debug Packstack / puppet-modules in Delorean repo (apevec, 15:54:25) * ACTION: weshay_ to move Delorean CI job to http://prod-rdojenkins.rhcloud.com/ (apevec, 15:55:09) * Kilo-2 this week, we want to publish working Delorean snapshot to rdo.fedorapeople.org/openstack-trunk (apevec, 15:55:54) Meeting ended at 15:58:43 UTC. 
Action Items ------------ * RDO packages maintainers are invited to rebase their packages on Friday/Monday * jruzicka to create CBS bsource very quickly * hguemar/apevec to do CBS builds for EL7 and submit a separate rdopkg update * All package owners to submit only Fedora builds with rdopkg update * alphacc will come back with a wishlist in order of packages to build for EL6 Juno * alphacc will start this week with nova * jruzicka will help number80 with clients in order to test rdopkg working with EL6 builds * alphacc will make all interested parties work too * apevec to create openstack-packages juno branch from f22/master * apevec to check with derekh about Delorean refreshing itself * rbowen will ask again about DNS A record for trunk.rdoproject.org to point to current Delorean IP 209.132.178.33 * gchamoul will debug Packstack / puppet-modules in Delorean repo * weshay_ to move Delorean CI job to http://prod-rdojenkins.rhcloud.com/ Action Items, by person ----------------------- * alphacc * alphacc will come back with a wishlist in order of packages to build for EL6 Juno * alphacc will start this week with nova * alphacc will make all interested parties work too * apevec * hguemar/apevec to do CBS builds for EL7 and submit a separate rdopkg update * apevec to create openstack-packages juno branch from f22/master * apevec to check with derekh about Delorean refreshing itself * gchamoul * gchamoul will debug Packstack / puppet-modules in Delorean repo * jruzicka * jruzicka to create CBS bsource very quickly * jruzicka will help number80 with clients in order to test rdopkg working with EL6 builds * number80 * jruzicka will help number80 with clients in order to test rdopkg working with EL6 builds * rbowen * rbowen will ask again about DNS A record for trunk.rdoproject.org to point to current Delorean IP 209.132.178.33 * weshay_ * weshay_ to move Delorean CI job to http://prod-rdojenkins.rhcloud.com/ * **UNASSIGNED** * RDO packages maintainers are invited to 
rebase their packages on Friday/Monday * All package owners to submit only Fedora builds with rdopkg update People Present (lines said) --------------------------- * apevec (137) * number80 (28) * alphacc (18) * eggmaster (14) * gchamoul (14) * jruzicka (13) * kbsingh (10) * weshay_ (7) * rbowen (6) * zodbot (6) * laudo (3) * DV (2) * ihrachyshka_ (2) * larsks (1) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot From apevec at gmail.com Wed Feb 4 16:10:38 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 4 Feb 2015 17:10:38 +0100 Subject: [Rdo-list] problem with yum rdo-relese.rpm In-Reply-To: References: Message-ID: > yum install -y https://rdo.fedorapeople.org/rdo-release.rpm > Loaded plugins: fastestmirror, langpacks > Cannot open: https://rdo.fedorapeople.org/rdo-release.rpm. Skipping. That's weird, please try: wget -S https://rdo.fedorapeople.org/rdo-release.rpm Maybe you have proxy between you and RDO messing things up? Cheers, Alan From rdo-info at redhat.com Wed Feb 4 17:41:42 2015 From: rdo-info at redhat.com (RDO Forum) Date: Wed, 4 Feb 2015 17:41:42 +0000 Subject: [Rdo-list] [RDO] Blog roundup - February 4, 2015 Message-ID: <0000014b55aeee81-f5a54e27-e8b3-46aa-8661-42481475bd6f-000000@email.amazonses.com> rbowen started a discussion. Blog roundup - February 4, 2015 --- Follow the link below to check it out: https://openstack.redhat.com/forum/discussion/1000/blog-roundup-february-4-2015 Have a great day! From rbowen at redhat.com Wed Feb 4 17:47:53 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 04 Feb 2015 12:47:53 -0500 Subject: [Rdo-list] Fwd: [Openstack] Call for Speakers - February 9 Deadline! In-Reply-To: <1422903528.20033.6.camel@sputacchio.gateway.2wire.net> References: <1422903528.20033.6.camel@sputacchio.gateway.2wire.net> Message-ID: <54D25B49.7070404@redhat.com> FYI - If you want to speak at OpenStack Summit, time's running out. 
-------- Forwarded Message -------- Subject: [Openstack] Call for Speakers - February 9 Deadline! Date: Mon, 02 Feb 2015 19:58:48 +0100 From: Stefano Maffulli To: OpenStack General February 9th is the deadline to submit a talk for the May 2015 Vancouver Summit Would you like to speak at the May 2015 OpenStack Summit in Vancouver? Then hurry up and submit a talk! February 9 is the final day that speaking submissions will be accepted. https://www.openstack.org/summit/vancouver-2015/call-for-speakers/ _______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack at lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From rbowen at redhat.com Wed Feb 4 20:02:09 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 04 Feb 2015 15:02:09 -0500 Subject: [Rdo-list] RDO/OpenStack meetups coming up (Feb 4, 2015) Message-ID: <54D27AC1.3050102@redhat.com> The following are the meetups I'm aware of in the coming week where RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://openstack.redhat.com/Events If there's a meetup in your area, please consider attending. It's the best way to find out what interesting things are going on in the larger community, and a great way to make contacts that will help you solve your own problems in the future. And don't forget to blog about it, tweet about it, G+ about it. --Rich * Wednesday, February 04 in Vancouver, BC, CA: Meet Chris Kemp, co-founder of OpenStack! 
- http://www.meetup.com/Vancouver-OpenStack-Meetup/events/220195427/ * Wednesday, February 04 in Richardson, TX, US: February OpenStack Meetup - swift Object Storage presentation - http://www.meetup.com/OpenStack-DFW/events/218260982/ * Wednesday, February 04 in Portland, OR, US: apt install openstack - http://www.meetup.com/OpenStack-Northwest/events/219782471/ * Thursday, February 05 in San Francisco, CA, US: South Bay OpenStack Meetup, Beginner track - http://www.meetup.com/openstack/events/209717432/ * Friday, February 06 in Prague, CZ: DevConf Brno - http://www.meetup.com/OpenStack-Czech-User-Group-Meetup/events/220088883/ * Saturday, February 07 in Beijing, CN: OpenStack & SDN ?? - http://www.meetup.com/China-OpenStack-User-Group/events/220222770/ * Saturday, February 07 in Bangalore, IN: GlusterFS Pune Meetup - http://www.meetup.com/glusterfs-India/events/219327028/ * Saturday, February 07 in Beijing, CN: OpenStack & Big Data ???? - http://www.meetup.com/China-OpenStack-User-Group/events/220251392/ * Sunday, February 08 in Delhi, IN: OpenStack with VMware - http://www.meetup.com/iShare-By-Techgrills/events/219848835/ * Monday, February 09 in Seattle, WA, US: Feb's Focus is on OpenStack and Ansible - http://www.meetup.com/seattlepython/events/219280065/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From apevec at gmail.com Wed Feb 4 22:15:12 2015 From: apevec at gmail.com (Alan Pevec) Date: Wed, 4 Feb 2015 23:15:12 +0100 Subject: [Rdo-list] Neutron_with_existing_external_network wiki Message-ID: Hi, $subject was mentioned today on #rdo and looking back in the wiki history[1] "Possible improvements" section has been unchanged since the initial version Sep 2013. Has something of that been implemented in the meantime? 
Cheers, Alan [1] https://openstack.redhat.com/index.php?title=Neutron_with_existing_external_network&oldid=1344 From jiasir at icloud.com Thu Feb 5 03:29:26 2015 From: jiasir at icloud.com (=?gb2312?B?vNbLrA==?=) Date: Thu, 05 Feb 2015 11:29:26 +0800 Subject: [Rdo-list] Rdo-list Digest, Vol 23, Issue 4 In-Reply-To: References: Message-ID: I want to know: can Packstack deploy an HA environment? Thanks, Taio > On Feb 5, 2015, at 1:00 PM, rdo-list-request at redhat.com wrote: > > problem with yum rdo-relese.rpm (Alan Pevec) -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at progbau.de Thu Feb 5 07:15:43 2015 From: contact at progbau.de (Chris) Date: Thu, 5 Feb 2015 14:15:43 +0700 Subject: [Rdo-list] Instance shutdown by itself. Calling the stop API. Message-ID: <02d801d04113$9094cf80$b1be6e80$@progbau.de> One of our instances got stopped by nova on one of the compute nodes. We couldn't find any indication in the instance itself of what caused this shutdown. Any idea what could trigger this? OpenStack version is Icehouse. Here are the logs from nova:

2015-02-03 15:23:47.410 58450 WARNING nova.virt.libvirt.imagecache [-] Unknown base file: /var/lib/nova/instances/_base/7ff8a4ca1dc62785b8b510a72dffa7348a12c1c1
2015-02-03 16:03:48.336 58450 WARNING nova.virt.libvirt.imagecache [-] Unknown base file: /var/lib/nova/instances/_base/7ff8a4ca1dc62785b8b510a72dffa7348a12c1c1
2015-02-05 09:21:46.471 3945 INFO oslo.messaging._drivers.impl_qpid [-] Connected to AMQP server on mgmtserver:5672
2015-02-05 09:21:46.506 3945 INFO oslo.messaging._drivers.impl_qpid [-] Connected to AMQP server on mgmtserver:5672
2015-02-05 09:21:49.707 3945 INFO oslo.messaging._drivers.impl_qpid [-] Connected to AMQP server on mgmtserver:5672
2015-02-05 09:22:46.326 3945 WARNING nova.compute.manager [req-95612613-4588-4e3e-b999-7b18e5aa5c13 None None] Found 1 in the database and 0 on the hypervisor.
2015-02-05 09:22:46.518 3945 WARNING nova.compute.manager [req-95612613-4588-4e3e-b999-7b18e5aa5c13 None None] [instance: 411affdd-e836-4fe8-a16d-a9eda0152fda] Instance shutdown by itself. Calling the stop API.
2015-02-05 09:22:46.667 3945 INFO oslo.messaging._drivers.impl_qpid [-] Connected to AMQP server on mgmtserver:5672
2015-02-05 09:31:46.741 3945 WARNING nova.compute.manager [-] Bandwidth usage not supported by hypervisor.
2015-02-05 09:32:47.309 3945 WARNING nova.compute.manager [-] Found 1 in the database and 0 on the hypervisor.
2015-02-05 09:42:48.154 3945 WARNING nova.compute.manager [-] Found 1 in the database and 0 on the hypervisor.

Cheers Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at redhat.com Thu Feb 5 14:11:06 2015 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 5 Feb 2015 09:11:06 -0500 Subject: [Rdo-list] Rdo-list Digest, Vol 23, Issue 4 In-Reply-To: References: Message-ID: <20150205141106.GA26774@redhat.com> On Thu, Feb 05, 2015 at 11:29:26AM +0800, ?? wrote: > I want to know, if packsack can deploy HA environment? Packstack is designed for proof-of-concept or testing environments, and as such has very limited scope. It cannot deploy an HA environment. -- Lars Kellogg-Stedman | larsks @ {freenode,twitter,github} Cloud Engineering / OpenStack | http://blog.oddbit.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From rbowen at rcbowen.com Thu Feb 5 14:25:36 2015 From: rbowen at rcbowen.com (Rich Bowen) Date: Thu, 05 Feb 2015 09:25:36 -0500 Subject: [Rdo-list] Reminder: CentOS Cloud Sig meeting Message-ID: <54D37D60.8040406@rcbowen.com> A reminder: The CentOS Cloud Sig will be on the #centos-devel IRC channel, on Freenode, in about a half hour - at 15:00 UTC. If you're interested in packaging RDO for CentOS, please attend. Thanks.
-- Rich Bowen - rbowen at rcbowen.com - @rbowen http://apachecon.com/ - @apachecon From whayutin at redhat.com Thu Feb 5 14:31:06 2015 From: whayutin at redhat.com (whayutin) Date: Thu, 05 Feb 2015 09:31:06 -0500 Subject: [Rdo-list] rdo juno on fedora21 failing to install Message-ID: <1423146666.3753.0.camel@redhat.com> FYI https://bugzilla.redhat.com/show_bug.cgi?id=1189681 From vedsarkushwaha at gmail.com Thu Feb 5 15:31:08 2015 From: vedsarkushwaha at gmail.com (Vedsar Kushwaha) Date: Thu, 5 Feb 2015 21:01:08 +0530 Subject: [Rdo-list] problem with yum rdo-relese.rpm In-Reply-To: References: Message-ID: Problem solved, thank you for the quick reply. It was just a proxy problem: I had set http_proxy, but forgot to set https_proxy. On Wed, Feb 4, 2015 at 9:40 PM, Alan Pevec wrote: > > yum install -y https://rdo.fedorapeople.org/rdo-release.rpm > > Loaded plugins: fastestmirror, langpacks > > Cannot open: https://rdo.fedorapeople.org/rdo-release.rpm. Skipping. > > That's weird, please try: wget -S > https://rdo.fedorapeople.org/rdo-release.rpm > Maybe you have a proxy between you and RDO messing things up? > > Cheers, > Alan > -- Vedsar Kushwaha M.Tech-Computational Science Indian Institute of Science -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Thu Feb 5 15:38:35 2015 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 05 Feb 2015 10:38:35 -0500 Subject: [Rdo-list] Hangout: What's coming for Ceilometer in Kilo. February 9th In-Reply-To: <54B6C6C1.6080405@redhat.com> References: <54B6C6C1.6080405@redhat.com> Message-ID: <54D38E7B.1080209@redhat.com> A reminder: Eoghan will be talking about what's coming up for Ceilometer in Kilo, Monday at 14:00 UTC. See the link below for the time in your local time zone, and to sign up to attend and for reminders.
--Rich On 01/14/2015 02:42 PM, Rich Bowen wrote: > Mark your calendars: On February 9th, at 14:00 UTC, Eoghan Glynn will > present a hangout covering what's coming for Ceilometer in the Kilo > release of OpenStack. > > https://plus.google.com/events/cht3k5nr5u73pv3d08i7vq5m570 > > Kilo milestone 2 is due on February 5, so this is traditionally around > the time that mid-cycle meetings happen to evaluate the status of > projects, and set expectations for what will actually make it into the > release. So I hope that we'll be able to schedule more of these hangouts > in time between Milestone 2 and the release on April 30th. (See > https://wiki.openstack.org/wiki/Kilo_Release_Schedule for the release > schedule.) > -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From slinaber at redhat.com Thu Feb 5 15:49:19 2015 From: slinaber at redhat.com (Steve Linabery) Date: Thu, 5 Feb 2015 09:49:19 -0600 Subject: [Rdo-list] rdopkg overview In-Reply-To: <20150130145325.GL24719@redhat.com> References: <20150129214843.GG24719@redhat.com> <54CB8CB7.6030004@redhat.com> <20150130145325.GL24719@redhat.com> Message-ID: <20150205154918.GD7971@redhat.com> On Fri, Jan 30, 2015 at 08:53:25AM -0600, Steve Linabery wrote: > On Fri, Jan 30, 2015 at 02:52:55PM +0100, Jakub Ruzicka wrote: > > Very nice overview Steve, thanks for writing this down! > > > > My random thoughts on the matter inline. > > > > On 29.1.2015 22:48, Steve Linabery wrote: > > > I have been struggling with the amount of information to convey and what level of detail to include. Since I can't seem to get it perfect to my own satisfaction, here is the imperfect (and long, sorry) version to begin discussion. > > > > > > This is an overview of where things stand (rdopkg CI 'v0.1'). > > > > For some time, I'm wondering if we should really call it rdopkg CI since > > it's not really tied to rdopkg but to RDO. You can use most of rdopkg on > > any distgit. 
I reckon we should simply call it RDO CI to avoid > > confusion. I for one don't underestimate the impact of naming stuff ;) > > > > > Terminology: > > > 'Release' refers to an OpenStack release (e.g. havana,icehouse,juno) > > > 'Dist' refers to a distro supported by RDO (e.g. fedora-20, epel-6, epel-7) > > > 'phase1' is the initial smoketest for an update submitted via `rdopkg update` > > > 'phase2' is a full-provision test for accumulated updates that have passed phase1 > > > 'snapshot' means an OpenStack snapshot of a running instance, i.e. a disk image created from a running OS instance. > > > > > > The very broad strokes: > > > ----------------------- > > > > > > rdopkg ci is triggered when a packager uses `rdopkg update`. > > > > > > When a review lands in the rdo-update gerrit project, a 'phase1' smoketest is initiated via jenkins for each Release/Dist combination present in the update (e.g. if the update contains builds for icehouse/fedora-20 and icehouse/epel-6, each set of RPMs from each build will be smoketested on an instance running the associated Release/Dist). If *all* supported builds from the update pass phase1, then the update is merged into rdo-update. Updates that pass phase1 accumulate in the updates/ directory in the rdo-update project. > > > > > > Periodically, a packager may run 'phase2'. This takes everything in updates/ and uses those RPMs + RDO production repo to provision a set of base images with packstack aio. Again, a simple tempest test is run against the packstack aio instances. If all pass, then phase2 passes, and the `rdopkg update` yaml files are moved from updates/ to ready/. > > > > > > At that point, someone with the keys to the stage repos will push the builds in ready/ to the stage repo. If CI against stage repo passes, stage is rsynced to production. > > > > > > Complexity, Part 1: > > > ------------------- > > > > > > Rdopkg CI v0.1 was designed around the use of OpenStack VM disk snapshots. 
On a periodic basis, we provision two nodes for each supported combination in [Releases] X [Dists] (e.g. "icehouse, fedora-20" "juno, epel-7" etc). One node is a packstack aio instance built against RDO production repos, and the other is a node running tempest. After a simple tempest test passes for all the packstack aio nodes, we would snapshot the set of instances. Then when we want to do a 'phase1' test for e.g. "icehouse, fedora-20", we can spin up the instances previously snapshotted and save the time of re-running packstack aio. > > > > > > Using snapshots saves approximately 30 min of wait time per test run by skipping provisioning. Using snapshots imposes a few substantial costs/complexities though. First and most significant, snapshots need to be reinstantiated using the same IP addresses that were present when packstack and tempest were run during the provisioning. This means we have to have concurrency control around running only one phase1 run at a time; otherwise an instance might fail to provision because its 'static' IP address is already in use by another run. The second cost is that in practice, a) our OpenStack infrastructure has been unreliable, b) not all Release/Dist combinations reliably provision. So it becomes hard to create a full set of snapshots reliably. > > > > > > Additionally, some updates (e.g. when an update comes in for openstack-puppet-modules) prevent the use of a previously-provisioned packstack instance. Continuing with the o-p-m example: that package is used for provisioning. So simply updating the RPM for that package after running packstack aio doesn't tell us anything about the package sanity (other than perhaps if a new, unsatisfied RPM dependency was introduced). > > > > > > Another source of complexity comes from the nature of the rdopkg update 'unit'. Each yaml file created by `rdopkg update` can contain multiple builds for different Release,Dist combinations. 
So there must be a way to 'collate' the results of each smoketest for each Release,Dist and pass phase1 only if all updates pass. Furthermore, some combinations of Release,Dist are known (at times, for various ad hoc reasons) to fail testing, and those combinations sometimes need to be 'disabled'. For example, if we know that icehouse/f20 is 'red' on a given day, we might want an update containing icehouse/fedora-20,icehouse/epel-6 to test only the icehouse/epel-6 combination and pass if that passes. > > > > > > Finally, pursuant to the previous point, there need to be 'control' structure jobs for provision/snapshot, phase1, and phase2 runs that pass (and perform some action upon passing) only when all their 'child' jobs have passed. > > > > > > The way we have managed this complexity to date is through the use of the jenkins BuildFlow plugin. Here's some ASCII art (courtesy of 'tree') to show how the jobs are structured now (these are descriptive job names, not the actual jenkins job names). BuildFlow jobs are indicated by (bf).
> > >
> > > .
> > > `-- rdopkg_master_flow (bf)
> > >     |-- provision_and_snapshot (bf)
> > >     |   |-- provision_and_snapshot_icehouse_epel6
> > >     |   |-- provision_and_snapshot_icehouse_f20
> > >     |   |-- provision_and_snapshot_juno_epel7
> > >     |   `-- provision_and_snapshot_juno_f21
> > >     |-- phase1_flow (bf)
> > >     |   |-- phase1_test_icehouse_f20
> > >     |   `-- phase1_test_juno_f21
> > >     `-- phase2_flow (bf)
> > >         |-- phase2_test_icehouse_epel6
> > >         |-- phase2_test_icehouse_f20
> > >         |-- phase2_test_juno_epel7
> > >         `-- phase2_test_juno_f21
> >
> > As a consumer of CI results, my main problem with this is it takes about 7 clicks to get to the actual error.
> > > When a change comes in from `rdopkg update`, the rdopkg_master_flow job is triggered. It's the only job that gets triggered from gerrit, so it kicks off phase1_flow.
phase1_flow runs 'child' jobs (normal jenkins jobs, not buildflow) for each Release,Dist combination present in the update. > > > > > > provision_and_snapshot is run by manually setting a build parameter (BUILD_SNAPS) in the rdopkg_master_flow job, and triggering the build of rdopkg_master_flow. > > > > > > phase2 is invoked similar to the provision_and_snapshot build, by checking 'RUN_PHASE2' in the rdopkg_master_flow build parameters before executing a build thereof. > > > > > > Concurrency control is a side effect of requiring the user or gerrit to execute rdopkg_master_flow for every action. There can be only one rdopkg_master_flow build executing at any given time. > > > > > > Complexity, Part 2: > > > ------------------- > > > > > > In addition to the nasty complexity of using nested BuildFlow type jobs, each 'worker' job (i.e. the non-buildflow type jobs) has some built in complexity that is reflected in the amount of logic in each job's bash script definition. > > > > > > Some of this has been alluded to in previous points. For instance, each job in the phase1 flow needs to determine, for each update, if the update contains a package that requires full packstack aio provisioning from a base image (e.g. openstack-puppet-modules). This 'must provision' list needs to be stored somewhere that all jobs can read it, and it needs to be dynamic enough to add to it as requirements dictate. > > > > > > But additionally, for package sets not requiring provisioning a base image, phase1 job needs to query the backing OpenStack instance to see if there exists a 'known good' snapshot, get the images' UUIDs from OpenStack, and spin up the instances using the snapshot images. > > > > > > This baked-in complexity in the 'worker' jenkins jobs has made it difficult to maintain the job definitions, and more importantly difficult to run using jjb or in other more 'orthodox' CI-type ways. The rdopkg CI stuff is a bastard child of a fork. It lives in its own mutant gene pool. 
> > > > lolololol > > > > > > > A Way Forward...? > > > ---------------- > > > > > > Wes Hayutin had a good idea that might help reduce some of the complexity here as we contemplate a) making rdopkg CI public, b) moving toward rdopkg CI 0.2. > > > > > > His idea was a) stop using snapshots since the per-test-run savings doesn't seem to justify the burden they create, b) do away with BuildFlow by including the 'this update contains builds for (Release1,Dist2),...,(ReleaseN,DistM)' information in the gerrit change topic. > > > > It's easy to modify `rdopkg update` to include this information. > > However, it's redundant so you can (in theory) submit an update where > > this summary won't match the actual YAML data. That's probably > > completely irrelevant, but I'm mentioning it nonetheless :) > > > > This is less a response to Jakub's comment here and more an additional explanation of why this idea is so nice. > > Currently, when phase1_flow is triggered, it ssh's to a separate host to run `rdoupdate check` (because BuildFlow jobs execute on the jenkins master node, disregarding any setting to run them on a particular slave), and parse the output to determine what Release,Dist combinations need to be tested. > > The gerrit topic approach would allow us to have N jobs listening to gerrit trigger events, but e.g. the juno/epel-7 job would only execute if the gerrit topic matched that job's regexp. The gerrit review would only get its +2 when these jobs complete successfully. > > It would be nice to decide what that topic string ought to look like so that a) we have sanity in the regexp, b) we are sure gerrit will support long strings with whatever odd chars we may wish to use, etc. > I'll propose a format for the gerrit topic string. 
Let's say a build includes updates for: icehouse,fedora-20 juno,fedora-21 juno,epel-7 The resulting topic string would be: icehouse_fedora-20/juno_fedora-21_epel-7 Then a job triggered off gerrit would have a regex like '.*$release[^/]+$dist.*' So for example, the icehouse/fedora-20 phase1 job would have '.*icehouse[^/]+fedora-20.*' which would match the gerrit topic, so that test would run. Jakub, could we have `rdopkg update` generate the topic string as indicated based on the contents of the update? > > > > > I think that's a great idea, but I have a superstitious gut feeling that we may lose some 'transaction'y-ness from the current setup. For example, what happens if phase1 and phase2 overlap their execution? It's not that I have evidence that this will be a problem; it's more that we had these issues worked out fairly well with rdopkg CI 0.1, and I think the change warrants some scrutiny/thought (which clearly I have not done!). > > > > Worth a try if you ask me. I'll gladly help with any scripts/helpers > > needed, so just let me know. > > > > > > > We'd still need to work out a way to execute phase2, though. There would be no `rdopkg update` event to trigger phase2 runs. I'm not sure how we'd do that without a BuildFlow. BuildFlow jobs also allow parallelization of the child jobs, and I'm not sure how we could replicate that without using that type of job. > > > > There can be rdopkg action (i.e. rdopkg update --phase2) to do whatever > > you need if that helps. > > > > > > > Whew. I hope this was helpful. I'm saving a copy of this text to http://slinabery.fedorapeople.org/rdopkg-overview.txt > > > > Sure is, thanks! 
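The topic-string scheme and per-job regex proposed above can be sketched in shell. This is only an illustration of the matching rule under discussion — the topic value and the `job_matches` helper are made up here, not part of rdopkg or the actual jenkins job definitions:

```shell
# Topic for an update containing icehouse/fedora-20, juno/fedora-21 and
# juno/epel-7, per the format proposed above (release groups separated by
# "/", dists within a group joined by "_"):
topic="icehouse_fedora-20/juno_fedora-21_epel-7"

# Per-job check mirroring the '.*$release[^/]+$dist.*' gerrit-trigger
# regex: [^/]+ cannot cross a "/", so release and dist must sit inside
# the same release group for the job to fire.
job_matches() {
    release=$1; dist=$2
    printf '%s' "$topic" | grep -Eq "${release}[^/]+${dist}"
}

job_matches icehouse fedora-20 && echo "run icehouse/fedora-20 job"
job_matches juno epel-7        && echo "run juno/epel-7 job"
job_matches icehouse epel-7    || echo "skip icehouse/epel-7 job"
```

The last check shows why the `/` separator matters: "icehouse" and "epel-7" appear in different release groups, so the pattern does not match and that phase1 job stays idle.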
> > > > Cheers, > > Jakub > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From rbowen at redhat.com Thu Feb 5 17:17:30 2015 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 05 Feb 2015 12:17:30 -0500 Subject: [Rdo-list] [Rdo-newsletter] February 2015 RDO community newsletter Message-ID: <54D3A5AA.4070901@redhat.com> February 2015 RDO Community Newsletter Thanks, as always, for being part of the RDO community! There's a lot in this month's newsletter:

* Upcoming Ceilometer hangout, February 9th
* Packaging effort
* CFP for OpenStack Summit Vancouver closes February 9th
* Naming vote for OpenStack 'L' closes February 10th
* FOSDEM report
* SoCal Linux Expo in 2 weeks
* Ways to stay in touch!

Quick links:

* Quick Start - http://openstack.redhat.com/quickstart
* Mailing Lists - https://openstack.redhat.com/Mailing_lists
* RDO packages - https://repos.fedorapeople.org/repos/openstack/openstack-juno/
* RDO blog - http://rdoproject.org/blog
* Q&A - http://ask.openstack.org/

Upcoming Hangout Mark your calendars: On February 9th, at 14:00 UTC, Eoghan Glynn will present a hangout covering what's coming for Ceilometer in the Kilo release of OpenStack. https://plus.google.com/events/cht3k5nr5u73pv3d08i7vq5m570 Kilo milestone 2 is due on February 5, so this is traditionally around the time that mid-cycle meetings happen to evaluate the status of projects, and set expectations for what will actually make it into the release. So I hope that we'll be able to schedule more of these hangouts in time between Milestone 2 and the release on April 30th.
(See https://wiki.openstack.org/wiki/Kilo_Release_Schedule for the release schedule.) Packaging updates You'll remember from last month that the RDO packaging effort is now open to wider participation from the entire community. If you're interested in participating, there are several places that you should be looking for updates. First, the rdo-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list - is where all discussion occurs around RDO development, so be sure to subscribe there to keep informed. Next, there's a weekly meeting on #rdo, on the Freenode IRC network, to discuss RDO packaging progress, and what remains to be done. This meeting occurs at 15:00 UTC each Wednesday, and is open to everyone. Finally, since OpenStack is part of the larger Cloud infrastructure community, we're participating in the CentOS Cloud SIG effort - http://wiki.centos.org/SpecialInterestGroup/Cloud - which has a weekly meeting on the #centos-devel IRC channel at 15:00 UTC every Thursday. Call for Papers, OpenStack Summit There's still a few more days to get your paper submissions in for the OpenStack Summit in Vancouver. The CFP closes on Monday, February 9th. Submit your paper today at https://www.openstack.org/summit/vancouver-2015/call-for-speakers/ The OpenStack Summit will be held in Vancouver, Canada, May 18-22 2015, and is the most important place to be if you want to know more about OpenStack, and talk with everyone that's involved in OpenStack development. Vote for OpenStack L Speaking of OpenStack Summit, a major feature of each summit is planning the next release of OpenStack. OpenStack Kilo will be released on April 30th, and the Vancouver summit will be planning the L release, which hasn't yet been named. L will be either Lizard, Love, London, or Liberty, and you can have a voice in making that decision at https://www.surveymonkey.com/r/openstack-l-naming The poll closes on February 10th, so hurry up! 
FOSDEM Thanks to everyone that stopped by to see us at FOSDEM. We had a lot of great conversations, with people who were completely new to OpenStack, with experts, and everything in between. We also had a good turnout on Friday at the CentOS dojo, where Haikel Guemar gave a hands-on walkthrough of installing RDO, and Jakub Ruzicka talked about RDO development, packaging and testing, and what the CentOS Cloud SIG is doing in this area. See https://www.flickr.com/photos/rbowen/sets/72157648221256884/ for some photos of Haikel and others in action. Watch the RDO blog - http://openstack.redhat.com/blog - for video from the event in the next few days. There was also a great deal of excellent content at FOSDEM, in the IaaS dev room - https://fosdem.org/2015/schedule/track/infrastructure_as_a_service/ - and scattered through various other tracks during the event. SCALE We'll be at SCALE - the SoCal Linux Expo - February 19-22, 2015, at the Los Angeles Airport Hilton. SCALE is a great event, with lots of community participation, great talks, and fun evening events. We'd love to see you drop by the Red Hat booth, where we'll have RDO swag aplenty. Additional information about SCALE can be found at http://www.socallinuxexpo.org/scale/13x and registration is still open at https://reg.socallinuxexpo.org/reg6/ Keep in touch There are lots of ways to stay in touch with what's going on in the RDO community. The best ways are ...
* Follow us on Twitter - http://twitter.com/rdocommunity
* Google+ - http://tm3.org/rdogplus
* Facebook - http://facebook.com/rdocommunity
* rdo-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list
* This newsletter - http://www.redhat.com/mailman/listinfo/rdo-newsletter
* RDO Q&A - http://ask.openstack.org/
* IRC - #rdo on irc.freenode.net

Finally, remember that the OpenStack User Survey is always open, so every time you deploy a new OpenStack cloud, go update your records at https://www.openstack.org/user-survey/ so that, when Vancouver rolls around, we have a clearer picture of the OpenStack usage out in the wild. Thanks again for being part of the RDO community! -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From vedsarkushwaha at gmail.com Thu Feb 5 18:15:11 2015 From: vedsarkushwaha at gmail.com (Vedsar Kushwaha) Date: Thu, 5 Feb 2015 23:45:11 +0530 Subject: [Rdo-list] selinux failed to install during packstack Message-ID: Help Needed...
ERROR : Error appeared during Puppet run: 192.168.0.21_prescript.pp
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-selinux' returned 1:
Error: Package: openstack-selinux-0.5.19-2.el7ost.noarch (openstack-juno)

Full trace in the log is:

Requires: selinux-policy-targeted >= 3.12.1-153.el7_0.10
Installed: selinux-policy-targeted-3.12.1-153.el7.noarch (@anaconda)
    selinux-policy-targeted = 3.12.1-153.el7
Error: Package: openstack-selinux-0.5.19-2.el7ost.noarch (openstack-juno)
Requires: selinux-policy-base >= 3.12.1-153.el7_0.10
Installed: selinux-policy-targeted-3.12.1-153.el7.noarch (@anaconda)
    selinux-policy-base = 3.12.1-153.el7
Available: selinux-policy-minimum-3.12.1-153.el7.noarch (centos7)
    selinux-policy-base = 3.12.1-153.el7
Available: selinux-policy-mls-3.12.1-153.el7.noarch (centos7)
    selinux-policy-base = 3.12.1-153.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

I already have selinux installed. I tried all three modes (enforcing, permissive, disabled), plus touch /.autorelabel and a reboot, but the problem is still the same. I'm using a multinode installation, and all nodes have the rdo, puppet and epel repos enabled. I tried the selinux modes on all nodes as well, but the problem is still there. -- Vedsar Kushwaha M.Tech-Computational Science Indian Institute of Science -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.krovich at mail.wvu.edu Fri Feb 6 04:17:53 2015 From: david.krovich at mail.wvu.edu (David Krovich) Date: Thu, 5 Feb 2015 23:17:53 -0500 Subject: [Rdo-list] Neutron_with_existing_external_network wiki In-Reply-To: References: Message-ID: <54D44071.1090003@mail.wvu.edu> As someone who struggled heavily trying to implement this, I'd be willing to help with documentation. I've got a working process using packstack on fedora 20 that I've replicated a few times now.
-Dave On 2/4/15 5:15 PM, Alan Pevec wrote: > Hi, > > $subject was mentioned today on #rdo and looking back in the wiki > history[1] "Possible improvements" section has been unchanged since > the initial version Sep 2013. > Has something of that been implemented in the meantime? > > > Cheers, > Alan > > [1] https://openstack.redhat.com/index.php?title=Neutron_with_existing_external_network&oldid=1344 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From apevec at gmail.com Fri Feb 6 12:25:03 2015 From: apevec at gmail.com (Alan Pevec) Date: Fri, 6 Feb 2015 13:25:03 +0100 Subject: [Rdo-list] selinux failed to install during packstack In-Reply-To: References: Message-ID: > Requires: selinux-policy-targeted >= 3.12.1-153.el7_0.10 > Installed: selinux-policy-targeted-3.12.1-153.el7.noarch > (@anaconda) You don't seem to have the RHEL 7 repository enabled; you need to subscribe to it first. Cheers, Alan From bderzhavets at hotmail.com Fri Feb 6 15:19:31 2015 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 6 Feb 2015 10:19:31 -0500 Subject: [Rdo-list] Nova-Docker driver on RDO Juno Compute node F21 In-Reply-To: <20150205141106.GA26774@redhat.com> References: , , <20150205141106.GA26774@redhat.com> Message-ID: From my experience: 1. eugeneware/docker-wordpress-nginx:latest (RDO Juno on a Fedora 21 two-node cluster, Controller&&Network and Compute, ML2&OVS&VXLAN setup) can be installed only on the Controller. On the Compute node (with the Nova-Docker driver built from stable/juno installed on both nodes) I can install tutum/tomcat, tutum/mysql, ubuntu-rastasheep (sshd), but not docker-wordpress-nginx. It just won't come up (two installation pages and then it dies). 2.
When I log into the Ubuntu-Rastasheep container installed on the Controller I can run `apt-get update`; when I log into the same container installed on the Compute node, `apt-get update` hangs waiting for headers. I can ping 8.8.8.8, but TCP/UDP services are blocked: wget doesn't work, etc. All these issues were fixed on a similar CentOS 7 cluster after a kernel upgrade to 3.10.0-123.20.1.el7.x86_64; I had the same problems on CentOS 7 with kernel 3.10.0-123.13.2.el7.x86_64. The most recent `yum update` upgraded just the kernel on CentOS 7. Thanks. Boris -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Fri Feb 6 18:45:04 2015 From: Tim.Bell at cern.ch (Tim Bell) Date: Fri, 6 Feb 2015 18:45:04 +0000 Subject: [Rdo-list] OpenStack unified client inclusion in RDO Message-ID: <5D7F9996EA547448BC6C54C8C5AAF4E5010278F935@CERNXCHG41.cern.ch> How about including the OpenStack unified client in the default RDO Quick Start configuration? It is a more consistent set of options for the end user and often supports later API versions. It would involve including the python-openstackclient RPM, but I think that would be all. Since it is client only, the risk is low as the existing clients would continue to work. At CERN, we're moving to the unified client as it supports the latest set of federation, Kerberos and X.509 plug-ins, which are useful in multi-cloud environments. There are a few functional gaps but these are rapidly being closed. Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From vedsarkushwaha at gmail.com Fri Feb 6 22:45:50 2015 From: vedsarkushwaha at gmail.com (Vedsar Kushwaha) Date: Sat, 7 Feb 2015 04:15:50 +0530 Subject: [Rdo-list] selinux failed to install during packstack In-Reply-To: References: Message-ID: Thanks, that worked.
after enabling the RHEL 7 repo, a simple 'yum update' solved the problem On Fri, Feb 6, 2015 at 5:55 PM, Alan Pevec wrote: > > Requires: selinux-policy-targeted >= 3.12.1-153.el7_0.10 > > Installed: selinux-policy-targeted-3.12.1-153.el7.noarch > > (@anaconda) > > You don't seem to have the RHEL 7 repository enabled; you need to > subscribe to it first. > > > Cheers, > Alan > -- Vedsar Kushwaha M.Tech-Computational Science Indian Institute of Science -------------- next part -------------- An HTML attachment was scrubbed... URL: From vedsarkushwaha at gmail.com Fri Feb 6 22:50:53 2015 From: vedsarkushwaha at gmail.com (Vedsar Kushwaha) Date: Sat, 7 Feb 2015 04:20:53 +0530 Subject: [Rdo-list] problem with cinder Message-ID: help needed... ERROR : Error appeared during Puppet run: 10.0.2.15_cinder.pp Error: cinder type-create iscsi returned 1 instead of one of [0] I'm using CentOS 7 on VirtualBox complete log /Stage[main]/Main/Cinder::Type[iscsi]/Exec[cinder type-create iscsi]/returns: ERROR: Authentication failure: Gateway Timeout (HTTP 504) Error: cinder type-create iscsi returned 1 instead of one of [0] Error: /Stage[main]/Main/Cinder::Type[iscsi]/Exec[cinder type-create iscsi]/returns: change from notrun to 0 failed: cinder type-create iscsi returned 1 instead of one of [0] Notice: /Stage[main]/Main/Cinder::Type[iscsi]/Cinder::Type_set[lvm]/Exec[cinder type-key iscsi set volume_backend_name=lvm]: Dependency Exec[cinder type-create iscsi] has failures: true Warning: /Stage[main]/Main/Cinder::Type[iscsi]/Cinder::Type_set[lvm]/Exec[cinder type-key iscsi set volume_backend_name=lvm]: Skipping because of failed dependencies -- Vedsar Kushwaha M.Tech-Computational Science Indian Institute of Science -------------- next part -------------- An HTML attachment was scrubbed... 
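An HTTP 504 from the cinder client during a packstack run usually means the request to the local Keystone endpoint is being routed through an outbound proxy that cannot reach it. A minimal sketch of retrying the failed step by hand, with the local addresses excluded from proxying — the 10.0.2.15 address is taken from the manifest name in the log above, and the keystonerc_admin path is packstack's usual default (an assumption here):

```shell
# Keep API calls to local endpoints off the proxy; 10.0.2.15 is the
# controller address from the Puppet manifest name above (an assumption).
export no_proxy="127.0.0.1,localhost,10.0.2.15"

# Re-run the steps Puppet failed on. Guarded so the sketch is harmless
# on a machine that does not have the cinder client installed.
if command -v cinder >/dev/null 2>&1; then
    . /root/keystonerc_admin                        # assumed packstack default path
    cinder type-create iscsi                        # the command Puppet ran
    cinder type-key iscsi set volume_backend_name=lvm  # the step it then skipped
else
    echo "cinder client not installed; only no_proxy was set"
fi
```

If this succeeds, re-running packstack with the same answer file should pick up where it failed.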
URL: From vedsarkushwaha at gmail.com Sat Feb 7 07:05:00 2015 From: vedsarkushwaha at gmail.com (Vedsar Kushwaha) Date: Sat, 7 Feb 2015 12:35:00 +0530 Subject: [Rdo-list] problem with cinder In-Reply-To: References: Message-ID: I understand that its a "proxy" problem (because its working fine without any proxy connection network). But not able to figure it out. I set env variable http_proxy, https_proxy, ftp_proxy, no_proxy. I also defined yum http_proxy, https_proxy, ftp_proxy and wgetrc http_proxy, https_proxy, ftp_proxy. Please help... On Sat, Feb 7, 2015 at 4:20 AM, Vedsar Kushwaha wrote: > help needed... > > ERROR : Error appeared during Puppet run: 10.0.2.15_cinder.pp > Error: cinder type-create iscsi returned 1 instead of one of [0] > > I'm using centos7 on virtual box > > complete log > /Stage[main]/Main/Cinder::Type[iscsi]/Exec[cinder type-create > iscsi]/returns: ERROR: Authentication failure: Gateway Timeout (HTTP 504) > > Error: cinder type-create iscsi returned 1 instead of one of [0] > Error: /Stage[main]/Main/Cinder::Type[iscsi]/Exec[cinder type-create > iscsi]/returns: change from notrun to 0 failed: cinder type-create iscsi > returned 1 instead of one of [0] > Notice: > /Stage[main]/Main/Cinder::Type[iscsi]/Cinder::Type_set[lvm]/Exec[cinder > type-key iscsi set volume_backend_name=lvm]: Dependency Exec[cinder > type-create iscsi] has failures: true > Warning: > /Stage[main]/Main/Cinder::Type[iscsi]/Cinder::Type_set[lvm]/Exec[cinder > type-key iscsi set volume_backend_name=lvm]: Skipping because of failed > dependencies > > -- > Vedsar Kushwaha > M.Tech-Computational Science > Indian Institute of Science > -- Vedsar Kushwaha M.Tech-Computational Science Indian Institute of Science -------------- next part -------------- An HTML attachment was scrubbed... 
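When only yum needs to traverse the proxy, the settings can live in yum's own configuration rather than in environment variables. A sketch of the relevant `[main]` lines, written to a scratch file here; on a real host this would go in /etc/yum.conf, and the proxy host and port are placeholders:

```shell
# Sketch of yum.conf proxy settings; the proxy URL is a placeholder.
# Written to a temp file here so nothing on the host is modified.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[main]
proxy=http://proxy.example.com:3128
# proxy_username=user    # only needed if the proxy requires authentication
# proxy_password=secret
EOF
grep '^proxy=' "$conf"
```

With this in place the http_proxy/https_proxy environment variables can stay unset, so other OpenStack clients talking to local endpoints are unaffected.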
URL: From apevec at gmail.com Sat Feb 7 15:51:55 2015 From: apevec at gmail.com (Alan Pevec) Date: Sat, 7 Feb 2015 16:51:55 +0100 Subject: [Rdo-list] OpenStack unified client inclusion in RDO In-Reply-To: <5D7F9996EA547448BC6C54C8C5AAF4E5010278F935@CERNXCHG41.cern.ch> References: <5D7F9996EA547448BC6C54C8C5AAF4E5010278F935@CERNXCHG41.cern.ch> Message-ID: > How about including the openstack unified client in the default RDO Quick > Start configuration ? python-openstackclient-1.0.1 is available in RDO Juno repository, so it just needs to be added somewhere in the Packstack manifests https://review.openstack.org/153795 Cheers, Alan From apevec at gmail.com Sat Feb 7 15:59:40 2015 From: apevec at gmail.com (Alan Pevec) Date: Sat, 7 Feb 2015 16:59:40 +0100 Subject: [Rdo-list] problem with cinder In-Reply-To: References: Message-ID: > I understand that its a "proxy" problem (because its working fine without > any proxy connection network). But not able to figure it out. > I set env variable http_proxy, https_proxy, ftp_proxy, no_proxy. > I also defined yum http_proxy, https_proxy, ftp_proxy and wgetrc http_proxy, > https_proxy, ftp_proxy. If you need proxy only for yum, you can set it in yum configuration (see man yum.conf) and leave env vars clean. Cheers, Alan From vedsarkushwaha at gmail.com Sun Feb 8 12:25:17 2015 From: vedsarkushwaha at gmail.com (Vedsar Kushwaha) Date: Sun, 8 Feb 2015 17:55:17 +0530 Subject: [Rdo-list] problem with cinder In-Reply-To: References: Message-ID: Thanks.... That worked for me.. On Feb 7, 2015 9:29 PM, "Alan Pevec" wrote: > > I understand that its a "proxy" problem (because its working fine without > > any proxy connection network). But not able to figure it out. > > I set env variable http_proxy, https_proxy, ftp_proxy, no_proxy. > > I also defined yum http_proxy, https_proxy, ftp_proxy and wgetrc > http_proxy, > > https_proxy, ftp_proxy. 
> > If you need proxy only for yum, you can set it in yum configuration > (see man yum.conf) and leave env vars clean. > > > Cheers, > Alan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ihrachys at redhat.com Mon Feb 9 13:55:04 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 09 Feb 2015 14:55:04 +0100 Subject: [Rdo-list] rdopkg update fails to fetch rdo-update git repo Message-ID: <54D8BC38.5040909@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi all, I'm trying to send update for RDO repos to include new neutron rebased to 2014.2.2. When I invoke 'rdopkg update', I get the following error: [ihrachyshka at ihrachyshka-t440s openstack-neutron]$ USER=ihrachys rdopkg update command failed: git clone git+ssh://code.engineering.redhat.com/rdo-update.git rdo-update stderr: Cloning into 'rdo-update'... Permission denied (publickey). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. My ssh key is properly registered in code.engineering.redhat.com gerrit instance. It seems the problem is that git repo url does not include my username, hence the failure. How can I pass it to rdopkg update? I've also tried to fetch the repo manually into ~/.rdopkg, but then 'rdopkg update' removes it and retries the clone. Removing update repo: /home/ihrachyshka/.rdopkg/rdo-update command failed: git clone git+ssh://code.engineering.redhat.com/rdo-update.git rdo-update stderr: Cloning into 'rdo-update'... 
/Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJU2Lw4AAoJEC5aWaUY1u572dUH/1RmzfWYjuuooHcfS6p0ScAz HNyUxngnRnPhB1c8PIglhTtptcUCOGdjknqas6Zu9kOyZ0Q+S1BLhWV/uzhe0G36 w/Q1hMTpbyYGYetQnGTQIhlydb/7CRsGOa7Ap8FKAF/S//tps/8zsPZlNno5LAON ieYn5ewB3Uiyxfq9oKGR1N1LHyrTntKZkP9jF5ReP2c3DxRUabSVzrI/SmR9Z3bk M5wrZtrWn23q7MO/XOgNh97DWqVc3aMX5Syxdl07Ufw8l2eUZ07ej5qf99VQeNzD CtW8YWyLP77hw6Fi003z3XFuvKSCXuYgavrwbUc0mSmvmMjIwZabBuHmeth2EbE= =BgGJ -----END PGP SIGNATURE----- From ihrachys at redhat.com Mon Feb 9 13:57:20 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 09 Feb 2015 14:57:20 +0100 Subject: [Rdo-list] rdopkg update fails to fetch rdo-update git repo In-Reply-To: <54D8BC38.5040909@redhat.com> References: <54D8BC38.5040909@redhat.com> Message-ID: <54D8BCC0.7060705@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 UPD: a workaround is to pass update repo thru -u option: rdopkg update -u git+ssh://@code.engineering.redhat.com/rdo-update.git On 02/09/2015 02:55 PM, Ihar Hrachyshka wrote: > Hi all, > > I'm trying to send update for RDO repos to include new neutron > rebased to 2014.2.2. When I invoke 'rdopkg update', I get the > following error: > > [ihrachyshka at ihrachyshka-t440s openstack-neutron]$ USER=ihrachys > rdopkg update command failed: git clone > git+ssh://code.engineering.redhat.com/rdo-update.git rdo-update > stderr: Cloning into 'rdo-update'... Permission denied > (publickey). fatal: Could not read from remote repository. > > Please make sure you have the correct access rights and the > repository exists. > > My ssh key is properly registered in code.engineering.redhat.com > gerrit instance. > > It seems the problem is that git repo url does not include my > username, hence the failure. How can I pass it to rdopkg update? > > I've also tried to fetch the repo manually into ~/.rdopkg, but > then 'rdopkg update' removes it and retries the clone. 
> > Removing update repo: /home/ihrachyshka/.rdopkg/rdo-update command > failed: git clone > git+ssh://code.engineering.redhat.com/rdo-update.git rdo-update > stderr: Cloning into 'rdo-update'... > > /Ihar > -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJU2LzAAAoJEC5aWaUY1u572DkH+gIuly/OxBHOekcjMhH6v//3 llyKfmArlbChw441/Ox7jlBd4X/MQEsuBHlLtVL9w6n/6PeDmG8bRjVgJTbpVi3p OY6/Nq4+PvQR0RnBjNj1rgEPLPRNlafQaUkRPxH4mk7FvU1fTO/s09s9ok0Ge6Xg nDiQ1p1oRkGOFBz0vbD1W5To+Kv8PiBIrWteyjfIwK9xJQCr55FGItLYirfDAcB4 Elvml5FVVkXqij4b4siYCaAmQVin205PT997VG1lmmSt7ieqmebm59SRVcZIoxOO M3eGz2PFKaoxsn/rJICHSuhoMKeOz7VaCLwQbehBcewoDQUlYgT1Dk9vVzoeJ04= =WJMy -----END PGP SIGNATURE----- From apevec at gmail.com Mon Feb 9 14:26:21 2015 From: apevec at gmail.com (Alan Pevec) Date: Mon, 9 Feb 2015 15:26:21 +0100 Subject: [Rdo-list] rdopkg update fails to fetch rdo-update git repo In-Reply-To: <54D8BCC0.7060705@redhat.com> References: <54D8BC38.5040909@redhat.com> <54D8BCC0.7060705@redhat.com> Message-ID: > UPD: a workaround is to pass update repo thru -u option: > > rdopkg update -u > git+ssh://@code.engineering.redhat.com/rdo-update.git Sorry about that, rdopkg tool was updated to point to the new public gerrit which is not ready yet, your workaround is correct but of course only Red Hat folks can use it for now. Moving rdoupdate.git to the public gerrit is a priority, Steve is in the process of reconfiguring CI jobs. 
Cheers, Alan From hguemar at fedoraproject.org Mon Feb 9 15:00:03 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 9 Feb 2015 15:00:03 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting Message-ID: <20150209150003.BB70960ABC33@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO packaging meeting on 2015-02-11 from 15:00:00 to 16:00:00 UTC The meeting will be about: RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging)) Every two weeks on #rdo on freenode Source: https://apps.fedoraproject.org/calendar//meeting/2017/ From alon.dotan at Contextream.com Mon Feb 9 12:44:00 2015 From: alon.dotan at Contextream.com (Alon Dotan) Date: Mon, 9 Feb 2015 12:44:00 +0000 Subject: [Rdo-list] High Availability configuration Message-ID: Dear All, Someone managed to configure High Availability? My setup contains 2 CentOS 7 controllers and about 15 compute nodes, I want to configure High Availability between the controllers only Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Mon Feb 9 15:21:46 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 09 Feb 2015 10:21:46 -0500 Subject: [Rdo-list] OpenStack Summit CFP closes TODAY Message-ID: <54D8D08A.3030805@redhat.com> A friendly reminder that the OpenStack Summit CFP closes TODAY. Don't wait. Get your presentations in as soon as possible, at https://www.openstack.org/summit/vancouver-2015/call-for-speakers/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rbowen at redhat.com Mon Feb 9 16:54:03 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 09 Feb 2015 11:54:03 -0500 Subject: [Rdo-list] RDO/OpenStack meetups coming up (Feb 9, 2015) Message-ID: <54D8E62B.60707@redhat.com> The following are the meetups I'm aware of in the coming week where RDO enthusiasts are likely to be present. 
If you know of others, please let me know, and/or add them to http://openstack.redhat.com/Events If there's a meetup in your area, please consider attending. It's the best way to find out what interesting things are going on in the larger community, and a great way to make contacts that will help you solve your own problems in the future. --Rich * Wednesday, February 11 in New York, NY, US: Deep dive on ManageIQ by Red Hat & "I Can Haz Moar Networks?" by Midokura - http://www.meetup.com/OpenStack-for-Enterprises-NYC/events/219853523/ * Thursday, February 12 in Mountain View, CA, US: Online Meetup: Building a cloud-ready Linux image locally using KVM - http://www.meetup.com/Cloud-Online-Meetup/events/220346025/ * Thursday, February 12 in Englewood, CO, US: Intro to OpenStack - http://www.meetup.com/RockyMountainCiscoUsersGroup/events/219386099/ * Thursday, February 12 in Philadelphia, PA, US: Deep dive on ManageIQ by Red Hat & "I Can Haz Moar Networks?" by Midokura - http://www.meetup.com/Philly-OpenStack-Meetup-Group/events/219793923/ * Thursday, February 12 in Portland, OR, US: OpenStack / IaaS Overview by Al Kari of Red Hat - http://www.meetup.com/PortlandRedHatUserGroup/events/219878095/ * Friday, February 13 in Bangalore, IN: Gnunify OpenStack Mini Conference 2015 - http://www.meetup.com/Indian-OpenStack-User-Group/events/220245382/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From pmyers at redhat.com Mon Feb 9 17:12:39 2015 From: pmyers at redhat.com (Perry Myers) Date: Mon, 09 Feb 2015 17:12:39 +0000 Subject: [Rdo-list] High Availability configuration In-Reply-To: References: Message-ID: <54D8EA87.2040608@redhat.com> On 02/09/2015 12:44 PM, Alon Dotan wrote: > Dear All, > > Someone managed to configure High Availability? 
> > My setup contains 2 CentOS 7 controllers and about 15 compute nodes, I think an HA config will require a minimum of 3 controller nodes (primarily because RabbitMQ and Galera operate in odd-numbered clusters) abeekhof is working on getting better docs on the HA stuff on our wiki but you can ask questions here though > I want to configure High Availability between the controllers only > > Thanks, > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From erwan at erwan.com Mon Feb 9 17:34:35 2015 From: erwan at erwan.com (Erwan Gallen) Date: Mon, 9 Feb 2015 18:34:35 +0100 Subject: [Rdo-list] High Availability configuration In-Reply-To: <54D8EA87.2040608@redhat.com> References: <54D8EA87.2040608@redhat.com> Message-ID: <9717C57F-CE40-4E12-8188-FDA9D72828AA@erwan.com> You can find the default RDO HA config with 3 controllers here: https://openstack.redhat.com/HA_Architecture The doc is not completely up to date for MariaDB, Neutron and AMQP. Cheers Erwan > On Feb 9, 2015, at 18:12, Perry Myers wrote: > > On 02/09/2015 12:44 PM, Alon Dotan wrote: >> Dear All, >> >> Someone managed to configure High Availability? 
>> >> My setup contains 2 CentOS 7 controllers and about 15 compute nodes, > > I think an HA config will require a minimum of 3 controller nodes > (primarily because RabbitMQ and Galera operate in odd-numbered clusters) > > abeekhof is working on getting better docs on the HA stuff on our wiki > but you can ask questions here though > >> I want to configure High Availability between the controllers only >> >> Thanks, >> >> >> >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From alon.dotan at Contextream.com Mon Feb 9 18:04:30 2015 From: alon.dotan at Contextream.com (Alon Dotan) Date: Mon, 9 Feb 2015 18:04:30 +0000 Subject: [Rdo-list] High Availability configuration In-Reply-To: <9717C57F-CE40-4E12-8188-FDA9D72828AA@erwan.com> References: <54D8EA87.2040608@redhat.com>, <9717C57F-CE40-4E12-8188-FDA9D72828AA@erwan.com> Message-ID: <1423505069360.18650@Contextream.com> Well, I can add one more controller, that is not an issue. Is there any technical doc? I understood the architecture but I have a hard time implementing it. Is there any reference for CentOS 7? 
I need it for Juno. Currently I have issues with Galera sync; there is no technical guide about Galera and CentOS 7. Thanks, ________________________________ From: Erwan Gallen Sent: Monday, February 9, 2015 7:34 PM To: Perry Myers; Alon Dotan; rdo-list at redhat.com; Andrew Beekhof; David Vossel Subject: Re: [Rdo-list] High Availability configuration You can find the default RDO HA config with 3 controllers here: https://openstack.redhat.com/HA_Architecture The doc is not completely up to date for MariaDB, Neutron and AMQP. Cheers Erwan On Feb 9, 2015, at 18:12, Perry Myers wrote: On 02/09/2015 12:44 PM, Alon Dotan wrote: Dear All, Someone managed to configure High Availability? My setup contains 2 CentOS 7 controllers and about 15 compute nodes, I think an HA config will require a minimum of 3 controller nodes (primarily because RabbitMQ and Galera operate in odd-numbered clusters) abeekhof is working on getting better docs on the HA stuff on our wiki but you can ask questions here though I want to configure High Availability between the controllers only Thanks, _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdo-info at redhat.com Mon Feb 9 18:48:06 2015 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 9 Feb 2015 18:48:06 +0000 Subject: [Rdo-list] [RDO] RDO Blog Roundup, Feb 9, 2015 Message-ID: <0000014b6fab853f-a82db359-bfb1-4104-a543-b39982e406d8-000000@email.amazonses.com> rbowen started a discussion. 
RDO Blog Roundup, Feb 9, 2015 --- Follow the link below to check it out: https://openstack.redhat.com/forum/discussion/1001/rdo-blog-roundup-feb-9-2015 Have a great day! From xzhao at bnl.gov Wed Feb 11 15:52:27 2015 From: xzhao at bnl.gov (Zhao, Xin) Date: Wed, 11 Feb 2015 10:52:27 -0500 Subject: [Rdo-list] expand provider network Message-ID: <54DB7ABB.80107@bnl.gov> Hello, I would like to expand an existing provider network from XXX.0/27 to XXX.0/24, I wonder if the "neutron subnet-update" command will do it, or I need to delete the VM networks/routers/provider networks and recreate them? Thanks, Xin From hguemar at fedoraproject.org Wed Feb 11 15:56:02 2015 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 11 Feb 2015 16:56:02 +0100 Subject: [Rdo-list] [meeting] RDO Packaging meeting minutes (2015-02-11) Message-ID: ======================================== #rdo: RDO packaging meeting (2015-02-11) ======================================== Meeting started by apevec at 15:05:48 UTC. The full logs are available at http://meetbot.fedoraproject.org/rdo/2015-02-11/rdo.2015-02-11-15.05.log.html . Meeting summary --------------- * rollcall (apevec, 15:06:10) * agenda at https://etherpad.openstack.org/p/RDO-Packaging (apevec, 15:06:29) * 2014.2.2 updates, status in f22/RDO (apevec, 15:08:31) * keep Fedora master Juno and update there first, then merge to f22 and build there (apevec, 15:11:09) * EL7 builds will come from CentOS CBS Koji apevec and hguemar are handling that (apevec, 15:11:29) * Kilo import from Delorean to Fedora Rawhide in Kilo RC time frame (apevec, 15:11:50) * ETA to have all 2014.2.2 builds done today (apevec, 15:12:57) * please update to the latest rdopkg 0.24: dnf copr enable jruzicka/rdopkg (apevec, 15:13:58) * packstack/opm update should fix fedora juno issue with kernel 3.18 (apevec, 15:15:41) * Kilo-2 snapshot of http://trunk.rdoproject.org/repos/current/ (apevec, 15:17:47) * ACTION: apevec to follow up with gchamoul re. 
Kilo Packstack/OPM (apevec, 15:22:21) * EL6 Juno updates (apevec, 15:22:57) * ACTION: follow up with alphacc about Juno/EL6 packages (number80, 15:24:50) * ACTION: alphacc will post EL6 Juno Nova spec on the mailing (apevec, 15:33:10) * Review Request: openstack-barbican (apevec, 15:34:28) * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1190269 (apevec, 15:34:43) * ACTION: hguemar reviewing openstack-barbican (number80, 15:39:55) * open floor (apevec, 15:49:15) Meeting ended at 15:52:28 UTC. Action Items ------------ * apevec to follow up with gchamoul re. Kilo Packstack/OPM * follow up with alphacc about Juno/EL6 packages * alphacc will post EL6 Juno Nova spec on the mailing * hguemar reviewing openstack-barbican Action Items, by person ----------------------- * alphacc * follow up with alphacc about Juno/EL6 packages * alphacc will post EL6 Juno Nova spec on the mailing * apevec * apevec to follow up with gchamoul re. Kilo Packstack/OPM * **UNASSIGNED** * hguemar reviewing openstack-barbican People Present (lines said) --------------------------- * apevec (95) * number80 (34) * xaeth (22) * alphacc (6) * ihrachyshka (6) * eggmaster (6) * zodbot (4) * ryansb (2) * rbowen (1) * weshay (1) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot From alon.dotan at Contextream.com Wed Feb 11 15:59:52 2015 From: alon.dotan at Contextream.com (Alon Dotan) Date: Wed, 11 Feb 2015 15:59:52 +0000 Subject: [Rdo-list] expand provider network In-Reply-To: <54DB7ABB.80107@bnl.gov> References: <54DB7ABB.80107@bnl.gov> Message-ID: <0dupeckdk59cj44jq940rr7n.1423670389670@com.syntomo.email> well, you cant update subnet, you can modify via the db or delete and create new net Sent by MailWise ? Your emails, with style.:) -------- Original Message -------- From: "Zhao, Xin" Sent: Wednesday, February 11, 2015 05:53 PM To: rdo-list Subject: [Rdo-list] expand provider network -------------- next part -------------- An HTML attachment was scrubbed... 
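The delete-and-recreate route suggested above would look roughly like the following with the Juno-era neutron CLI. All names, the router, and the CIDRs are illustrative (the thread only gives XXX.0/27 and XXX.0/24), ports on the old subnet — router interfaces and instances — have to be detached first, and the commands are only printed here rather than executed against a cloud:

```shell
# Sketch of widening a provider subnet from /27 to /24 by recreating it.
# Names and addresses are illustrative; the commands are printed, not run.
cmds='neutron router-interface-delete router1 provider-subnet
neutron subnet-delete provider-subnet
neutron subnet-create --name provider-subnet \
    --gateway 192.168.1.1 \
    --allocation-pool start=192.168.1.10,end=192.168.1.200 \
    provider-net 192.168.1.0/24
neutron router-interface-add router1 provider-subnet'
printf '%s\n' "$cmds"
```

Instances keep their old DHCP leases across the recreate, so they would need to renew (or be rebooted) to pick up addressing from the new subnet.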
URL: From mloza at morphlabs.com Wed Feb 11 16:45:11 2015 From: mloza at morphlabs.com (Mark Loza) Date: Thu, 12 Feb 2015 00:45:11 +0800 Subject: [Rdo-list] Problem with Cinder (Juno) and Ceph In-Reply-To: <1535060816.1565352.1421319898465.JavaMail.yahoo@jws11125.mail.ir2.yahoo.com> References: <1535060816.1565352.1421319898465.JavaMail.yahoo@jws11125.mail.ir2.yahoo.com> Message-ID: <54DB8717.4050106@morphlabs.com> Hi, I hope I'm not too late to answer your question. I encountered the same issue with yours last week and I found out that you need to comment out enabled_backends=lvm in cinder.conf and restart cinder. HTH On 1/15/15 7:04 PM, Walter Valenti wrote: > Hi, > we're trying Juno with Ceph (Giant) integration on Centos7. > > The problem is that Cinder-Volume ignores, the directives > for use rbd protocol, but every uses LVM. > > This is our rbd configuration in cinder.conf: > > [DEFAULT] > > volume_driver=cinder.volume.drivers.rbd.RBDDriver > [rbd] > rbd_pool=volumes > rbd_user=cinder > rbd_ceph_conf=etc/ceph/ceph.conf > rbd_flatten_volume_from_snapshot=false > rbd_secret_uuid=ac94d7df-4271-4080-b1fd-8cf577bf7471 > rbd_max_clone_depth=5 > rbd_store_chunk_size=4 > rados_connect_timeout=-1 > > We also tried with all configuration in "rdb" section, and > with all configuration in DEFAULT section, but the problem is the same. > > > Any ideas?? 
> > Thanks > Walter > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From xzhao at bnl.gov Thu Feb 12 22:01:06 2015 From: xzhao at bnl.gov (Zhao, Xin) Date: Thu, 12 Feb 2015 17:01:06 -0500 Subject: [Rdo-list] instance ips not reusable ? 
Message-ID: <54DD22A2.3080008@bnl.gov> Hello, I notice that, in my icehouse testbed that runs neutron in OVS/VLAN mode, ips from the deleted instances don't seem to be reassigned to new instances, new instances always get the next available ip from the range incrementally. I am afraid this will cause problem eventually. On old releases, the ips of the deleted instances get reused pretty quickly on new instances. Do I miss some configuration option? Thanks, Xin From dmitry at athabascau.ca Thu Feb 12 22:14:26 2015 From: dmitry at athabascau.ca (Dmitry Makovey) Date: Thu, 12 Feb 2015 15:14:26 -0700 Subject: [Rdo-list] instance ips not reusable ? In-Reply-To: <54DD22A2.3080008@bnl.gov> References: <54DD22A2.3080008@bnl.gov> Message-ID: <54DD25C2.6030906@athabascau.ca> On 02/12/2015 03:01 PM, Zhao, Xin wrote: > Hello, > > I notice that, in my icehouse testbed that runs neutron in OVS/VLAN > mode, ips from the deleted instances don't seem to be reassigned to new > instances, new instances always get the next available ip from the range > incrementally. I am afraid this will cause problem eventually. On old > releases, the ips of the deleted instances get reused pretty quickly on > new instances. Do I miss some configuration option? In our case - IP that was allocated and associated with an instance stays allocated after instance is shut down, and I am able to associate it with the new VMs just fine. How are you doing your association? Some scripting perhaps? -- Dmitry Makovey Web Systems Administrator Athabasca University (780) 675-6245 --- Confidence is what you have before you understand the problem Woody Allen When in trouble when in doubt run in circles scream and shout http://www.wordwizard.com/phpbb3/viewtopic.php?f=16&t=19330 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: OpenPGP digital signature URL: From xzhao at bnl.gov Fri Feb 13 01:31:32 2015 From: xzhao at bnl.gov (Xin Zhao) Date: Thu, 12 Feb 2015 20:31:32 -0500 Subject: [Rdo-list] instance ips not reusable ? In-Reply-To: <54DD25C2.6030906@athabascau.ca> References: <54DD22A2.3080008@bnl.gov> <54DD25C2.6030906@athabascau.ca> Message-ID: <54DD53F4.8020508@bnl.gov> Hi Dmitry, On 2/12/2015 5:14 PM, Dmitry Makovey wrote: > On 02/12/2015 03:01 PM, Zhao, Xin wrote: >> Hello, >> >> I notice that, in my icehouse testbed that runs neutron in OVS/VLAN >> mode, ips from the deleted instances don't seem to be reassigned to new >> instances, new instances always get the next available ip from the range >> incrementally. I am afraid this will cause problem eventually. On old >> releases, the ips of the deleted instances get reused pretty quickly on >> new instances. Do I miss some configuration option? > In our case - IP that was allocated and associated with an instance > stays allocated after instance is shut down, and I am able to associate > it with the new VMs just fine. > > How are you doing your association? Some scripting perhaps? Are you talking about floating ips that are allocated and associated with running instances? What I am asking is about the regular ips in the VM network, that are assigned by DHCP automatically upon boot time. Thanks, Xin From kchamart at redhat.com Fri Feb 13 17:24:56 2015 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 13 Feb 2015 18:24:56 +0100 Subject: [Rdo-list] RDO Bug triage day on Tuesday [17FEB2015] Message-ID: <20150213172456.GM26717@tesla.redhat.com> Heya, This month's RDO bug triage day is on 13FEB2015. If you have some spare cycles, please join in helping triage bugs/root-cause analysis in your area of expertise. Here's some details to get started[1] with bug triaging. 
Briefly, current state[2] of RDO bugs as of today: - NEW, ASSIGNED, ON_DEV : 227 - MODIFIED, POST, ON_QA : 142 [1] Bugzilla queries -- https://openstack.redhat.com/RDO-BugTriage#Bugzilla_queries [2] Said bugs and their descriptions in plain text -- https://kashyapc.fedorapeople.org/virt/openstack/rdo-bug-status/2015/all-rdo-bugs-13-FEB-2015.txt -- /kashyap From kchamart at redhat.com Fri Feb 13 17:39:37 2015 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 13 Feb 2015 18:39:37 +0100 Subject: [Rdo-list] RDO Bug triage day on Tuesday [17FEB2015] In-Reply-To: <20150213172456.GM26717@tesla.redhat.com> References: <20150213172456.GM26717@tesla.redhat.com> Message-ID: <20150213173937.GA22356@tesla.redhat.com> On Fri, Feb 13, 2015 at 06:24:56PM +0100, Kashyap Chamarthy wrote: > Heya, > > This month's RDO bug triage day is on 13FEB2015. Err, that's a typo in the date, it should be: 17FEB2015 :-) (Thanks Luigi Toscano for pointing this out to me on IRC.) > If you have some spare > cycles, please join in helping triage bugs/root-cause analysis in your > area of expertise. Here's some details to get started[1] with bug > triaging. 
> > Briefly, current state[2] of RDO bugs as of today: > > - NEW, ASSIGNED, ON_DEV : 227 > - MODIFIED, POST, ON_QA : 142 > > > [1] Bugzilla queries -- > https://openstack.redhat.com/RDO-BugTriage#Bugzilla_queries > [2] Said bugs and their descriptions in plain text -- > https://kashyapc.fedorapeople.org/virt/openstack/rdo-bug-status/2015/all-rdo-bugs-13-FEB-2015.txt -- /kashyap From whayutin at redhat.com Sun Feb 15 15:04:15 2015 From: whayutin at redhat.com (whayutin) Date: Sun, 15 Feb 2015 10:04:15 -0500 Subject: [Rdo-list] rdo trunk failing Message-ID: <1424012655.7611.4.camel@redhat.com> f20 rdo trunk currently failing https://prod-rdojenkins.rhcloud.com/view/RDO-Trunk/job/khaleesi-rdo-juno-delorean-fedora-20-aio-packstack-neutron-ml2-vxlan-qpidd-tempest-rpm-minimal/9/console Debug: Executing '/sbin/service httpd status' Debug: Executing '/sbin/chkconfig httpd' Debug: Executing '/sbin/service httpd start' Error: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Error: /Stage[main]/Apache::Service/Service[httpd]/ensure: change from stopped to running failed: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Debug: Executing '/sbin/service httpd status' Debug: /Stage[main]/Apache::Service/Service[httpd]: Skipping restart; service is not running Notice: /Stage[main]/Apache::Service/Service[httpd]: Triggered 'refresh' from 1 events From gchamoul at redhat.com Sun Feb 15 15:20:01 2015 From: gchamoul at redhat.com (Gael Chamoulaud) Date: Sun, 15 Feb 2015 10:20:01 -0500 (EST) Subject: [Rdo-list] rdo trunk failing In-Reply-To: <1424012655.7611.4.camel@redhat.com> References: <1424012655.7611.4.camel@redhat.com> Message-ID: <568705068.12828351.1424013601050.JavaMail.zimbra@zmail12.collab.prod.int.phx2.redhat.com> Hi Wes, Will take a look at that this evening when I will be back home. 
Gaël Sent from my Android phone using TouchDown (www.nitrodesk.com) -----Original Message----- From: whayutin [whayutin at redhat.com] Received: Sunday, 15 Feb 2015, 4:04PM To: Rdo-list at redhat.com [rdo-list at redhat.com] Subject: [Rdo-list] rdo trunk failing f20 rdo trunk currently failing https://prod-rdojenkins.rhcloud.com/view/RDO-Trunk/job/khaleesi-rdo-juno-delorean-fedora-20-aio-packstack-neutron-ml2-vxlan-qpidd-tempest-rpm-minimal/9/console Debug: Executing '/sbin/service httpd status' Debug: Executing '/sbin/chkconfig httpd' Debug: Executing '/sbin/service httpd start' Error: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Error: /Stage[main]/Apache::Service/Service[httpd]/ensure: change from stopped to running failed: Could not start Service[httpd]: Execution of '/sbin/service httpd start' returned 1: Debug: Executing '/sbin/service httpd status' Debug: /Stage[main]/Apache::Service/Service[httpd]: Skipping restart; service is not running Notice: /Stage[main]/Apache::Service/Service[httpd]: Triggered 'refresh' from 1 events _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rmeggins at redhat.com Sun Feb 15 19:48:20 2015 From: rmeggins at redhat.com (Rich Megginson) Date: Sun, 15 Feb 2015 12:48:20 -0700 Subject: [Rdo-list] rdo trunk failing In-Reply-To: <568705068.12828351.1424013601050.JavaMail.zimbra@zmail12.collab.prod.int.phx2.redhat.com> References: <1424012655.7611.4.camel@redhat.com> <568705068.12828351.1424013601050.JavaMail.zimbra@zmail12.collab.prod.int.phx2.redhat.com> Message-ID: <54E0F804.4060305@redhat.com> On 02/15/2015 08:20 AM, Gael Chamoulaud wrote: > Hi Wes, > > Will take a look at that this evening when I will be back home.
> > Gaël > > > Sent from my Android phone using TouchDown (www.nitrodesk.com) > > Is this an selinux issue? > -----Original Message----- > From: whayutin [whayutin at redhat.com] > Received: Sunday, 15 Feb 2015, 4:04PM > To: Rdo-list at redhat.com [rdo-list at redhat.com] > Subject: [Rdo-list] rdo trunk failing > > > f20 rdo trunk currently failing > https://prod-rdojenkins.rhcloud.com/view/RDO-Trunk/job/khaleesi-rdo-juno-delorean-fedora-20-aio-packstack-neutron-ml2-vxlan-qpidd-tempest-rpm-minimal/9/console > > > > Debug: Executing '/sbin/service httpd status' > Debug: Executing '/sbin/chkconfig httpd' > Debug: Executing '/sbin/service httpd start' > Error: Could not start Service[httpd]: Execution of > '/sbin/service httpd start' returned 1: > Error: /Stage[main]/Apache::Service/Service[httpd]/ensure: change > from stopped to running failed: Could not start Service[httpd]: > Execution of '/sbin/service httpd start' returned 1: > Debug: Executing '/sbin/service httpd status' > Debug: /Stage[main]/Apache::Service/Service[httpd]: Skipping > restart; service is not running > Notice: /Stage[main]/Apache::Service/Service[httpd]: Triggered > 'refresh' from 1 events > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From apevec at gmail.com Sun Feb 15 23:24:03 2015 From: apevec at gmail.com (Alan Pevec) Date: Mon, 16 Feb 2015 00:24:03 +0100 Subject: [Rdo-list] rdo trunk failing In-Reply-To: <54E0F804.4060305@redhat.com> References: <1424012655.7611.4.camel@redhat.com> <568705068.12828351.1424013601050.JavaMail.zimbra@zmail12.collab.prod.int.phx2.redhat.com> <54E0F804.4060305@redhat.com> Message-ID: > Is this an selinux issue? Nope, from var/log/messages Feb 15 08:18:10 packstack httpd: AH00526: Syntax error on line 1 of /etc/httpd/conf.d/deflate.conf: Feb 15 08:18:10 packstack httpd: Invalid command 'AddOutputFilterByType', perhaps misspelled or defined by a module not included in the server configuration httpd-2.4.10-1.fc20.x86_64 Trunk openstack-puppet-modules RPM is tracking master branch of puppetlabs-apache and looks like it introduced a bug mishandling 2.2 vs 2.4 differences, mod_deflate was superseded by mod_filter in 2.4. Cheers, Alan From dpkshetty at gmail.com Mon Feb 16 13:33:52 2015 From: dpkshetty at gmail.com (Deepak Shetty) Date: Mon, 16 Feb 2015 14:33:52 +0100 Subject: [Rdo-list] Query on RDO networking Message-ID: Hi, Do we have any documentation on whats the right way in RDO for the Nova VMs connect to external network (eg: 8.8.8.8). I asked this on #rdo but didn't get a response, hence this mail. I am aware of the br-ex - eth0/1/2 magic (i know it from my devstack experience), but wondering if we need to do the same manually on the RDO network node or there is some automation way of achieving it in RDO ? I am on a 3-node RDO setup (Controller, Network, Compute), using rdo-release-juno-1.noarch thanx, deepak -------------- next part -------------- An HTML attachment was scrubbed... URL: From madko77 at gmail.com Mon Feb 16 14:39:13 2015 From: madko77 at gmail.com (Madko) Date: Mon, 16 Feb 2015 14:39:13 +0000 Subject: [Rdo-list] failed to flow_del Message-ID: Hi, we have a lot of flood in the logs about "failed to flow_del". 
/var/log/openvswitch/ovs-vswitchd.log-20150215:2015-02-13T17:43:54.555Z|00011|dpif(revalidator_7)|WARN|system at ovs-system: failed to flow_del (No such file or directory) skb_priority(0),in_port(6),skb_mark(0),eth(src=00:1a:a0:28:ca:cc,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=4001,pcp=0),encap(eth_type(0x0806),arp(sip=10.156.29.184,tip=10.156.20.110,op=1,sha=00:1a:a0:28:ca:cc,tha=00:00:00:00:00:00)) Seems ubuntu has same bug : https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1408972 We are using kernel 3.10.0-123.20.1.el7.x86_64 and openvswitch-2.1.2-2.el7.centos.1.x86_64 Is it a known bug on RDO? Best regards, Edouard -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Mon Feb 16 15:00:03 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 16 Feb 2015 15:00:03 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting Message-ID: <20150216150003.520EF60ABC33@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO packaging meeting on 2015-02-18 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging)) Every week on #rdo on freenode Source: https://apps.fedoraproject.org//calendar//meeting/2017/ From elias.moreno.tec at gmail.com Mon Feb 16 15:39:48 2015 From: elias.moreno.tec at gmail.com (=?UTF-8?B?RWzDrWFzIERhdmlk?=) Date: Mon, 16 Feb 2015 11:09:48 -0430 Subject: [Rdo-list] Query on RDO networking In-Reply-To: References: Message-ID: As far as I remember, the br-ex situation was tricky on rdo. 
You could read something about it here: https://openstack.redhat.com/Neutron_with_existing_external_network (could be obsolete) At some point, rdo was able to add br-ex, but did it wrong, leaving users without connectivity, there was some work/patch on the way to fix how puppet handled br-ex setup (as per document above) but I don't know what happened to it. In my own experience (haven't tested packstack since icehouse), having br-ex already configured with an slave connected to the external network prior to running packstack was the best way to have everything working as it was supposed to after rdo finished On Mon, Feb 16, 2015 at 9:03 AM, Deepak Shetty wrote: > Hi, > Do we have any documentation on whats the right way in RDO for the Nova > VMs connect to external network (eg: 8.8.8.8). I asked this on #rdo but > didn't get a response, hence this mail. > > I am aware of the br-ex - eth0/1/2 magic (i know it from my devstack > experience), but wondering if we need to do the same manually on the RDO > network node or there is some automation way of achieving it in RDO ? > > I am on a 3-node RDO setup (Controller, Network, Compute), using > rdo-release-juno-1.noarch > > thanx, > deepak > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Elías David. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ak at cloudssky.com Mon Feb 16 16:32:54 2015 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Mon, 16 Feb 2015 17:32:54 +0100 Subject: [Rdo-list] Query on RDO networking In-Reply-To: References: Message-ID: Hi Deepak, here is my post about a 2 node deployment, for a 3 node, you shall configure br-ex on the network node: http://cloudssky.com/en/blog/RDO-OpenStack-Juno-ML2-VXLAN-2-Node-Deployment-On-CentOS-7-With-Packstack HTH, Arash On Mon, Feb 16, 2015 at 4:39 PM, Elías David wrote: > As far as I remember, the br-ex situation was tricky on rdo. You could > read something about it here: > https://openstack.redhat.com/Neutron_with_existing_external_network > (could be obsolete) > > At some point, rdo was able to add br-ex, but did it wrong, leaving users > without connectivity, there was some work/patch on the way to fix how > puppet handled br-ex setup (as per document above) but I don't know what > happened to it. > > In my own experience (haven't tested packstack since icehouse), having > br-ex already configured with an slave connected to the external network > prior to running packstack was the best way to have everything working as > it was supposed to after rdo finished > > On Mon, Feb 16, 2015 at 9:03 AM, Deepak Shetty > wrote: > >> Hi, >> Do we have any documentation on whats the right way in RDO for the Nova >> VMs connect to external network (eg: 8.8.8.8). I asked this on #rdo but >> didn't get a response, hence this mail. >> >> I am aware of the br-ex - eth0/1/2 magic (i know it from my devstack >> experience), but wondering if we need to do the same manually on the RDO >> network node or there is some automation way of achieving it in RDO ?
>> >> I am on a 3-node RDO setup (Controller, Network, Compute), using >> rdo-release-juno-1.noarch >> >> thanx, >> deepak >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > > > -- > Elías David. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Mon Feb 16 19:29:13 2015 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 16 Feb 2015 14:29:13 -0500 Subject: [Rdo-list] RDO/OpenStack meetups coming up (Feb 16, 2015) Message-ID: <54E24509.2080603@redhat.com> The following are the meetups I'm aware of in the coming week where RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://openstack.redhat.com/Events If there's a meetup in your area, please consider attending. It's the best way to find out what interesting things are going on in the larger community, and a great way to make contacts that will help you solve your own problems in the future.
--Rich * Monday, February 16 in Guadalajara, MX: OpenStack & Docker - http://www.meetup.com/OpenStack-GDL/events/220237882/ * Tuesday, February 17 in Calgary, AB, CA: OpenStack Networking and Data Storage solutions - http://www.meetup.com/Calgary-OpenStack-Meetup/events/219945084/ * Tuesday, February 17 in Chesterfield, MO, US: OpenStack Object Storage - http://www.meetup.com/OpenStack-STL/events/220318049/ * Wednesday, February 18 in Helsinki, FI: OpenShift Users Meetup - http://www.meetup.com/RedHatFinland/events/219689228/ * Wednesday, February 18 in Cambridge, MA, US: Platform as a Service (PaaS) and OpenStack Architecture / Orchestration - http://www.meetup.com/Cloud-Centric-Boston/events/220265219/ * Wednesday, February 18 in Pasadena, CA, US: What is Trove? OpenStack L.A. February '15 Meetup - http://www.meetup.com/OpenStack-LA/events/219262037/ * Wednesday, February 18 in Stuttgart, DE: 1. Treffen - http://www.meetup.com/OpenStack-Baden-Wuerttemberg/events/219990894/ * Wednesday, February 18 in Santa Monica, CA, US: February 2015 OpenStack LA - OpenStack Trove Project - http://www.meetup.com/LAWebSpeed/events/220282039/ * Thursday, February 19 in Los Angeles, CA, US: SCaLE 13x - http://www.meetup.com/LinuxLA/events/219676387/ * Thursday, February 19 in Vancouver, BC, CA: OpenStack Networking and Data storage solutions - http://www.meetup.com/Vancouver-OpenStack-Meetup/events/220329956/ * Thursday, February 19 in Boston, MA, US: Double Header Meetup! 
OpenStack and VMWare Integration Best Practices - http://www.meetup.com/Openstack-Boston/events/218863008/ * Thursday, February 19 in Baltimore, MD, US: OpenStack Baltimore Meetup #2 - http://www.meetup.com/OpenStack-Baltimore/events/219933731/ * Thursday, February 19 in Austin, TX, US: Speed OpenStack NFV deployments with PLUMgrid - http://www.meetup.com/OpenStack-Austin/events/218909556/ * Thursday, February 19 in Whittier, CA, US: Introduction to Red Hat and OpenShift (cohosted with Cal Poly Pomona) - http://www.meetup.com/Southern-California-Red-Hat-User-Group-RHUG/events/216824212/ * Thursday, February 19 in Sunnyvale, CA, US: Openstack, Containers, and the Private Cloud - http://www.meetup.com/BayLISA/events/219854114/ * Monday, February 23 in Saint Paul, MN, US: Kicking off 2015 with a huge Minnesota OpenStack Meetup! - http://www.meetup.com/Minnesota-OpenStack-Meetup/events/219791086/ * Monday, February 23 in Melbourne, AU: Australian OpenStack User Group - Quarterly Brisbane Meetup - http://www.meetup.com/Australian-OpenStack-User-Group/events/201085722/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rdo-info at redhat.com Mon Feb 16 21:01:01 2015 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 16 Feb 2015 21:01:01 +0000 Subject: [Rdo-list] [RDO] RDO blog roundup, February 16, 2015 Message-ID: <0000014b9431b8d8-f9263436-aa39-41e1-975a-3b9fc672a374-000000@email.amazonses.com> rbowen started a discussion. RDO blog roundup, February 16, 2015 --- Follow the link below to check it out: https://openstack.redhat.com/forum/discussion/1002/rdo-blog-roundup-february-16-2015 Have a great day! 
From ihrachys at redhat.com Tue Feb 17 10:01:37 2015 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Tue, 17 Feb 2015 11:01:37 +0100 Subject: [Rdo-list] failed to flow_del In-Reply-To: References: Message-ID: <54E31181.4050305@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 02/16/2015 03:39 PM, Madko wrote: > Hi, > > we have a lot of flood in the logs about "failed to flow_del". > > /var/log/openvswitch/ovs-vswitchd.log-20150215:2015-02-13T17:43:54.555Z|00011|dpif(revalidator_7)|WARN|system at ovs-system: > > failed to flow_del (No such file or directory) > skb_priority(0),in_port(6),skb_mark(0),eth(src=00:1a:a0:28:ca:cc,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=4001,pcp=0),encap(eth_type(0x0806),arp(sip=10.156.29.184,tip=10.156.20.110,op=1,sha=00:1a:a0:28:ca:cc,tha=00:00:00:00:00:00)) > > Seems ubuntu has same bug : > https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1408972 > > We are using kernel 3.10.0-123.20.1.el7.x86_64 and > openvswitch-2.1.2-2.el7.centos.1.x86_64 > > Is it a known bug on RDO? Brief search thru the web shows that it's a known issue and was fixed in OVS as of https://github.com/openvswitch/ovs/commit/3601bd879 that is included in 2.1.3. I guess the commit was not backported to EL7 package, hence the error. You can report a bug against RDO to track the backport and/or version bump for OVS. 
/Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJU4xF+AAoJEC5aWaUY1u57XcEH/jXVuW/rjkUe0Wiie5twK/sH 3gtAjsRctG9QQt3K4YqkzFGmZwgsyJ2GLn5tHRcPbg6cdFySWuSfjDnumiqi7fDN n7g4LmGyvbaYdf0JM295DYCGkTj2tgkJ4+uW/pbrQ2vG6itfLWvWKdbbRoEyFpiL PdaDZnHGFfIuvG5HQ1tK8DyKSUN/aUXSxQctXp81K+ltf71Ae+muH/WlWW3wvE2I EYr49oj/tKvp3qZDy/idCOEOoEEIedSxlXT3WzqyHn42RLFPUf4eOjQQ6RFNoPYx m+xfgKtAdFcvEV4/EuZn6UzY2vY9RtdchshCaLylU28GsDqybiC+PUzkDMjJK8M= =Mui0 -----END PGP SIGNATURE----- From madko77 at gmail.com Tue Feb 17 10:30:51 2015 From: madko77 at gmail.com (Madko) Date: Tue, 17 Feb 2015 10:30:51 +0000 Subject: [Rdo-list] failed to flow_del References: <54E31181.4050305@redhat.com> Message-ID: Thanks Ihar, I've just reported this as you suggested: https://bugzilla.redhat.com/show_bug.cgi?id=1193429 On Tue Feb 17 2015 at 11:03:23, Ihar Hrachyshka wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 02/16/2015 03:39 PM, Madko wrote: > > Hi, > > > > we have a lot of flood in the logs about "failed to flow_del". > > > > /var/log/openvswitch/ovs-vswitchd.log-20150215:2015-02- 13T17:43:54.555Z|00011|dpif(revalidator_7)|WARN|system at ovs-system: > > > > > failed to flow_del (No such file or directory) > > skb_priority(0),in_port(6),skb_mark(0),eth(src=00:1a:a0: 28:ca:cc,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid= 4001,pcp=0),encap(eth_type(0x0806),arp(sip=10.156.29.184, tip=10.156.20.110,op=1,sha=00:1a:a0:28:ca:cc,tha=00:00:00:00:00:00)) > > > > Seems ubuntu has same bug : > > https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1408972 > > > > We are using kernel 3.10.0-123.20.1.el7.x86_64 and > > openvswitch-2.1.2-2.el7.centos.1.x86_64 > > > > Is it a known bug on RDO? > > Brief search thru the web shows that it's a known issue and was fixed > in OVS as of https://github.com/openvswitch/ovs/commit/3601bd879 that > is included in 2.1.3. I guess the commit was not backported to EL7 > package, hence the error.
> > You can report a bug against RDO to track the backport and/or version > bump for OVS. > > /Ihar > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1 > > iQEcBAEBAgAGBQJU4xF+AAoJEC5aWaUY1u57XcEH/jXVuW/rjkUe0Wiie5twK/sH > 3gtAjsRctG9QQt3K4YqkzFGmZwgsyJ2GLn5tHRcPbg6cdFySWuSfjDnumiqi7fDN > n7g4LmGyvbaYdf0JM295DYCGkTj2tgkJ4+uW/pbrQ2vG6itfLWvWKdbbRoEyFpiL > PdaDZnHGFfIuvG5HQ1tK8DyKSUN/aUXSxQctXp81K+ltf71Ae+muH/WlWW3wvE2I > EYr49oj/tKvp3qZDy/idCOEOoEEIedSxlXT3WzqyHn42RLFPUf4eOjQQ6RFNoPYx > m+xfgKtAdFcvEV4/EuZn6UzY2vY9RtdchshCaLylU28GsDqybiC+PUzkDMjJK8M= > =Mui0 > -----END PGP SIGNATURE----- > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Tue Feb 17 16:20:03 2015 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 17 Feb 2015 11:20:03 -0500 Subject: [Rdo-list] RDO bookmarks - Feedback requested Message-ID: <54E36A33.2020907@redhat.com> Some of you have seen the RDO bookmarks that I hand out at various events. They were designed 2 years ago by Dave Neary, and have been updated a few times since then. It's time for another refresh to reflect the changes in the OpenStack world. I'm also planning to remove the bit of the bookmark that lists project names and definitions, since that was already somewhat out of date, and is even more so given Thierry's blog post from yesterday. Anyways, I've put the latest version of the source file for the bookmark at https://openstack.redhat.com/images/bookmark/rdo_bookmark.odt and would appreciate any feedback from any of you who have opinions regarding what should/should not be on there, what changes we need to make, and how we can generally make it more useful to users of RDO and OpenStack in general. Thanks! 
-- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From jruzicka at redhat.com Tue Feb 17 16:21:18 2015 From: jruzicka at redhat.com (Jakub Ruzicka) Date: Tue, 17 Feb 2015 17:21:18 +0100 Subject: [Rdo-list] rdopkg overview In-Reply-To: <20150205154918.GD7971@redhat.com> References: <20150129214843.GG24719@redhat.com> <54CB8CB7.6030004@redhat.com> <20150130145325.GL24719@redhat.com> <20150205154918.GD7971@redhat.com> Message-ID: <54E36A7E.9050604@redhat.com> On 5.2.2015 16:49, Steve Linabery wrote: > On Fri, Jan 30, 2015 at 08:53:25AM -0600, Steve Linabery wrote: >> On Fri, Jan 30, 2015 at 02:52:55PM +0100, Jakub Ruzicka wrote: >>> Very nice overview Steve, thanks for writing this down! >>> >>> My random thoughts on the matter inline. >>> >>> On 29.1.2015 22:48, Steve Linabery wrote: >>>> I have been struggling with the amount of information to convey and what level of detail to include. Since I can't seem to get it perfect to my own satisfaction, here is the imperfect (and long, sorry) version to begin discussion. >>>> >>>> This is an overview of where things stand (rdopkg CI 'v0.1'). >>> >>> For some time, I'm wondering if we should really call it rdopkg CI since >>> it's not really tied to rdopkg but to RDO. You can use most of rdopkg on >>> any distgit. I reckon we should simply call it RDO CI to avoid >>> confusion. I for one don't underestimate the impact of naming stuff ;) >>> >>>> Terminology: >>>> 'Release' refers to an OpenStack release (e.g. havana,icehouse,juno) >>>> 'Dist' refers to a distro supported by RDO (e.g. fedora-20, epel-6, epel-7) >>>> 'phase1' is the initial smoketest for an update submitted via `rdopkg update` >>>> 'phase2' is a full-provision test for accumulated updates that have passed phase1 >>>> 'snapshot' means an OpenStack snapshot of a running instance, i.e. a disk image created from a running OS instance. 
>>>> >>>> The very broad strokes: >>>> ----------------------- >>>> >>>> rdopkg ci is triggered when a packager uses `rdopkg update`. >>>> >>>> When a review lands in the rdo-update gerrit project, a 'phase1' smoketest is initiated via jenkins for each Release/Dist combination present in the update (e.g. if the update contains builds for icehouse/fedora-20 and icehouse/epel-6, each set of RPMs from each build will be smoketested on an instance running the associated Release/Dist). If *all* supported builds from the update pass phase1, then the update is merged into rdo-update. Updates that pass phase1 accumulate in the updates/ directory in the rdo-update project. >>>> >>>> Periodically, a packager may run 'phase2'. This takes everything in updates/ and uses those RPMs + RDO production repo to provision a set of base images with packstack aio. Again, a simple tempest test is run against the packstack aio instances. If all pass, then phase2 passes, and the `rdopkg update` yaml files are moved from updates/ to ready/. >>>> >>>> At that point, someone with the keys to the stage repos will push the builds in ready/ to the stage repo. If CI against stage repo passes, stage is rsynced to production. >>>> >>>> Complexity, Part 1: >>>> ------------------- >>>> >>>> Rdopkg CI v0.1 was designed around the use of OpenStack VM disk snapshots. On a periodic basis, we provision two nodes for each supported combination in [Releases] X [Dists] (e.g. "icehouse, fedora-20" "juno, epel-7" etc). One node is a packstack aio instance built against RDO production repos, and the other is a node running tempest. After a simple tempest test passes for all the packstack aio nodes, we would snapshot the set of instances. Then when we want to do a 'phase1' test for e.g. "icehouse, fedora-20", we can spin up the instances previously snapshotted and save the time of re-running packstack aio. >>>> >>>> Using snapshots saves approximately 30 min of wait time per test run by skipping provisioning. 
Using snapshots imposes a few substantial costs/complexities though. First and most significant, snapshots need to be reinstantiated using the same IP addresses that were present when packstack and tempest were run during the provisioning. This means we have to have concurrency control around running only one phase1 run at a time; otherwise an instance might fail to provision because its 'static' IP address is already in use by another run. The second cost is that in practice, a) our OpenStack infrastructure has been unreliable, b) not all Release/Dist combinations reliably provision. So it becomes hard to create a full set of snapshots reliably. >>>> >>>> Additionally, some updates (e.g. when an update comes in for openstack-puppet-modules) prevent the use of a previously-provisioned packstack instance. Continuing with the o-p-m example: that package is used for provisioning. So simply updating the RPM for that package after running packstack aio doesn't tell us anything about the package sanity (other than perhaps if a new, unsatisfied RPM dependency was introduced). >>>> >>>> Another source of complexity comes from the nature of the rdopkg update 'unit'. Each yaml file created by `rdopkg update` can contain multiple builds for different Release,Dist combinations. So there must be a way to 'collate' the results of each smoketest for each Release,Dist and pass phase1 only if all updates pass. Furthermore, some combinations of Release,Dist are known (at times, for various ad hoc reasons) to fail testing, and those combinations sometimes need to be 'disabled'. For example, if we know that icehouse/f20 is 'red' on a given day, we might want an update containing icehouse/fedora-20,icehouse/epel-6 to test only the icehouse/epel-6 combination and pass if that passes. 
>>>> >>>> Finally, pursuant to the previous point, there need to be 'control' structure jobs for provision/snapshot, phase1, and phase2 runs that pass (and perform some action upon passing) only when all their 'child' jobs have passed. >>>> >>>> The way we have managed this complexity to date is through the use of the jenkins BuildFlow plugin. Here's some ASCII art (courtesy of 'tree') to show how the jobs are structured now (these are descriptive job names, not the actual jenkins job names). BuildFlow jobs are indicated by (bf). >>>> >>>> . >>>> `-- rdopkg_master_flow (bf) >>>> |-- provision_and_snapshot (bf) >>>> | |-- provision_and_snapshot_icehouse_epel6 >>>> | |-- provision_and_snapshot_icehouse_f20 >>>> | |-- provision_and_snapshot_juno_epel7 >>>> | `-- provision_and_snapshot_juno_f21 >>>> |-- phase1_flow (bf) >>>> | |-- phase1_test_icehouse_f20 >>>> | `-- phase1_test_juno_f21 >>>> `-- phase2_flow (bf) >>>> |-- phase2_test_icehouse_epel6 >>>> |-- phase2_test_icehouse_f20 >>>> |-- phase2_test_juno_epel7 >>>> `-- phase2_test_juno_f21 >>> >>> As a consumer of CI results, my main problem with this is it takes about >>> 7 clicks to get to the actual error. >>> >>> >>>> When a change comes in from `rdopkg update`, the rdopkg_master_flow job is triggered. It's the only job that gets triggered from gerrit, so it kicks off phase1_flow. phase1_flow runs 'child' jobs (normal jenkins jobs, not buildflow) for each Release,Dist combination present in the update. >>>> >>>> provision_and_snapshot is run by manually setting a build parameter (BUILD_SNAPS) in the rdopkg_master_flow job, and triggering the build of rdopkg_master_flow. >>>> >>>> phase2 is invoked similar to the provision_and_snapshot build, by checking 'RUN_PHASE2' in the rdopkg_master_flow build parameters before executing a build thereof. >>>> >>>> Concurrency control is a side effect of requiring the user or gerrit to execute rdopkg_master_flow for every action. 
There can be only one rdopkg_master_flow build executing at any given time. >>>> >>>> Complexity, Part 2: >>>> ------------------- >>>> >>>> In addition to the nasty complexity of using nested BuildFlow type jobs, each 'worker' job (i.e. the non-buildflow type jobs) has some built in complexity that is reflected in the amount of logic in each job's bash script definition. >>>> >>>> Some of this has been alluded to in previous points. For instance, each job in the phase1 flow needs to determine, for each update, if the update contains a package that requires full packstack aio provisioning from a base image (e.g. openstack-puppet-modules). This 'must provision' list needs to be stored somewhere that all jobs can read it, and it needs to be dynamic enough to add to it as requirements dictate. >>>> >>>> But additionally, for package sets not requiring provisioning a base image, phase1 job needs to query the backing OpenStack instance to see if there exists a 'known good' snapshot, get the images' UUIDs from OpenStack, and spin up the instances using the snapshot images. >>>> >>>> This baked-in complexity in the 'worker' jenkins jobs has made it difficult to maintain the job definitions, and more importantly difficult to run using jjb or in other more 'orthodox' CI-type ways. The rdopkg CI stuff is a bastard child of a fork. It lives in its own mutant gene pool. >>> >>> lolololol >>> >>> >>>> A Way Forward...? >>>> ---------------- >>>> >>>> Wes Hayutin had a good idea that might help reduce some of the complexity here as we contemplate a) making rdopkg CI public, b) moving toward rdopkg CI 0.2. >>>> >>>> His idea was a) stop using snapshots since the per-test-run savings doesn't seem to justify the burden they create, b) do away with BuildFlow by including the 'this update contains builds for (Release1,Dist2),...,(ReleaseN,DistM)' information in the gerrit change topic. >>> >>> It's easy to modify `rdopkg update` to include this information. 
>>> However, it's redundant so you can (in theory) submit an update where >>> this summary won't match the actual YAML data. That's probably >>> completely irrelevant, but I'm mentioning it nonetheless :) >>> >> >> This is less a response to Jakub's comment here and more an additional explanation of why this idea is so nice. >> >> Currently, when phase1_flow is triggered, it ssh's to a separate host to run `rdoupdate check` (because BuildFlow jobs execute on the jenkins master node, disregarding any setting to run them on a particular slave), and parse the output to determine what Release,Dist combinations need to be tested. >> >> The gerrit topic approach would allow us to have N jobs listening to gerrit trigger events, but e.g. the juno/epel-7 job would only execute if the gerrit topic matched that job's regexp. The gerrit review would only get its +2 when these jobs complete successfully. >> >> It would be nice to decide what that topic string ought to look like so that a) we have sanity in the regexp, b) we are sure gerrit will support long strings with whatever odd chars we may wish to use, etc. >> > > I'll propose a format for the gerrit topic string. Let's say a build includes updates for: > icehouse,fedora-20 > juno,fedora-21 > juno,epel-7 > > The resulting topic string would be: > icehouse_fedora-20/juno_fedora-21_epel-7 > > Then a job triggered off gerrit would have a regex like '.*$release[^/]+$dist.*' > > So for example, the icehouse/fedora-20 phase1 job would have '.*icehouse[^/]+fedora-20.*' which would match the gerrit topic, so that test would run. > > Jakub, could we have `rdopkg update` generate the topic string as indicated based on the contents of the update? Hey, I just pushed the requested change to rdopkg. It will be included in the next rdopkg release (0.25). 
https://github.com/redhat-openstack/rdopkg/commit/09bd0dfc9ed928123e94b07cd208a95da2fc3f4c Cheers Jakub From vedsarkushwaha at gmail.com Wed Feb 18 18:13:20 2015 From: vedsarkushwaha at gmail.com (Vedsar Kushwaha) Date: Wed, 18 Feb 2015 23:43:20 +0530 Subject: [Rdo-list] (no subject) Message-ID: Can anyone explain me the meaning of below line taken from the page " https://openstack.redhat.com/Neutron_with_existing_external_network": "You need to recreate the public subnet with an allocation range outside of your external DHCP range and set the gateway to the default gateway of the external network. " My IP address is 10.16.37.222 and I'm on institute network with proxy. What if I give allocation range from the external DHCP? Also guide me some good links to setup the network of openstack. The default network I got after rdo installation is 172.24.4.0/24. -- Vedsar Kushwaha M.Tech-Computational Science Indian Institute of Science -------------- next part -------------- An HTML attachment was scrubbed... URL: From jbrooks at redhat.com Wed Feb 18 18:27:30 2015 From: jbrooks at redhat.com (Jason Brooks) Date: Wed, 18 Feb 2015 13:27:30 -0500 (EST) Subject: [Rdo-list] (no subject) In-Reply-To: References: Message-ID: <1947092576.18575674.1424284050762.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Vedsar Kushwaha" > To: "Rdo-list at redhat.com" > Sent: Wednesday, February 18, 2015 10:13:20 AM > Subject: [Rdo-list] (no subject) > > Can anyone explain me the meaning of below line taken from the page " > https://openstack.redhat.com/Neutron_with_existing_external_network": > > "You need to recreate the public subnet with an allocation range outside of > your external DHCP range and set the gateway to the default gateway of the > external network. " > > My IP address is 10.16.37.222 and I'm on institute network with proxy. > > What if I give allocation range from the external DHCP? 
If the range from your external dhcp and the range for your floating IPs overlap, you can end up w/ multiple machines getting the same IP. > > Also guide me some good links to setup the network of openstack. The > default network I got after rdo installation is 172.24.4.0/24. This may help you: http://community.redhat.com/blog/2015/01/rdo-quickstart-doing-the-neutron-dance/ Regards, Jason > -- > Vedsar Kushwaha > M.Tech-Computational Science > Indian Institute of Science > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From jupiter.hce at gmail.com Thu Feb 19 05:40:49 2015 From: jupiter.hce at gmail.com (jupiter) Date: Thu, 19 Feb 2015 16:40:49 +1100 Subject: [Rdo-list] Install rdo on rack system Hardware and GPU Message-ID: Hi, I like to install rdo manually, and I find the Red Hat Enterprise Linux OpenStack Platform 5 document has very good description for manually installing openstack. Is it compatible to install rdo manually following instruction of that document on CentOS 7? Also, I like to install rdo in a rack hardware system with ceph storage and GPUs, has any one succeeded on rack system with GPU? Appreciate your advice to recommend cheap and good rack hardware. Thank you. Kind regards, - j -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian at berendt.io Thu Feb 19 08:22:51 2015 From: christian at berendt.io (Christian Berendt) Date: Thu, 19 Feb 2015 09:22:51 +0100 Subject: [Rdo-list] Install rdo on rack system Hardware and GPU In-Reply-To: References: Message-ID: <54E59D5B.7040603@berendt.io> On 02/19/2015 06:40 AM, jupiter wrote: > Appreciate your advice to recommend cheap and good rack hardware. You should describe your use case in detail. Christian. 
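Jason's point earlier in this thread about overlapping DHCP and floating-IP ranges can be checked mechanically: two inclusive IP ranges collide exactly when each starts at or before the other ends. A small sketch with Python's standard `ipaddress` module — the DHCP and pool ranges below are made up for illustration, not taken from anyone's actual network:

```python
from ipaddress import ip_address

def ranges_overlap(a_start, a_end, b_start, b_end):
    """True if the two inclusive IPv4 ranges share at least one address."""
    a1, a2 = ip_address(a_start), ip_address(a_end)
    b1, b2 = ip_address(b_start), ip_address(b_end)
    return a1 <= b2 and b1 <= a2

# Hypothetical external DHCP pool vs. two candidate floating-IP pools:
dhcp = ("10.16.37.100", "10.16.37.200")
print(ranges_overlap(*dhcp, "10.16.37.225", "10.16.37.254"))  # False: safe choice
print(ranges_overlap(*dhcp, "10.16.37.150", "10.16.37.254"))  # True: duplicate-IP risk
```

If the check returns True, the external DHCP server and Neutron can each hand the same address to a different machine, which is exactly the failure mode described above.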
From apevec at gmail.com Thu Feb 19 08:26:03 2015 From: apevec at gmail.com (Alan Pevec) Date: Thu, 19 Feb 2015 09:26:03 +0100 Subject: [Rdo-list] RDO meeting - CANCELED Message-ID: Meeting is now on IRC Freenode #rdo every Wed 1500 UTC agenda at https://etherpad.openstack.org/p/RDO-Packaging -------------- next part -------------- A non-text attachment was scrubbed... Name: invite.ics Type: text/calendar Size: 1264 bytes Desc: not available URL: From christian at berendt.io Thu Feb 19 08:34:38 2015 From: christian at berendt.io (Christian Berendt) Date: Thu, 19 Feb 2015 09:34:38 +0100 Subject: [Rdo-list] RDO bookmarks - Feedback requested In-Reply-To: <54E36A33.2020907@redhat.com> References: <54E36A33.2020907@redhat.com> Message-ID: <54E5A01E.5090706@berendt.io> On 02/17/2015 05:20 PM, Rich Bowen wrote: > Some of you have seen the RDO bookmarks that I hand out at various > events. They were designed 2 years ago by Dave Neary, and have been > updated a few times since then. It's time for another refresh to reflect > the changes in the OpenStack world. It is a great idea. > I'm also planning to remove the bit of the bookmark that lists project > names and definitions, since that was already somewhat out of date, and > is even more so given Thierry's blog post from yesterday. +1 The 'nova-manage service list' command should be replaced with 'nova service-list' or should be removed. I would prefer to remove it because it is not related to the topic 'Managing instances'. It would be nice to have the commands to manage/assign floating IP addresses. Maybe it should be mentioned how to create a new network and a new subnetwork with neutron. It should be 'glance image-list' (at the moment it is 'glance-image-list'). It should be 'packstack --allinone' (at the moment it is 'packstack -allinone'). It would be nice to list http://docs.openstack.org as a useful URL. Regarding trystack.org: Is it still not possible to use trystack.org without a Facebook account? Christian. 
From jupiter.hce at gmail.com Thu Feb 19 09:56:22 2015 From: jupiter.hce at gmail.com (jupiter) Date: Thu, 19 Feb 2015 20:56:22 +1100 Subject: [Rdo-list] Install rdo on rack system Hardware and GPU In-Reply-To: <54E59D5B.7040603@berendt.io> References: <54E59D5B.7040603@berendt.io> Message-ID: Thanks Alan and Christian. The use case is to set up a private cloud for a school research project; I want to set it up manually because I want to learn it by practice. A rack system could be similar to the Dell Red Hat Cloud Pilot Bundle with Ceph Storage hardware. Also, I need GPUs in the compute nodes for running CUDA, 3D and OpenGL applications. I have a limited budget and I'd like to know the price for a single-cabinet rack system; hopefully it can be extended when I've got more budget. It may be possible to buy the Dell pilot bundle hardware if the price is moderate, but I've been asking Dell for pricing for more than a month and have never received the price information. Anyway, does anyone know what kind of budget is needed for a single-cabinet rack system? Sorry for the off-topic question. Thank you and appreciate it. Kind regards, - j On 2/19/15, Christian Berendt wrote: > On 02/19/2015 06:40 AM, jupiter wrote: >> Appreciate your advice to recommend cheap and good rack hardware. > > You should describe your use case in detail. > > Christian. > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From apevec at gmail.com Thu Feb 19 10:40:24 2015 From: apevec at gmail.com (Alan Pevec) Date: Thu, 19 Feb 2015 11:40:24 +0100 Subject: [Rdo-list] [meeting] RDO packaging meeting minutes (2015-02-18) Message-ID: ======================================== #rdo: RDO packaging meeting (2015-02-18) ======================================== Meeting started by apevec at 15:01:47 UTC. 
The full logs are available at http://meetbot.fedoraproject.org/rdo/2015-02-18/rdo.2015-02-18-15.01.log.html . Meeting summary --------------- * roll-call (apevec, 15:02:03) * agenda at https://etherpad.openstack.org/p/RDO-Packaging (apevec, 15:02:15) * RDO update CI status (apevec, 15:04:49) * rdo-update internal and external events are now triggering phase1, should have legit results shortly after meeting (eggmaster, 15:06:37) * eggmaster will retrigger queued recent updates to send them through phase1 (eggmaster, 15:06:59) * LINK: https://prod-rdojenkins.rhcloud.com/ (apevec, 15:08:28) * ACTION: jruzicka to push rdopkg 0.25 (apevec, 15:08:39) * ACTION: apevec to move pending updates from internal gerrit to gerrithub rdo-update.git (apevec, 15:08:58) * Kilo Packstack/OPM status (apevec, 15:11:39) * current Delorean openstack-puppet-modules is building from all puppet modules master branches (apevec, 15:12:27) * which does not work with Packstack (apevec, 15:12:38) * ACTION: apevec is modifying build_rpm_opm.sh to build from redhat-openstack/OPM master branch (apevec, 15:13:06) * ACTION: gchamoul to create packstack kilo branch (apevec, 15:26:15) * ACTION: gchamoul to create packstack/opm kilo branches (gchamoul, 15:26:47) * ACTION: apevec will modify build_rpm_opm.sh to build from redhat-openstack/OPM master-patches (apevec, 15:33:04) * EL6 Juno status (apevec, 15:37:21) * number80 started working on clients (apevec, 15:38:31) * open floor (apevec, 15:41:06) * LINK: http://trunk.rdoproject.org/ is a Fedora test page. So we still need that DNS record updated, right? (rbowen, 15:46:27) * ACTION: rbowen will provide index.html landing page for trunk.rdoproject.org (rbowen, 15:49:28) * LINK: https://etherpad.openstack.org/p/RDO-Trunk (apevec, 15:51:35) Meeting ended at 15:55:45 UTC. 
Action Items ------------ * jruzicka to push rdopkg 0.25 * apevec to move pending updates from internal gerrit to gerrithub rdo-update.git * apevec is modifying build_rpm_opm.sh to build from redhat-openstack/OPM master branch * gchamoul to create packstack kilo branch * gchamoul to create packstack/opm kilo branches * apevec will modify build_rpm_opm.sh to build from redhat-openstack/OPM master-patches * rbowen will provide index.html landing page for trunk.rdoproject.org Action Items, by person ----------------------- * apevec * apevec to move pending updates from internal gerrit to gerrithub rdo-update.git * apevec will modify build_rpm_opm.sh to build from redhat-openstack/OPM master-patches * gchamoul * gchamoul to create packstack/opm kilo branches [op.ed. not yet, master branches will be used for Kilo] * jruzicka * jruzicka to push rdopkg 0.25 * rbowen * rbowen will provide index.html landing page for trunk.rdoproject.org People Present (lines said) --------------------------- * apevec (112) * gchamoul (27) * rbowen (13) * eggmaster (13) * number80 (13) * ihrachyshka (4) * derekh (4) * zodbot (3) * jruzicka (3) * Rodrigo_US (2) * ryansb (1) * panda (1) Generated by `MeetBot`_ 0.1.4 .. 
_`MeetBot`: http://wiki.debian.org/MeetBot From vedsarkushwaha at gmail.com Thu Feb 19 13:13:32 2015 From: vedsarkushwaha at gmail.com (Vedsar Kushwaha) Date: Thu, 19 Feb 2015 18:43:32 +0530 Subject: [Rdo-list] (no subject) In-Reply-To: <1947092576.18575674.1424284050762.JavaMail.zimbra@redhat.com> References: <1947092576.18575674.1424284050762.JavaMail.zimbra@redhat.com> Message-ID: Hi Jason I followed the link http://community.redhat.com/blog/2015/01/rdo-quickstart-doing-the-neutron-dance/ And it struck at the command: neutron subnet-create --name public_subnet --enable_dhcp=False --allocation_pool start=10.16.37.225,end=10.16.37.254 --gateway=10.16.37.1 public 10.16.37.0/27 Bad Request (HTTP 400) (Request-ID: req-668832aa-67a1-4b0d-ad6e-7a2c8aae905f) Can you please guide me, what should I do next? On Wed, Feb 18, 2015 at 11:57 PM, Jason Brooks wrote: > > > ----- Original Message ----- > > From: "Vedsar Kushwaha" > > To: "Rdo-list at redhat.com" > > Sent: Wednesday, February 18, 2015 10:13:20 AM > > Subject: [Rdo-list] (no subject) > > > > Can anyone explain me the meaning of below line taken from the page " > > https://openstack.redhat.com/Neutron_with_existing_external_network": > > > > "You need to recreate the public subnet with an allocation range outside > of > > your external DHCP range and set the gateway to the default gateway of > the > > external network. " > > > > My IP address is 10.16.37.222 and I'm on institute network with proxy. > > > > What if I give allocation range from the external DHCP? > > If the range from your external dhcp and the range for your floating IPs > overlap, you can end up w/ multiple machines getting the same IP. > > > > > Also guide me some good links to setup the network of openstack. The > > default network I got after rdo installation is 172.24.4.0/24. 
> > This may help you: > > > http://community.redhat.com/blog/2015/01/rdo-quickstart-doing-the-neutron-dance/ > > Regards, Jason > > > > -- > > Vedsar Kushwaha > > M.Tech-Computational Science > > Indian Institute of Science > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Vedsar Kushwaha M.Tech-Computational Science Indian Institute of Science -------------- next part -------------- An HTML attachment was scrubbed... URL: From vedsarkushwaha at gmail.com Thu Feb 19 13:23:08 2015 From: vedsarkushwaha at gmail.com (Vedsar Kushwaha) Date: Thu, 19 Feb 2015 18:53:08 +0530 Subject: [Rdo-list] (no subject) In-Reply-To: References: <1947092576.18575674.1424284050762.JavaMail.zimbra@redhat.com> Message-ID: I changed neutron subnet-create --name public_subnet --enable_dhcp=False --allocation_pool start=10.16.37.225,end=10.16.37.254 --gateway=10.16.37.1 public 10.16.37.0/27 to neutron subnet-create --name public_subnet --enable_dhcp=False --allocation_pool start=10.16.37.225,end=10.16.37.254 --gateway=10.16.37.1 public 10.16.37.0/24 and successfully created. But I'm still not able to ping to instance. I launched an instance using demo user. Defined custom TCP and custom ICMP rule in security policy. Please help... On Thu, Feb 19, 2015 at 6:43 PM, Vedsar Kushwaha wrote: > Hi Jason > I followed the link > > http://community.redhat.com/blog/2015/01/rdo-quickstart-doing-the-neutron-dance/ > > And it struck at the command: > neutron subnet-create --name public_subnet --enable_dhcp=False > --allocation_pool start=10.16.37.225,end=10.16.37.254 --gateway=10.16.37.1 > public 10.16.37.0/27 > > Bad Request (HTTP 400) (Request-ID: > req-668832aa-67a1-4b0d-ad6e-7a2c8aae905f) > > Can you please guide me, what should I do next? 
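A likely cause of the `Bad Request (HTTP 400)` quoted above is simple subnet arithmetic: 10.16.37.0/27 covers only addresses .0 through .31, so an allocation pool of .225-.254 falls outside the subnet, whereas 10.16.37.0/24 contains it. This can be checked with Python's standard `ipaddress` module — the `pool_in_subnet` helper is just for illustration, it is not something Neutron exposes:

```python
from ipaddress import ip_address, ip_network

def pool_in_subnet(cidr, start, end):
    """True if the inclusive allocation pool lies inside the subnet CIDR."""
    net = ip_network(cidr)
    return ip_address(start) in net and ip_address(end) in net

# 10.16.37.0/27 spans only 10.16.37.0-10.16.37.31, so a pool of
# .225-.254 cannot fit; widening the subnet to /24 makes it valid.
print(pool_in_subnet("10.16.37.0/27", "10.16.37.225", "10.16.37.254"))  # False
print(pool_in_subnet("10.16.37.0/24", "10.16.37.225", "10.16.37.254"))  # True
```

Neutron validates the allocation pool against the subnet CIDR, which is consistent with the /27 request failing and the /24 request in the follow-up mail succeeding.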
> > On Wed, Feb 18, 2015 at 11:57 PM, Jason Brooks wrote: > >> >> >> ----- Original Message ----- >> > From: "Vedsar Kushwaha" >> > To: "Rdo-list at redhat.com" >> > Sent: Wednesday, February 18, 2015 10:13:20 AM >> > Subject: [Rdo-list] (no subject) >> > >> > Can anyone explain me the meaning of below line taken from the page " >> > https://openstack.redhat.com/Neutron_with_existing_external_network": >> > >> > "You need to recreate the public subnet with an allocation range >> outside of >> > your external DHCP range and set the gateway to the default gateway of >> the >> > external network. " >> > >> > My IP address is 10.16.37.222 and I'm on institute network with proxy. >> > >> > What if I give allocation range from the external DHCP? >> >> If the range from your external dhcp and the range for your floating IPs >> overlap, you can end up w/ multiple machines getting the same IP. >> >> > >> > Also guide me some good links to setup the network of openstack. The >> > default network I got after rdo installation is 172.24.4.0/24. >> >> This may help you: >> >> >> http://community.redhat.com/blog/2015/01/rdo-quickstart-doing-the-neutron-dance/ >> >> Regards, Jason >> >> >> > -- >> > Vedsar Kushwaha >> > M.Tech-Computational Science >> > Indian Institute of Science >> > >> > _______________________________________________ >> > Rdo-list mailing list >> > Rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > > > -- > Vedsar Kushwaha > M.Tech-Computational Science > Indian Institute of Science > -- Vedsar Kushwaha M.Tech-Computational Science Indian Institute of Science -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vedsarkushwaha at gmail.com Thu Feb 19 13:45:31 2015 From: vedsarkushwaha at gmail.com (Vedsar Kushwaha) Date: Thu, 19 Feb 2015 19:15:31 +0530 Subject: [Rdo-list] (no subject) In-Reply-To: References: <1947092576.18575674.1424284050762.JavaMail.zimbra@redhat.com> Message-ID: I changed neutron subnet-create --name public_subnet --enable_dhcp=False --allocation_pool start=10.16.37.225,end=10.16.37.254 --gateway=10.16.37.1 public 10.16.37.0/27 to neutron subnet-create --name public_subnet --enable_dhcp=False --allocation_pool start=10.16.37.225,end=10.16.37.254 --gateway=10.16.37.1 public 10.16.37.0/24 and it successfully created. But I'm still not able to ping to instance. I launched an instance using demo user. Defined custom TCP and custom ICMP rule in security policy. Please help... The error is [root at localhost ~]# ping 10.16.37.228 PING 10.16.37.228 (10.16.37.228) 56(84) bytes of data. From 10.16.37.222 icmp_seq=1 Destination Host Unreachable 10.16.37.228 is the floating IP address of instance. On Thu, Feb 19, 2015 at 6:53 PM, Vedsar Kushwaha wrote: > I changed > > neutron subnet-create --name public_subnet --enable_dhcp=False > --allocation_pool start=10.16.37.225,end=10.16.37.254 --gateway=10.16.37.1 public 10.16.37.0/27 > > to > > neutron subnet-create --name public_subnet --enable_dhcp=False > --allocation_pool start=10.16.37.225,end=10.16.37.254 --gateway=10.16.37.1 public 10.16.37.0/24 > > and successfully created. > > But I'm still not able to ping to instance. I launched an instance using > demo user. Defined custom TCP and custom ICMP rule in security policy. > > Please help... 
> > > On Thu, Feb 19, 2015 at 6:43 PM, Vedsar Kushwaha > wrote: > >> Hi Jason >> I followed the link >> >> http://community.redhat.com/blog/2015/01/rdo-quickstart-doing-the-neutron-dance/ >> >> And it struck at the command: >> neutron subnet-create --name public_subnet --enable_dhcp=False >> --allocation_pool start=10.16.37.225,end=10.16.37.254 --gateway=10.16.37.1 >> public 10.16.37.0/27 >> >> Bad Request (HTTP 400) (Request-ID: >> req-668832aa-67a1-4b0d-ad6e-7a2c8aae905f) >> >> Can you please guide me, what should I do next? >> >> On Wed, Feb 18, 2015 at 11:57 PM, Jason Brooks >> wrote: >> >>> >>> >>> ----- Original Message ----- >>> > From: "Vedsar Kushwaha" >>> > To: "Rdo-list at redhat.com" >>> > Sent: Wednesday, February 18, 2015 10:13:20 AM >>> > Subject: [Rdo-list] (no subject) >>> > >>> > Can anyone explain me the meaning of below line taken from the page " >>> > https://openstack.redhat.com/Neutron_with_existing_external_network": >>> > >>> > "You need to recreate the public subnet with an allocation range >>> outside of >>> > your external DHCP range and set the gateway to the default gateway of >>> the >>> > external network. " >>> > >>> > My IP address is 10.16.37.222 and I'm on institute network with proxy. >>> > >>> > What if I give allocation range from the external DHCP? >>> >>> If the range from your external dhcp and the range for your floating IPs >>> overlap, you can end up w/ multiple machines getting the same IP. >>> >>> > >>> > Also guide me some good links to setup the network of openstack. The >>> > default network I got after rdo installation is 172.24.4.0/24. 
>>> >>> This may help you: >>> >>> >>> http://community.redhat.com/blog/2015/01/rdo-quickstart-doing-the-neutron-dance/ >>> >>> Regards, Jason >>> >>> >>> > -- >>> > Vedsar Kushwaha >>> > M.Tech-Computational Science >>> > Indian Institute of Science >>> > >>> > _______________________________________________ >>> > Rdo-list mailing list >>> > Rdo-list at redhat.com >>> > https://www.redhat.com/mailman/listinfo/rdo-list >>> > >>> > To unsubscribe: rdo-list-unsubscribe at redhat.com >>> >> >> >> >> -- >> Vedsar Kushwaha >> M.Tech-Computational Science >> Indian Institute of Science >> > > > > -- > Vedsar Kushwaha > M.Tech-Computational Science > Indian Institute of Science > -- Vedsar Kushwaha M.Tech-Computational Science Indian Institute of Science -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at rhcloud.com Thu Feb 19 18:14:40 2015 From: no-reply at rhcloud.com (address not configured yet) Date: Thu, 19 Feb 2015 13:14:40 -0500 (EST) Subject: [Rdo-list] khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 42 - Failure! Message-ID: <20755174.11.1424369681008.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 42 - Failure: Check console output at https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal/42/ to view the results. From no-reply at rhcloud.com Fri Feb 20 15:31:12 2015 From: no-reply at rhcloud.com (address not configured yet) Date: Fri, 20 Feb 2015 10:31:12 -0500 (EST) Subject: [Rdo-list] khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 43 - Failure! 
Message-ID: <33486292.13.1424446272625.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 43 - Failure: Check console output at https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal/43/ to view the results. From pasquale.salza at gmail.com Fri Feb 20 17:11:35 2015 From: pasquale.salza at gmail.com (Pasquale Salza) Date: Fri, 20 Feb 2015 18:11:35 +0100 Subject: [Rdo-list] I can't get access to VM instances Message-ID: Hi there, I have a lot of problems with RDO/OpenStack configuration. Firstly, I need to describe my network situation. I have 7 machines, each of them with 2 NICs. I would like to use one machine as a controller/network node and the others as compute nodes. I would like to use the eth0 to connect nodes to internet (and get access by remote sessions) with the network "172.16.58.0/24", in which I have just 7 available IPs, and eth1 as configuration network on the network 10.42.100.0/24. This is my current configuration, for each node (varying the IPs on each machine): eth0: DEVICE=eth0 TYPE=Ethernet ONBOOT=yes BOOTPROTO=static IPADDR=172.16.58.50 NETMASK=255.255.255.0 GATEWAY=172.16.58.254 DNS1=172.16.58.50 DOMAIN=### DEFROUTE="yes" eth1: DEVICE=eth1 TYPE=OVSPort DEVICETYPE=ovs OVS_BRIDGE=br-ex ONBOOT=yes br-ex: DEVICE=br-ex DEVICETYPE=ovs TYPE=OVSBridge BOOTPROTO=static IPADDR=10.42.100.1 NETMASK=255.255.255.0 ONBOOT=yes I'd like to have instances on 10.42.200.0/24 virtual private network and the remaining IPs of 10.42.100.0/24 network as floating IPs. 
These are the relevant parts of my answers.txt file: CONFIG_CONTROLLER_HOST=10.42.100.1 CONFIG_COMPUTE_HOSTS=10.42.100.10,10.42.100.11,10.42.100.12,10.42.100.13,10.42.100.14,10.42.100.15 CONFIG_NETWORK_HOSTS=10.42.100.1 CONFIG_AMQP_HOST=10.42.100.1 CONFIG_MARIADB_HOST=10.42.100.1 CONFIG_NOVA_COMPUTE_PRIVIF=eth1 CONFIG_NOVA_NETWORK_PUBIF=eth1 CONFIG_NOVA_NETWORK_PRIVIF=eth1 CONFIG_NOVA_NETWORK_FIXEDRANGE=10.42.200.0/24 CONFIG_NOVA_NETWORK_FLOATRANGE=10.42.100.0/24 CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan CONFIG_NEUTRON_ML2_VNI_RANGES=10:100 CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS= CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS= CONFIG_NEUTRON_OVS_BRIDGE_IFACES= CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1 After the installation, I configure the network like this: neutron router-create router neutron net-create private neutron subnet-create private 10.42.200.0/24 --name private-subnet neutron router-interface-add router private-subnet neutron net-create public --router:external=True neutron subnet-create public 10.42.100.0/24 --name public-subnet --enable_dhcp=False --allocation-pool start=10.42.100.100,end=10.42.100.200 --no-gateway neutron router-gateway-set router public I'm able to launch instances but I can't get access (ping/ssh) to them. I don't know if I'm doing something wrong starting from planning. Please, help me! -------------- next part -------------- An HTML attachment was scrubbed... URL: From roxenham at redhat.com Fri Feb 20 17:30:06 2015 From: roxenham at redhat.com (Rhys Oxenham) Date: Fri, 20 Feb 2015 17:30:06 +0000 Subject: [Rdo-list] I can't get access to VM instances In-Reply-To: References: Message-ID: <5000D3CF-6FC2-4A52-B1FC-76BC8843F540@redhat.com> Hi Pasquale, Did you modify your security group rules to allow ICMP and/or 22:tcp access? Many thanks Rhys > On 20 Feb 2015, at 17:11, Pasquale Salza wrote: > > Hi there, I have a lot of problems with RDO/OpenStack configuration. 
Firstly, I need to describe my network situation. > > I have 7 machine, each of them with 2 NIC. I would like to use one machine as a controller/network node and the others as compute nodes. > > I would like to use the eth0 to connect nodes to internet (and get access by remote sessions) with the network "172.16.58.0/24", in which I have just 7 available IPs, and eth1 as configuration network on the network 10.42.100.0/42. > > This is my current configuration, for each node (varying the IPs on each machine): > > eth0: > DEVICE=eth0 > TYPE=Ethernet > ONBOOT=yes > BOOTPROTO=static > IPADDR=172.16.58.50 > NETMASK=255.255.255.0 > GATEWAY=172.16.58.254 > DNS1=172.16.58.50 > DOMAIN=### > DEFROUTE="yes" > > eth1: > DEVICE=eth1 > TYPE=OVSPort > DEVICETYPE=ovs > OVS_BRIDGE=br-ex > ONBOOT=yes > > br-ex: > DEVICE=br-ex > DEVICETYPE=ovs > TYPE=OVSBridge > BOOTPROTO=static > IPADDR=10.42.100.1 > NETMASK=255.255.255.0 > ONBOOT=yes > > I'd like to have instances on 10.42.200.0/24 virtual private network and the remaining IPs of 10.42.100.0/24 network as floating IPs. 
> > These are the relevant parts of my answers.txt file: > > CONFIG_CONTROLLER_HOST=10.42.100.1 > CONFIG_COMPUTE_HOSTS=10.42.100.10,10.42.100.11,10.42.100.12,10.42.100.13,10.42.100.14,10.42.100.15 > CONFIG_NETWORK_HOSTS=10.42.100.1 > CONFIG_AMQP_HOST=10.42.100.1 > CONFIG_MARIADB_HOST=10.42.100.1 > CONFIG_NOVA_COMPUTE_PRIVIF=eth1 > CONFIG_NOVA_NETWORK_PUBIF=eth1 > CONFIG_NOVA_NETWORK_PRIVIF=eth1 > CONFIG_NOVA_NETWORK_FIXEDRANGE=10.42.200.0/24 > CONFIG_NOVA_NETWORK_FLOATRANGE=10.42.100.0/24 > CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex > CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan > CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan > CONFIG_NEUTRON_ML2_VNI_RANGES=10:100 > CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS= > CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS= > CONFIG_NEUTRON_OVS_BRIDGE_IFACES= > CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1 > > After the installation, I configure the network like this: > > neutron router-create router > neutron net-create private > neutron subnet-create private 10.42.200.0/24 --name private-subnet > neutron router-interface-add router private-subnet > neutron net-create public --router:external=True > neutron subnet-create public 10.42.100.0/24 --name public-subnet --enable_dhcp=False --allocation-pool start=10.42.100.100,end=10.42.100.200 --no-gateway > neutron router-gateway-set router public > > I'm able to launch instances but I can't get access (ping/ssh) to them. > > I don't know if I'm doing something wrong starting from planning. > > Please, help me! > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From no-reply at rhcloud.com Fri Feb 20 18:08:48 2015 From: no-reply at rhcloud.com (address not configured yet) Date: Fri, 20 Feb 2015 13:08:48 -0500 (EST) Subject: [Rdo-list] khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 44 - Still Failing! 
In-Reply-To: <33486292.13.1424446272625.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> References: <33486292.13.1424446272625.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> Message-ID: <7878449.15.1424455729073.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 44 - Still Failing: Check console output at https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal/44/ to view the results. From no-reply at rhcloud.com Fri Feb 20 18:58:57 2015 From: no-reply at rhcloud.com (address not configured yet) Date: Fri, 20 Feb 2015 13:58:57 -0500 (EST) Subject: [Rdo-list] khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 45 - Failure! Message-ID: <24422672.17.1424458737597.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 45 - Failure: Check console output at https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal/45/ to view the results. From no-reply at rhcloud.com Fri Feb 20 20:52:44 2015 From: no-reply at rhcloud.com (address not configured yet) Date: Fri, 20 Feb 2015 15:52:44 -0500 (EST) Subject: [Rdo-list] khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 46 - Still Failing! 
In-Reply-To: <24422672.17.1424458737597.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> References: <24422672.17.1424458737597.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> Message-ID: <8537744.19.1424465564810.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 46 - Still Failing: Check console output at https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal/46/ to view the results. From pasquale.salza at gmail.com Fri Feb 20 22:07:02 2015 From: pasquale.salza at gmail.com (Pasquale Salza) Date: Fri, 20 Feb 2015 23:07:02 +0100 Subject: [Rdo-list] I can't get access to VM instances In-Reply-To: <5000D3CF-6FC2-4A52-B1FC-76BC8843F540@redhat.com> References: <5000D3CF-6FC2-4A52-B1FC-76BC8843F540@redhat.com> Message-ID: Hi Rhys, I suppose so, because these are my iptables rules: iptables -F iptables -t nat -F iptables -P INPUT ACCEPT iptables -P OUTPUT ACCEPT iptables -P FORWARD ACCEPT iptables -A INPUT -d 172.16.58.0/24 -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 -p tcp --dport ssh -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 -p tcp --dport www -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 -p tcp --dport pptp -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 -p tcp --sport domain -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 -p tcp --dport domain -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 -p udp --sport domain -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 -p udp --dport domain -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 -p gre -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 -p icmp -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 -j DROP iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE service iptables save Firstly, do you think I planned the network organisation well? 
Do you have other suggestion (best practices) with 2 interfaces? 2015-02-20 18:30 GMT+01:00 Rhys Oxenham : > Hi Pasquale, > > Did you modify your security group rules to allow ICMP and/or 22:tcp > access? > > Many thanks > Rhys > > > On 20 Feb 2015, at 17:11, Pasquale Salza > wrote: > > > > Hi there, I have a lot of problems with RDO/OpenStack configuration. > Firstly, I need to describe my network situation. > > > > I have 7 machine, each of them with 2 NIC. I would like to use one > machine as a controller/network node and the others as compute nodes. > > > > I would like to use the eth0 to connect nodes to internet (and get > access by remote sessions) with the network "172.16.58.0/24", in which I > have just 7 available IPs, and eth1 as configuration network on the network > 10.42.100.0/42. > > > > This is my current configuration, for each node (varying the IPs on each > machine): > > > > eth0: > > DEVICE=eth0 > > TYPE=Ethernet > > ONBOOT=yes > > BOOTPROTO=static > > IPADDR=172.16.58.50 > > NETMASK=255.255.255.0 > > GATEWAY=172.16.58.254 > > DNS1=172.16.58.50 > > DOMAIN=### > > DEFROUTE="yes" > > > > eth1: > > DEVICE=eth1 > > TYPE=OVSPort > > DEVICETYPE=ovs > > OVS_BRIDGE=br-ex > > ONBOOT=yes > > > > br-ex: > > DEVICE=br-ex > > DEVICETYPE=ovs > > TYPE=OVSBridge > > BOOTPROTO=static > > IPADDR=10.42.100.1 > > NETMASK=255.255.255.0 > > ONBOOT=yes > > > > I'd like to have instances on 10.42.200.0/24 virtual private network > and the remaining IPs of 10.42.100.0/24 network as floating IPs. 
> > > > These are the relevant parts of my answers.txt file: > > > > CONFIG_CONTROLLER_HOST=10.42.100.1 > > > CONFIG_COMPUTE_HOSTS=10.42.100.10,10.42.100.11,10.42.100.12,10.42.100.13,10.42.100.14,10.42.100.15 > > CONFIG_NETWORK_HOSTS=10.42.100.1 > > CONFIG_AMQP_HOST=10.42.100.1 > > CONFIG_MARIADB_HOST=10.42.100.1 > > CONFIG_NOVA_COMPUTE_PRIVIF=eth1 > > CONFIG_NOVA_NETWORK_PUBIF=eth1 > > CONFIG_NOVA_NETWORK_PRIVIF=eth1 > > CONFIG_NOVA_NETWORK_FIXEDRANGE=10.42.200.0/24 > > CONFIG_NOVA_NETWORK_FLOATRANGE=10.42.100.0/24 > > CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex > > CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan > > CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan > > CONFIG_NEUTRON_ML2_VNI_RANGES=10:100 > > CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS= > > CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS= > > CONFIG_NEUTRON_OVS_BRIDGE_IFACES= > > CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1 > > > > After the installation, I configure the network like this: > > > > neutron router-create router > > neutron net-create private > > neutron subnet-create private 10.42.200.0/24 --name private-subnet > > neutron router-interface-add router private-subnet > > neutron net-create public --router:external=True > > neutron subnet-create public 10.42.100.0/24 --name public-subnet > --enable_dhcp=False --allocation-pool start=10.42.100.100,end=10.42.100.200 > --no-gateway > > neutron router-gateway-set router public > > > > I'm able to launch instances but I can't get access (ping/ssh) to them. > > > > I don't know if I'm doing something wrong starting from planning. > > > > Please, help me! 
> > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > -- Pasquale Salza e-mail: pasquale.salza at gmail.com phone: +39 393 4415978 fax: +39 089 8422939 skype: pasquale.salza linkedin: http://it.linkedin.com/in/psalza/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsneddon at redhat.com Fri Feb 20 22:19:49 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Fri, 20 Feb 2015 14:19:49 -0800 Subject: [Rdo-list] I can't get access to VM instances In-Reply-To: References: <5000D3CF-6FC2-4A52-B1FC-76BC8843F540@redhat.com> Message-ID: <54E7B305.9030102@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 02/20/2015 02:07 PM, Pasquale Salza wrote: > Hi Rhys, I suppose so, because these are my iptables rules: > > iptables -F iptables -t nat -F iptables -P INPUT ACCEPT iptables -P > OUTPUT ACCEPT iptables -P FORWARD ACCEPT iptables -A INPUT -d > 172.16.58.0/24 -m state --state > ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > -p tcp --dport ssh -j ACCEPT iptables -A > INPUT -d 172.16.58.0/24 -p tcp --dport www > -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > -p tcp --dport pptp -j ACCEPT iptables -A > INPUT -d 172.16.58.0/24 -p tcp --sport > domain -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > -p tcp --dport domain -j ACCEPT iptables -A > INPUT -d 172.16.58.0/24 -p udp --sport > domain -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > -p udp --dport domain -j ACCEPT iptables -A > INPUT -d 172.16.58.0/24 -p gre -j ACCEPT > iptables -A INPUT -d 172.16.58.0/24 -p icmp > -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > -j DROP iptables -t nat -A POSTROUTING -o > eth0 -j MASQUERADE service iptables save > > Firstly, do you think I planned the network organisation well? Do > you have other suggestion (best practices) with 2 interfaces? 
> > > 2015-02-20 18:30 GMT+01:00 Rhys Oxenham >: > > Hi Pasquale, > > Did you modify your security group rules to allow ICMP and/or > 22:tcp access? > > Many thanks Rhys > >> On 20 Feb 2015, at 17:11, Pasquale Salza >> > wrote: >> >> Hi there, I have a lot of problems with RDO/OpenStack > configuration. Firstly, I need to describe my network situation. >> >> I have 7 machine, each of them with 2 NIC. I would like to use >> one > machine as a controller/network node and the others as compute > nodes. >> >> I would like to use the eth0 to connect nodes to internet (and >> get > access by remote sessions) with the network "172.16.58.0/24 > ", in which I have just 7 available IPs, > and eth1 as configuration network on the network 10.42.100.0/42 > . >> >> This is my current configuration, for each node (varying the IPs > on each machine): >> >> eth0: DEVICE=eth0 TYPE=Ethernet ONBOOT=yes BOOTPROTO=static >> IPADDR=172.16.58.50 NETMASK=255.255.255.0 GATEWAY=172.16.58.254 >> DNS1=172.16.58.50 DOMAIN=### DEFROUTE="yes" >> >> eth1: DEVICE=eth1 TYPE=OVSPort DEVICETYPE=ovs OVS_BRIDGE=br-ex >> ONBOOT=yes >> >> br-ex: DEVICE=br-ex DEVICETYPE=ovs TYPE=OVSBridge >> BOOTPROTO=static IPADDR=10.42.100.1 NETMASK=255.255.255.0 >> ONBOOT=yes >> >> I'd like to have instances on 10.42.200.0/24 > virtual private network and the remaining > IPs of 10.42.100.0/24 network as floating > IPs. 
>> >> These are the relevant parts of my answers.txt file: >> >> CONFIG_CONTROLLER_HOST=10.42.100.1 >> > CONFIG_COMPUTE_HOSTS=10.42.100.10,10.42.100.11,10.42.100.12,10.42.100.13,10.42.100.14,10.42.100.15 > > CONFIG_NETWORK_HOSTS=10.42.100.1 >> CONFIG_AMQP_HOST=10.42.100.1 CONFIG_MARIADB_HOST=10.42.100.1 >> CONFIG_NOVA_COMPUTE_PRIVIF=eth1 CONFIG_NOVA_NETWORK_PUBIF=eth1 >> CONFIG_NOVA_NETWORK_PRIVIF=eth1 >> CONFIG_NOVA_NETWORK_FIXEDRANGE=10.42.200.0/24 > >> CONFIG_NOVA_NETWORK_FLOATRANGE=10.42.100.0/24 > >> CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex >> CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan >> CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan >> CONFIG_NEUTRON_ML2_VNI_RANGES=10:100 >> CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS= >> CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS= >> CONFIG_NEUTRON_OVS_BRIDGE_IFACES= >> CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1 >> >> After the installation, I configure the network like this: >> >> neutron router-create router neutron net-create private neutron >> subnet-create private 10.42.200.0/24 > --name private-subnet >> neutron router-interface-add router private-subnet neutron >> net-create public --router:external=True neutron subnet-create >> public 10.42.100.0/24 > --name public-subnet --enable_dhcp=False > --allocation-pool start=10.42.100.100,end=10.42.100.200 > --no-gateway >> neutron router-gateway-set router public >> >> I'm able to launch instances but I can't get access (ping/ssh) >> to > them. >> >> I don't know if I'm doing something wrong starting from >> planning. >> >> Please, help me! 
>> >> _______________________________________________ Rdo-list mailing >> list Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > -- Pasquale Salza > > e-mail: pasquale.salza at gmail.com > phone: +39 393 4415978 fax: +39 089 8422939 skype: pasquale.salza > linkedin: http://it.linkedin.com/in/psalza/ > > > _______________________________________________ Rdo-list mailing > list Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > Those look like the iptables rule on the hypervisor. Rhys is talking about the Neutron security group rules. By default, ssh into VMs is not allowed. You need to permit ICMP and SSH in the security rules on the neutron network. I don't see anything wrong with your network architecture at first glance, but floating IPs can be tricky at first. Start with basic VM-to-VM connectivity and add on from there. Good luck! - -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | @dxs on twitter -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJU57MFAAoJEFkV3ypsGNbjlrcIAMc+Bp39+BIEhNm7rDjZZ/4m wcf/ti9vmeMuCyjTAwRUIHUO1l5ZnhoBLh6vdZaPXABEvC1bFT5U7V2Jeyt1z207 1kRrPxUV5mto5/NLOVJIvxR5qKdDGS0O7QPus9ZNeIWEIwQ/gmpqfm6I3PrQUOlq dqTVAUt5FoKCtPrGilbjX/6m5NEYa9kPO2vsr9C1OTfa9VYEn4LfUlHaQYDg7g/Q 1TQWlvWiMiHGYTzMqsWQdEb/CQosRfc2+Mf5eqO9Ah5CWrVZx14dDL8gQd1vfLGr sl3ByfVLwBTv3NiVtd1E+E4yOGceOoQ0xn0ysN30DhGxZlfob9ApV6m8PhMdeek= =vacc -----END PGP SIGNATURE----- From whayutin at redhat.com Fri Feb 20 22:34:13 2015 From: whayutin at redhat.com (whayutin) Date: Fri, 20 Feb 2015 17:34:13 -0500 Subject: [Rdo-list] khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 46 - Still Failing! 
In-Reply-To: <8537744.19.1424465564810.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> References: <24422672.17.1424458737597.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> <8537744.19.1424465564810.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> Message-ID: <1424471653.3724.13.camel@redhat.com> On Fri, 2015-02-20 at 15:52 -0500, address not configured yet wrote: > khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 46 - Still Failing: > > Check console output at https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal/46/ to view the results. > _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com FYI.. changed this to spam only on the first failure. There are some technical issues on trystack atm. From pasquale.salza at gmail.com Fri Feb 20 23:29:06 2015 From: pasquale.salza at gmail.com (Pasquale Salza) Date: Sat, 21 Feb 2015 00:29:06 +0100 Subject: [Rdo-list] I can't get access to VM instances In-Reply-To: <54E7B305.9030102@redhat.com> References: <5000D3CF-6FC2-4A52-B1FC-76BC8843F540@redhat.com> <54E7B305.9030102@redhat.com> Message-ID: Whoops! I figured it out just a few seconds after I sent the mail! Ok, tomorrow I'll try it. :) I'd like to share how I want to organise my network in order to get some advice.

Let's say I have 7 machines and 7 spare IPs on the network 172.16.58.0/24, which are also associated with 7 public (internet) IPs.

I'd like to reserve 6 IPs for 6 VMs I could instantiate on OpenStack.

So I planned to do this: the controller node has a static IP on eth0 (one of the 7 in the 172.16.58.0/24 network), so that I can access it from outside. I add an alias eth0:0 with which I connect the controller to the Management network of OpenStack, the 10.0.1.0/24 network. Also on the controller, I statically set the IP for eth1 to one of the floating IPs of the 192.168.0.0/16 network. With iptables, I add a rule forwarding everything between eth0 and eth1, so the other nodes can get Internet access on network 10.0.1.0/24.

On the compute nodes I set eth0 to one of the IPs on the 10.0.1.0/24 management network and eth1 to one on 192.168.0.0/16.

On each node I put the bridge on eth1.

With RDO I put virtualisation and tunneling only on eth1.

When the installation has finished, I create a private neutron network 10.100.0.0/16 and two public networks of floating IPs. The first is 192.168.0.0/24, for any kind of VM. The other is the 172.16.58.0/24 network, limited to the 6 available IPs, with which I can put virtual machines on the Internet.

Does it make sense, or am I making some mistakes? Do you have any other idea?

Thank you very much indeed!

Pasquale

-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 02/20/2015 02:07 PM, Pasquale Salza wrote: > Hi Rhys, I suppose so, because these are my iptables rules: > > iptables -F iptables -t nat -F iptables -P INPUT ACCEPT iptables -P > OUTPUT ACCEPT iptables -P FORWARD ACCEPT iptables -A INPUT -d > 172.16.58.0/24 -m state --state > ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > -p tcp --dport ssh -j ACCEPT iptables -A > INPUT -d 172.16.58.0/24 -p tcp --dport www > -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > -p tcp --dport pptp -j ACCEPT iptables -A > INPUT -d 172.16.58.0/24 -p tcp --sport > domain -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > -p tcp --dport domain -j ACCEPT iptables -A > INPUT -d 172.16.58.0/24 -p udp --sport > domain -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > -p udp --dport domain -j ACCEPT iptables -A > INPUT -d 172.16.58.0/24 -p gre -j ACCEPT > iptables -A INPUT -d 172.16.58.0/24 -p icmp > -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > -j DROP 
iptables -t nat -A POSTROUTING -o > eth0 -j MASQUERADE service iptables save > > Firstly, do you think I planned the network organisation well? Do > you have other suggestion (best practices) with 2 interfaces? > > > 2015-02-20 18:30 GMT+01:00 Rhys Oxenham >: > > Hi Pasquale, > > Did you modify your security group rules to allow ICMP and/or > 22:tcp access? > > Many thanks Rhys > >> On 20 Feb 2015, at 17:11, Pasquale Salza >> > wrote: >> >> Hi there, I have a lot of problems with RDO/OpenStack > configuration. Firstly, I need to describe my network situation. >> >> I have 7 machine, each of them with 2 NIC. I would like to use >> one > machine as a controller/network node and the others as compute > nodes. >> >> I would like to use the eth0 to connect nodes to internet (and >> get > access by remote sessions) with the network "172.16.58.0/24 > ", in which I have just 7 available IPs, > and eth1 as configuration network on the network 10.42.100.0/42 > . >> >> This is my current configuration, for each node (varying the IPs > on each machine): >> >> eth0: DEVICE=eth0 TYPE=Ethernet ONBOOT=yes BOOTPROTO=static >> IPADDR=172.16.58.50 NETMASK=255.255.255.0 GATEWAY=172.16.58.254 >> DNS1=172.16.58.50 DOMAIN=### DEFROUTE="yes" >> >> eth1: DEVICE=eth1 TYPE=OVSPort DEVICETYPE=ovs OVS_BRIDGE=br-ex >> ONBOOT=yes >> >> br-ex: DEVICE=br-ex DEVICETYPE=ovs TYPE=OVSBridge >> BOOTPROTO=static IPADDR=10.42.100.1 NETMASK=255.255.255.0 >> ONBOOT=yes >> >> I'd like to have instances on 10.42.200.0/24 > virtual private network and the remaining > IPs of 10.42.100.0/24 network as floating > IPs. 
>> >> These are the relevant parts of my answers.txt file: >> >> CONFIG_CONTROLLER_HOST=10.42.100.1 >> > CONFIG_COMPUTE_HOSTS=10.42.100.10,10.42.100.11,10.42.100.12,10.42.100.13,10.42.100.14,10.42.100.15 > > CONFIG_NETWORK_HOSTS=10.42.100.1 >> CONFIG_AMQP_HOST=10.42.100.1 CONFIG_MARIADB_HOST=10.42.100.1 >> CONFIG_NOVA_COMPUTE_PRIVIF=eth1 CONFIG_NOVA_NETWORK_PUBIF=eth1 >> CONFIG_NOVA_NETWORK_PRIVIF=eth1 >> CONFIG_NOVA_NETWORK_FIXEDRANGE=10.42.200.0/24 > >> CONFIG_NOVA_NETWORK_FLOATRANGE=10.42.100.0/24 > >> CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex >> CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan >> CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan >> CONFIG_NEUTRON_ML2_VNI_RANGES=10:100 >> CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS= >> CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS= >> CONFIG_NEUTRON_OVS_BRIDGE_IFACES= >> CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1 >> >> After the installation, I configure the network like this: >> >> neutron router-create router neutron net-create private neutron >> subnet-create private 10.42.200.0/24 > --name private-subnet >> neutron router-interface-add router private-subnet neutron >> net-create public --router:external=True neutron subnet-create >> public 10.42.100.0/24 > --name public-subnet --enable_dhcp=False > --allocation-pool start=10.42.100.100,end=10.42.100.200 > --no-gateway >> neutron router-gateway-set router public >> >> I'm able to launch instances but I can't get access (ping/ssh) >> to > them. >> >> I don't know if I'm doing something wrong starting from >> planning. >> >> Please, help me! 
>> >> _______________________________________________ Rdo-list mailing >> list Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > -- Pasquale Salza > > e-mail: pasquale.salza at gmail.com > phone: +39 393 4415978 fax: +39 089 8422939 skype: pasquale.salza > linkedin: http://it.linkedin.com/in/psalza/ > > > _______________________________________________ Rdo-list mailing > list Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > Those look like the iptables rule on the hypervisor. Rhys is talking about the Neutron security group rules. By default, ssh into VMs is not allowed. You need to permit ICMP and SSH in the security rules on the neutron network. I don't see anything wrong with your network architecture at first glance, but floating IPs can be tricky at first. Start with basic VM-to-VM connectivity and add on from there. Good luck! - -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | @dxs on twitter -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJU57MFAAoJEFkV3ypsGNbjlrcIAMc+Bp39+BIEhNm7rDjZZ/4m wcf/ti9vmeMuCyjTAwRUIHUO1l5ZnhoBLh6vdZaPXABEvC1bFT5U7V2Jeyt1z207 1kRrPxUV5mto5/NLOVJIvxR5qKdDGS0O7QPus9ZNeIWEIwQ/gmpqfm6I3PrQUOlq dqTVAUt5FoKCtPrGilbjX/6m5NEYa9kPO2vsr9C1OTfa9VYEn4LfUlHaQYDg7g/Q 1TQWlvWiMiHGYTzMqsWQdEb/CQosRfc2+Mf5eqO9Ah5CWrVZx14dDL8gQd1vfLGr sl3ByfVLwBTv3NiVtd1E+E4yOGceOoQ0xn0ysN30DhGxZlfob9ApV6m8PhMdeek= =vacc -----END PGP SIGNATURE----- _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dsneddon at redhat.com Sat Feb 21 00:35:35 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Fri, 20 Feb 2015 16:35:35 -0800 Subject: [Rdo-list] I can't get access to VM instances In-Reply-To: References: <5000D3CF-6FC2-4A52-B1FC-76BC8843F540@redhat.com> <54E7B305.9030102@redhat.com> Message-ID: <54E7D2D7.8070203@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 02/20/2015 03:29 PM, Pasquale Salza wrote: > Whops! I figured out just few seconds after I sent the mail! Ok, > tomorrow I'll try with it. :) I'd like to share how I want to > organise my network in order to get some advices. > > Let's say I have 7 machines and 7 spare IPs on the network > 172.16.58.0/24 which are also associated to > 7 public (internet) IPs. > > I'd like to reserve 6 IPs for 6 VMs I could instanciate on > OpenStack. > > So I planned to do this: the controller node has a static IP on > eth0 of the 7 in 172.16.58.50/24 network > so as I can access it from outside. I add an alias eth0:0 with > which I connect the controller to the Management network of > OpenStack, the 10.0.1.0/24 network. Also on > the controller, I set statically the IP for eth1 with one of float > IPs network 192.168.0.0/16 network. With > iptables, I add the rule of forwarding everithing on eth0 and > eth1, so the other nodes can get Internet access on network > 10.0.1.0/24 . > > On the compute nodes I set eth0 as one of IPs on 10.0.1.0/24 > management network and eth1 as one on > 192.168.0.0/16 . > > Om each node I put the bridge on eth1. > > With RDO I put virtualisation and tunneling only on eth1. > > When the installatation has finished, I create a private neutron > network 10.100.0.0/16 and two public > networks of floating IPs. The first is 192.168.0.0/24 > for any kind of VM. The other is the > 172.16.58.0/24 network, limited to the 6 > available IPs with which I can put virtual machines on Internet. > > Does it make sense or I'm doing some mistakes? Do you have any > other idea? 
> > Thank you very much indeed! > > Pasquale > > On 02/20/2015 02:07 PM, Pasquale Salza wrote: >> Hi Rhys, I suppose so, because these are my iptables rules: > >> iptables -F iptables -t nat -F iptables -P INPUT ACCEPT iptables >> -P OUTPUT ACCEPT iptables -P FORWARD ACCEPT iptables -A INPUT -d >> 172.16.58.0/24 >> -m > state --state >> ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -d >> 172.16.58.0/24 > >> -p tcp --dport ssh -j ACCEPT iptables -A >> INPUT -d 172.16.58.0/24 > -p tcp --dport www >> -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 >> -p tcp --dport >> pptp -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 >> > -p tcp --sport >> domain -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > >> -p tcp --dport domain -j ACCEPT iptables >> -A INPUT -d 172.16.58.0/24 > -p udp --sport >> domain -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > >> -p udp --dport domain -j ACCEPT iptables >> -A INPUT -d 172.16.58.0/24 > -p gre -j ACCEPT >> iptables -A INPUT -d 172.16.58.0/24 > -p icmp >> -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 >> -j DROP iptables >> -t nat -A POSTROUTING -o eth0 -j MASQUERADE service iptables >> save > >> Firstly, do you think I planned the network organisation well? >> Do you have other suggestion (best practices) with 2 interfaces? > > >> 2015-02-20 18:30 GMT+01:00 Rhys Oxenham >> >>: > >> Hi Pasquale, > >> Did you modify your security group rules to allow ICMP and/or >> 22:tcp access? > >> Many thanks Rhys > >>> On 20 Feb 2015, at 17:11, Pasquale Salza >>> >> > >> > wrote: >>> >>> Hi there, I have a lot of problems with RDO/OpenStack >> configuration. Firstly, I need to describe my network situation. >>> >>> I have 7 machine, each of them with 2 NIC. I would like to use >>> one >> machine as a controller/network node and the others as compute >> nodes. 
>>> >>> I would like to use the eth0 to connect nodes to internet (and >>> get >> access by remote sessions) with the network "172.16.58.0/24 > >> ", in which I have just 7 available IPs, >> and eth1 as configuration network on the network 10.42.100.0/42 > >> . >>> >>> This is my current configuration, for each node (varying the >>> IPs >> on each machine): >>> >>> eth0: DEVICE=eth0 TYPE=Ethernet ONBOOT=yes BOOTPROTO=static >>> IPADDR=172.16.58.50 NETMASK=255.255.255.0 >>> GATEWAY=172.16.58.254 DNS1=172.16.58.50 DOMAIN=### >>> DEFROUTE="yes" >>> >>> eth1: DEVICE=eth1 TYPE=OVSPort DEVICETYPE=ovs OVS_BRIDGE=br-ex >>> ONBOOT=yes >>> >>> br-ex: DEVICE=br-ex DEVICETYPE=ovs TYPE=OVSBridge >>> BOOTPROTO=static IPADDR=10.42.100.1 NETMASK=255.255.255.0 >>> ONBOOT=yes >>> >>> I'd like to have instances on 10.42.200.0/24 >>> >> virtual private network and the >> remaining IPs of 10.42.100.0/24 >> > network as floating >> IPs. >>> >>> These are the relevant parts of my answers.txt file: >>> >>> CONFIG_CONTROLLER_HOST=10.42.100.1 >>> > > CONFIG_COMPUTE_HOSTS=10.42.100.10,10.42.100.11,10.42.100.12,10.42.100.13,10.42.100.14,10.42.100.15 > > >> CONFIG_NETWORK_HOSTS=10.42.100.1 >>> CONFIG_AMQP_HOST=10.42.100.1 CONFIG_MARIADB_HOST=10.42.100.1 >>> CONFIG_NOVA_COMPUTE_PRIVIF=eth1 CONFIG_NOVA_NETWORK_PUBIF=eth1 >>> CONFIG_NOVA_NETWORK_PRIVIF=eth1 >>> CONFIG_NOVA_NETWORK_FIXEDRANGE=10.42.200.0/24 >>> >> >>> CONFIG_NOVA_NETWORK_FLOATRANGE=10.42.100.0/24 >>> >> >>> CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex >>> CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan >>> CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan >>> CONFIG_NEUTRON_ML2_VNI_RANGES=10:100 >>> CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS= >>> CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS= >>> CONFIG_NEUTRON_OVS_BRIDGE_IFACES= >>> CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1 >>> >>> After the installation, I configure the network like this: >>> >>> neutron router-create router neutron net-create private >>> neutron subnet-create private 10.42.200.0/24 >>> >> --name 
private-subnet >>> neutron router-interface-add router private-subnet neutron >>> net-create public --router:external=True neutron subnet-create >>> public 10.42.100.0/24 >> --name public-subnet --enable_dhcp=False >> --allocation-pool start=10.42.100.100,end=10.42.100.200 >> --no-gateway >>> neutron router-gateway-set router public >>> >>> I'm able to launch instances but I can't get access (ping/ssh) >>> to >> them. >>> >>> I don't know if I'm doing something wrong starting from >>> planning. >>> >>> Please, help me! >>> >>> _______________________________________________ Rdo-list >>> mailing list Rdo-list at redhat.com > > >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > > > > > >> -- Pasquale Salza > >> e-mail: pasquale.salza at gmail.com >> > > >> phone: +39 393 4415978 fax: +39 089 > 8422939 skype: pasquale.salza >> linkedin: http://it.linkedin.com/in/psalza/ > > >> _______________________________________________ Rdo-list mailing >> list Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > Those look like the iptables rule on the hypervisor. Rhys is > talking about the Neutron security group rules. By default, ssh > into VMs is not allowed. You need to permit ICMP and SSH in the > security rules on the neutron network. > > I don't see anything wrong with your network architecture at first > glance, but floating IPs can be tricky at first. Start with basic > VM-to-VM connectivity and add on from there. > > Good luck! > > > _______________________________________________ Rdo-list mailing > list Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > That sounds like it should work, but one of those 6 IP addresses will need to be used for the Neutron router (that IP will be used for SNAT for VMs that have no floating IP). 
I'm not sure what you mean when you say "I'd like to reserve 6 IPs for 6 VMs I could instanciate on OpenStack." You can instantiate more than one VM on each compute node, and if you have 6 compute nodes then, depending on size, you could have dozens of VMs. Maybe you just mean you could instantiate 6 VMs with public IPs? Actually, due to the router IP, you would be limited to 5.

Make sure you add the floating IP network as an external net. Since your router will not be taking the .1 address, you will need to create the port by hand with the chosen IP and add it to the router.

$ neutron net-create externalnet -- --router:external=True
$ neutron subnet-create externalnet 172.16.58.0/24 --name external \
  --enable_dhcp=False --allocation-pool start=172.16.58.x,end=172.16.58.x \
  --gateway 172.16.58.x
(use your network gateway here; change the IP addresses in the allocation range to match what is available on your network)
$ neutron router-create extrouter (name of your router)
$ neutron port-create externalnet --fixed-ip ip_address=172.16.58.x (use desired router IP)
$ neutron router-interface-add extrouter port=$portid (port id from previous command)
$ neutron router-interface-add extrouter subnet=public (replace public with the name of the 192.168.0.0/24 network)

Once that is done, you should be able to assign a floating IP to any VM that has an interface on the 192.168.0.0/24 network.

P.S. - Several times in your email you mentioned 192.168.0.0/16, which doesn't match the /24 networks you describe elsewhere. I assume you mean 192.168.0.0/24. 
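The address arithmetic above (6 spare addresses reserved for VMs, one of which is consumed by the router's port on the external network, leaving 5 for floating IPs) can be sketched with Python's ipaddress module. The concrete .101-.106 values below are hypothetical stand-ins for the 172.16.58.x placeholders in the message:

```python
import ipaddress

# Hypothetical stand-ins for the elided 172.16.58.x values above --
# in practice, pick addresses that are actually free on the LAN.
external = ipaddress.ip_network("172.16.58.0/24")
start = ipaddress.ip_address("172.16.58.101")
end = ipaddress.ip_address("172.16.58.106")  # the 6 spare addresses

# The range that would be handed to `--allocation-pool start=...,end=...`.
pool = [ipaddress.ip_address(i) for i in range(int(start), int(end) + 1)]
assert all(ip in external for ip in pool)

# One address goes to the router port created by hand; the rest stay
# available as floating IPs for VMs.
router_ip, floating = pool[0], pool[1:]
print(len(pool), len(floating))  # 6 5
```

With 6 spare addresses, the router's port leaves only 5 floating IPs, which is where the "limited to 5" count above comes from.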
- -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | @dxs on twitter -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJU59LXAAoJEFkV3ypsGNbjU+AIALtTHElzciDOEn4jzpOppgwO cQWXIWx3ycfvx9mx77XQR99Xp0l+S1L6ZKRrwvQX3KFDFLNINUt19BW9yGHMaA5m g8TeH06vPXrmWIeLH+UwluMhAe8p5aM51UcJyYtkkbpvUroj+xoDsxU5ukbOS6Kr YXUT44Rg1Js7/mSsgo6sIutmMHFpuExQI2ERbFmG1qLIpOSXwFaIsyLGJW+U7T6f 0zSdUGxim6Tw2pBx44C3HAAP70fzP+3xxm14XK3Av/bZELSsVMB31hkvj9oYCe4s uAS3jro9+DUygZ2Yi26znJ+xHVOYzEyZ/RM61FY+OOt4I7wAOtkY++z1WqUVzEA= =2NHc -----END PGP SIGNATURE----- From vedsarkushwaha at gmail.com Sat Feb 21 15:25:48 2015 From: vedsarkushwaha at gmail.com (Vedsar Kushwaha) Date: Sat, 21 Feb 2015 20:55:48 +0530 Subject: [Rdo-list] Unable to SSH to instance Message-ID: I have been trying to connect to an OpenStack instance for the last couple of weeks, but I am still not successful. I tried the following links:

https://openstack.redhat.com/Neutron_with_existing_external_network
https://ask.openstack.org/en/question/52698/connecting-to-existing-network-with-rdo-juno-on-centos-7/

Here is my configuration:

ifconfig:

br-ex: flags=4163 mtu 1500
    inet 10.16.37.221 netmask 255.255.255.0 broadcast 10.16.37.255
    inet6 fe80::58dc:cdff:fe3c:624a prefixlen 64 scopeid 0x20
    ether b0:83:fe:75:95:9c txqueuelen 0 (Ethernet)
    RX packets 11160 bytes 18527350 (17.6 MiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 9751 bytes 1061798 (1.0 MiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

br-int: flags=4163 mtu 1500
    inet6 fe80::c071:edff:fe04:de44 prefixlen 64 scopeid 0x20
    ether c2:71:ed:04:de:44 txqueuelen 0 (Ethernet)
    RX packets 42 bytes 4328 (4.2 KiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 8 bytes 648 (648.0 B)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73 mtu 65536
    inet 127.0.0.1 netmask 255.0.0.0
    inet6 ::1 prefixlen 128 scopeid 0x10
    loop txqueuelen 0 (Local Loopback)
    RX packets 504005 bytes 108257311 (103.2 MiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 504005 bytes 108257311 (103.2 MiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

p2p1: flags=4163 mtu 1500
    inet6 fe80::b283:feff:fe75:959c prefixlen 64 scopeid 0x20
    ether b0:83:fe:75:95:9c txqueuelen 1000 (Ethernet)
    RX packets 356442 bytes 372192516 (354.9 MiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 157458 bytes 12175539 (11.6 MiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

nova secgroup-list-rules default:

+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
|             |           |         |           | default      |
|             |           |         |           | default      |
+-------------+-----------+---------+-----------+--------------+

sudo ovs-vsctl show:

077937f9-cf9d-40ca-af2b-f435153595d5
    Bridge br-int
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port "p2p1"
            Interface "p2p1"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.1.3"

Please help..

-- Vedsar Kushwaha M.Tech-Computational Science Indian Institute of Science -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at rhcloud.com Sat Feb 21 19:22:39 2015 From: no-reply at rhcloud.com (address not configured yet) Date: Sat, 21 Feb 2015 14:22:39 -0500 (EST) Subject: [Rdo-list] khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 47 - Fixed! 
In-Reply-To: <8537744.19.1424465564810.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> References: <8537744.19.1424465564810.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> Message-ID: <32300590.21.1424546559961.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 47 - Fixed: Check console output at https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal/47/ to view the results. From pasquale.salza at gmail.com Sat Feb 21 21:27:26 2015 From: pasquale.salza at gmail.com (Pasquale Salza) Date: Sat, 21 Feb 2015 22:27:26 +0100 Subject: [Rdo-list] I can't get access to VM instances In-Reply-To: <54E8EBC4.4000404@redhat.com> References: <5000D3CF-6FC2-4A52-B1FC-76BC8843F540@redhat.com> <54E7B305.9030102@redhat.com> <54E7D2D7.8070203@redhat.com> <54E8EBC4.4000404@redhat.com> Message-ID: I have a question. If I want to add any public network, do I need to statically assign every compute node to the same network on one of its interfaces? I mean, in order to access VMs which have a floating IP on that network. For example, having the VMs on the 172.16.58.0/24 external network and compute nodes with interfaces assigned to different networks. On 21 Feb 2015 at 21:34, "Dan Sneddon" wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 02/21/2015 12:14 AM, Pasquale Salza wrote: > > Thank you! Yes you were right, I meant to chose 6 VMs and give them > > 6 IPs. I forgot the router IP. > > > > Is there any problem in not giving direct internet access to > > machines, but using IP forwarding on controller? > > > > On 21 Feb 2015 at 01:35, "Dan Sneddon" > > wrote: > > > > On 02/20/2015 03:29 PM, Pasquale Salza wrote: > >> Whops! I figured out just few seconds after I sent the mail! Ok, > >> tomorrow I'll try with it. 
:) I'd like to share how I want to > >> organise my network in order to get some advices. > > > >> Let's say I have 7 machines and 7 spare IPs on the network > >> 172.16.58.0/24 > > which are also associated to > >> 7 public (internet) IPs. > > > >> I'd like to reserve 6 IPs for 6 VMs I could instanciate on > >> OpenStack. > > > >> So I planned to do this: the controller node has a static IP on > >> eth0 of the 7 in 172.16.58.50/24 > > network > >> so as I can access it from outside. I add an alias eth0:0 with > >> which I connect the controller to the Management network of > >> OpenStack, the 10.0.1.0/24 > > network. Also on > >> the controller, I set statically the IP for eth1 with one of > >> float IPs network 192.168.0.0/16 > > network. With > >> iptables, I add the rule of forwarding everithing on eth0 and > >> eth1, so the other nodes can get Internet access on network > >> 10.0.1.0/24 . > > > >> On the compute nodes I set eth0 as one of IPs on 10.0.1.0/24 > > > >> management network and eth1 as one on > >> 192.168.0.0/16 . > > > >> Om each node I put the bridge on eth1. > > > >> With RDO I put virtualisation and tunneling only on eth1. > > > >> When the installatation has finished, I create a private neutron > >> network 10.100.0.0/16 > > and two public > >> networks of floating IPs. The first is 192.168.0.0/24 > > > >> for any kind of VM. The other is the > >> 172.16.58.0/24 > > network, limited to the 6 > >> available IPs with which I can put virtual machines on Internet. > > > >> Does it make sense or I'm doing some mistakes? Do you have any > >> other idea? > > > >> Thank you very much indeed! 
> > > >> Pasquale > > > >> On 02/20/2015 02:07 PM, Pasquale Salza wrote: > >>> Hi Rhys, I suppose so, because these are my iptables rules: > > > >>> iptables -F iptables -t nat -F iptables -P INPUT ACCEPT > >>> iptables -P OUTPUT ACCEPT iptables -P FORWARD ACCEPT iptables > >>> -A INPUT -d 172.16.58.0/24 > >>> > > > >>> -m > >> state --state > >>> ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -d > >>> 172.16.58.0/24 > >> > >>> -p tcp --dport ssh -j ACCEPT iptables > >>> -A INPUT -d 172.16.58.0/24 > > > >> -p tcp --dport www > >>> -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > >>> > >>> -p tcp --dport pptp -j ACCEPT iptables > >>> -A INPUT -d 172.16.58.0/24 > > > >>> > >> -p tcp --sport > >>> domain -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > > > >> > >>> -p tcp --dport domain -j ACCEPT > >>> iptables -A INPUT -d 172.16.58.0/24 > > > >> -p udp --sport > >>> domain -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > > > >> > >>> -p udp --dport domain -j ACCEPT > >>> iptables -A INPUT -d 172.16.58.0/24 > > > >> -p gre -j ACCEPT > >>> iptables -A INPUT -d 172.16.58.0/24 > > > >> -p icmp > >>> -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > >>> > >>> -j DROP iptables -t nat -A POSTROUTING > >>> -o eth0 -j MASQUERADE service iptables save > > > >>> Firstly, do you think I planned the network organisation well? > >>> Do you have other suggestion (best practices) with 2 > >>> interfaces? > > > > > >>> 2015-02-20 18:30 GMT+01:00 Rhys Oxenham > > >> > > >>> > > >>>: > > > >>> Hi Pasquale, > > > >>> Did you modify your security group rules to allow ICMP and/or > >>> 22:tcp access? > > > >>> Many thanks Rhys > > > >>>> On 20 Feb 2015, at 17:11, Pasquale Salza > >>>> > > > > > >>> >>> > >>> >>> >>> > >> wrote: > >>>> > >>>> Hi there, I have a lot of problems with RDO/OpenStack > >>> configuration. Firstly, I need to describe my network > >>> situation. > >>>> > >>>> I have 7 machine, each of them with 2 NIC. 
I would like to > >>>> use one > >>> machine as a controller/network node and the others as compute > >>> nodes. > >>>> > >>>> I would like to use the eth0 to connect nodes to internet > >>>> (and get > >>> access by remote sessions) with the network "172.16.58.0/24 > > > >> > >>> ", in which I have just 7 available > >>> IPs, and eth1 as configuration network on the network > >>> 10.42.100.0/42 > > > >> > >>> . > >>>> > >>>> This is my current configuration, for each node (varying the > >>>> IPs > >>> on each machine): > >>>> > >>>> eth0: DEVICE=eth0 TYPE=Ethernet ONBOOT=yes BOOTPROTO=static > >>>> IPADDR=172.16.58.50 NETMASK=255.255.255.0 > >>>> GATEWAY=172.16.58.254 DNS1=172.16.58.50 DOMAIN=### > >>>> DEFROUTE="yes" > >>>> > >>>> eth1: DEVICE=eth1 TYPE=OVSPort DEVICETYPE=ovs > >>>> OVS_BRIDGE=br-ex ONBOOT=yes > >>>> > >>>> br-ex: DEVICE=br-ex DEVICETYPE=ovs TYPE=OVSBridge > >>>> BOOTPROTO=static IPADDR=10.42.100.1 NETMASK=255.255.255.0 > >>>> ONBOOT=yes > >>>> > >>>> I'd like to have instances on 10.42.200.0/24 > >>>> > >>> virtual private network and the > >>> remaining IPs of 10.42.100.0/24 > > > >>> > >> network as floating > >>> IPs. 
> >>>> > >>>> These are the relevant parts of my answers.txt file: > >>>> > >>>> CONFIG_CONTROLLER_HOST=10.42.100.1 > >>>> > > > > > > > CONFIG_COMPUTE_HOSTS=10.42.100.10,10.42.100.11,10.42.100.12,10.42.100.13,10.42.100.14,10.42.100.15 > > > > > > > >>> CONFIG_NETWORK_HOSTS=10.42.100.1 > >>>> CONFIG_AMQP_HOST=10.42.100.1 CONFIG_MARIADB_HOST=10.42.100.1 > >>>> CONFIG_NOVA_COMPUTE_PRIVIF=eth1 > >>>> CONFIG_NOVA_NETWORK_PUBIF=eth1 > >>>> CONFIG_NOVA_NETWORK_PRIVIF=eth1 > >>>> CONFIG_NOVA_NETWORK_FIXEDRANGE=10.42.200.0/24 > > > >>>> > >>> > >>>> CONFIG_NOVA_NETWORK_FLOATRANGE=10.42.100.0/24 > > > >>>> > >>> > >>>> CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex > >>>> CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan > >>>> CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan > >>>> CONFIG_NEUTRON_ML2_VNI_RANGES=10:100 > >>>> CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS= > >>>> CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS= > >>>> CONFIG_NEUTRON_OVS_BRIDGE_IFACES= > >>>> CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1 > >>>> > >>>> After the installation, I configure the network like this: > >>>> > >>>> neutron router-create router neutron net-create private > >>>> neutron subnet-create private 10.42.200.0/24 > >>>> > >>> --name private-subnet > >>>> neutron router-interface-add router private-subnet neutron > >>>> net-create public --router:external=True neutron > >>>> subnet-create public 10.42.100.0/24 > > > >>> --name public-subnet > >>> --enable_dhcp=False --allocation-pool > >>> start=10.42.100.100,end=10.42.100.200 --no-gateway > >>>> neutron router-gateway-set router public > >>>> > >>>> I'm able to launch instances but I can't get access > >>>> (ping/ssh) to > >>> them. > >>>> > >>>> I don't know if I'm doing something wrong starting from > >>>> planning. > >>>> > >>>> Please, help me! 
> >>>> > >>>> _______________________________________________ Rdo-list > >>>> mailing list Rdo-list at redhat.com > >>>> > > > > >> > > >> > >>>> https://www.redhat.com/mailman/listinfo/rdo-list > >>>> > >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > >> > > > >>> > > >> > >> > > > > > > > > > >>> -- Pasquale Salza > > > >>> e-mail: pasquale.salza at gmail.com > >>> > >>> >>> > > >> >> > >> >> >> > >>> phone: +39 393 4415978 > > fax: +39 089 > >> 8422939 skype: pasquale.salza > >>> linkedin: http://it.linkedin.com/in/psalza/ > > > > > >>> _______________________________________________ Rdo-list > >>> mailing list Rdo-list at redhat.com > > > > >>> https://www.redhat.com/mailman/listinfo/rdo-list > > > >>> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > >> > > > > > > > >> Those look like the iptables rule on the hypervisor. Rhys is > >> talking about the Neutron security group rules. By default, ssh > >> into VMs is not allowed. You need to permit ICMP and SSH in the > >> security rules on the neutron network. > > > >> I don't see anything wrong with your network architecture at > >> first glance, but floating IPs can be tricky at first. Start with > >> basic VM-to-VM connectivity and add on from there. > > > >> Good luck! > > > > > >> _______________________________________________ Rdo-list mailing > >> list Rdo-list at redhat.com > > > > >> https://www.redhat.com/mailman/listinfo/rdo-list > > > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > > >> > > > > > > That sounds like it should work, but one of those 6 IP addresses > > will need to be used for the Neutron router (that IP will be used > > for SNAT for VMs that have no floating IP). > > > > I'm not sure what you mean when you say "I'd like to reserve 6 IPs > > for 6 VMs I could instanciate on OpenStack." You can instantiate > > more than one VM on each compute node, and if you have 6 compute > > nodes then depending on size you could have dozens of VMs. 
Maybe > > you just mean you could instantiate 6 VMs with public IPs? > > Actually, due to the router IP, you would be limited to 5. > > > > Make sure you add the floating IP network as an external net. > > Since your router will not be taking the .1 address, you will need > > to create the port by hand with the chosen IP and add it to the > > router. > > > > $ neutron net-create externalnet -- --router:external=True $ > > neutron subnet-create externalnet 172.16.58.0/24 > > --name external \ --enable_dhcp=False > > --allocation_pool start=172.16.58.x,\ end=172.16.58.x --gateway > > 172.16.58.x (use your network gateway here - change the IP > > addresses in the allocation range to match what is available on > > your network) $ neutron router-create extrouter (name of your > > router) $ neutron port-create externalnet --fixed-ip 172.16.58.x > > (use desired router IP) $ neutron router-interface-add extrouter > > port=$portid (port id from previous command) $ neutron > > router-interface-add extrouter subnet=public (replace public with > > the name of the 192.168.0.0/24 network) > > > > Once that is done, you should be able to assign a floating IP to > > any VM that has an interface on the 192.168.0.0/24 > > network. > > > > P.S. - Several times in your email you mentioned 192.168.0.0/16 > > , but that's not a valid network. I assume > > you mean 192.168.0.0/24 . > > > > > > That depends what you are trying to do. There are plenty of reasons > why it might not work at first. You may need to troubleshoot. > > One issue that might come up is that you will be doing multiple levels > of NAT. Some protocols won't work with multiple layers of translation. > > If your goal is to eventually make these VMs reachable from the > Internet, there are a lot of factors in play above the OpenStack cloud. 
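Dan's arithmetic here — six spare public addresses, minus one consumed by the router's gateway port for SNAT, leaving five usable floating IPs — can be sketched with Python's stdlib `ipaddress` module. The concrete addresses below are illustrative only, not taken from any real deployment:

```python
import ipaddress

# Hypothetical pool mirroring the thread's situation: six spare
# addresses on 172.16.58.0/24, one of which the Neutron router's
# external gateway port consumes for SNAT.
pool_start = ipaddress.ip_address("172.16.58.10")
pool_end = ipaddress.ip_address("172.16.58.15")

pool = [ipaddress.ip_address(i)
        for i in range(int(pool_start), int(pool_end) + 1)]
router_ip = pool[0]  # the port created by hand and attached to the router
floating_ips = [ip for ip in pool if ip != router_ip]

print(len(pool))          # 6 addresses in the pool
print(len(floating_ips))  # 5 left over for VM floating IPs
```

The same count holds for any allocation pool: the router's external port always takes one address out of the floating range.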
> > - -- > Dan Sneddon | Principal OpenStack Engineer > dsneddon at redhat.com | redhat.com/openstack > 650.254.4025 | @dxs on twitter > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1 > > iQEcBAEBAgAGBQJU6OvEAAoJEFkV3ypsGNbjPyAH/1IAaeow2xMa5jn3Qm5x1OvZ > o1trjIuR3VoYCwGYhM8s6lv1spAq44xFEG/bBjX6FDQlTgbpUFWeJupS6DeTyx9J > k3k7MCtnM0hcEsoOfYoq3J/rRXhPk/fvYKHpknbA89xsby91qq9aLoEUdAABFzEJ > 5Z3sa2mvf3D68VP9XBicRdi+ZWmsO+LF25kdpNxmZncanShj+EFkyJbkUgZOCfkR > YiXswP4khAL91afY2VXkzVYG9DgRqmZGMq7SFXOVPsKZ4VnBwbZwduVQJFrVBGzg > FSTIKE+kMucPB3VRetezY0tqI+g/PMkZk+/4pDM8EGM4RfjHGCZhKSrlZ5h/1H4= > =BElH > -----END PGP SIGNATURE----- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsneddon at redhat.com Sun Feb 22 08:03:39 2015 From: dsneddon at redhat.com (Dan Sneddon) Date: Sun, 22 Feb 2015 00:03:39 -0800 Subject: [Rdo-list] I can't get access to VM instances In-Reply-To: References: <5000D3CF-6FC2-4A52-B1FC-76BC8843F540@redhat.com> <54E7B305.9030102@redhat.com> <54E7D2D7.8070203@redhat.com> <54E8EBC4.4000404@redhat.com> Message-ID: <54E98D5B.9080108@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 02/21/2015 01:27 PM, Pasquale Salza wrote: > I have a question. If I want to add any public network, do I need > to statically assign every compute node to the same network on one > of the interfaces? I mean, in order to access to VMs which have the > floating IP on that network. > > For example, having the VMs on 172.16.58.0/24 > external network and compute nodes with > interfaces assigned with different networks. > > Il 21/feb/2015 21:34 "Dan Sneddon" > ha scritto: > > On 02/21/2015 12:14 AM, Pasquale Salza wrote: >> Thank you! Yes you were right, I meant to chose 6 VMs and give >> them 6 IPs. I forgot the router IP. > >> Is there any problem in not giving direct internet access to >> machines, but using IP forwarding on controller? 
> >> Il 21/feb/2015 01:35 "Dan Sneddon" >> >> ha >> scritto: > >> On 02/20/2015 03:29 PM, Pasquale Salza wrote: >>> Whops! I figured out just few seconds after I sent the mail! >>> Ok, tomorrow I'll try with it. :) I'd like to share how I want >>> to organise my network in order to get some advices. > >>> Let's say I have 7 machines and 7 spare IPs on the network >>> 172.16.58.0/24 > >> which are also associated to >>> 7 public (internet) IPs. > >>> I'd like to reserve 6 IPs for 6 VMs I could instanciate on >>> OpenStack. > >>> So I planned to do this: the controller node has a static IP >>> on eth0 of the 7 in 172.16.58.50/24 > >> network >>> so as I can access it from outside. I add an alias eth0:0 with >>> which I connect the controller to the Management network of >>> OpenStack, the 10.0.1.0/24 >>> >> network. Also on >>> the controller, I set statically the IP for eth1 with one of >>> float IPs network 192.168.0.0/16 > >> network. With >>> iptables, I add the rule of forwarding everithing on eth0 and >>> eth1, so the other nodes can get Internet access on network >>> 10.0.1.0/24 > . > >>> On the compute nodes I set eth0 as one of IPs on 10.0.1.0/24 > >> >>> management network and eth1 as one on >>> 192.168.0.0/16 > . > >>> Om each node I put the bridge on eth1. > >>> With RDO I put virtualisation and tunneling only on eth1. > >>> When the installatation has finished, I create a private >>> neutron network 10.100.0.0/16 >>> >> and two public >>> networks of floating IPs. The first is 192.168.0.0/24 > >> >>> for any kind of VM. The other is the >>> 172.16.58.0/24 > >> network, limited to the 6 >>> available IPs with which I can put virtual machines on >>> Internet. > >>> Does it make sense or I'm doing some mistakes? Do you have any >>> other idea? > >>> Thank you very much indeed! 
> >>> Pasquale > >>> On 02/20/2015 02:07 PM, Pasquale Salza wrote: >>>> Hi Rhys, I suppose so, because these are my iptables rules: > >>>> iptables -F iptables -t nat -F iptables -P INPUT ACCEPT >>>> iptables -P OUTPUT ACCEPT iptables -P FORWARD ACCEPT >>>> iptables -A INPUT -d 172.16.58.0/24 > >>>> >> >>>> -m >>> state --state >>>> ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -d >>>> 172.16.58.0/24 >>>> >>> >>>> -p tcp --dport ssh -j ACCEPT >>>> iptables -A INPUT -d 172.16.58.0/24 > >> >>> -p tcp --dport www >>>> -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > >>>> >>>> -p tcp --dport pptp -j ACCEPT >>>> iptables -A INPUT -d 172.16.58.0/24 >> >>>> >>> -p tcp --sport >>>> domain -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > >> >>> >>>> -p tcp --dport domain -j ACCEPT >>>> iptables -A INPUT -d 172.16.58.0/24 > >> >>> -p udp --sport >>>> domain -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > >> >>> >>>> -p udp --dport domain -j ACCEPT >>>> iptables -A INPUT -d 172.16.58.0/24 > >> >>> -p gre -j ACCEPT >>>> iptables -A INPUT -d 172.16.58.0/24 > >> >>> -p icmp >>>> -j ACCEPT iptables -A INPUT -d 172.16.58.0/24 > >>>> >>>> -j DROP iptables -t nat -A >>>> POSTROUTING -o eth0 -j MASQUERADE service iptables save > >>>> Firstly, do you think I planned the network organisation >>>> well? Do you have other suggestion (best practices) with 2 >>>> interfaces? > > >>>> 2015-02-20 18:30 GMT+01:00 Rhys Oxenham >> > >>> > >> >>>> > > >> > >>>>: > >>>> Hi Pasquale, > >>>> Did you modify your security group rules to allow ICMP >>>> and/or 22:tcp access? > >>>> Many thanks Rhys > >>>>> On 20 Feb 2015, at 17:11, Pasquale Salza >>>>> >>>> > > >> > >> > >> >>>> >>> >>>> >>> > >>>> >>> >>>> >>>> >>> wrote: >>>>> >>>>> Hi there, I have a lot of problems with RDO/OpenStack >>>> configuration. Firstly, I need to describe my network >>>> situation. >>>>> >>>>> I have 7 machine, each of them with 2 NIC. 
I would like to >>>>> use one >>>> machine as a controller/network node and the others as >>>> compute nodes. >>>>> >>>>> I would like to use the eth0 to connect nodes to internet >>>>> (and get >>>> access by remote sessions) with the network "172.16.58.0/24 > >> >>> >>>> ", in which I have just 7 available >>>> IPs, and eth1 as configuration network on the network >>>> 10.42.100.0/42 >> >>> >>>> . >>>>> >>>>> This is my current configuration, for each node (varying >>>>> the IPs >>>> on each machine): >>>>> >>>>> eth0: DEVICE=eth0 TYPE=Ethernet ONBOOT=yes >>>>> BOOTPROTO=static IPADDR=172.16.58.50 NETMASK=255.255.255.0 >>>>> GATEWAY=172.16.58.254 DNS1=172.16.58.50 DOMAIN=### >>>>> DEFROUTE="yes" >>>>> >>>>> eth1: DEVICE=eth1 TYPE=OVSPort DEVICETYPE=ovs >>>>> OVS_BRIDGE=br-ex ONBOOT=yes >>>>> >>>>> br-ex: DEVICE=br-ex DEVICETYPE=ovs TYPE=OVSBridge >>>>> BOOTPROTO=static IPADDR=10.42.100.1 NETMASK=255.255.255.0 >>>>> ONBOOT=yes >>>>> >>>>> I'd like to have instances on 10.42.200.0/24 > >>>>> >>>> virtual private network and the >>>> remaining IPs of 10.42.100.0/24 > >> >>>> >>> network as floating >>>> IPs. 
>>>>> >>>>> These are the relevant parts of my answers.txt file: >>>>> >>>>> CONFIG_CONTROLLER_HOST=10.42.100.1 >>>>> > > > > CONFIG_COMPUTE_HOSTS=10.42.100.10,10.42.100.11,10.42.100.12,10.42.100.13,10.42.100.14,10.42.100.15 > > > > >>>> CONFIG_NETWORK_HOSTS=10.42.100.1 >>>>> CONFIG_AMQP_HOST=10.42.100.1 >>>>> CONFIG_MARIADB_HOST=10.42.100.1 >>>>> CONFIG_NOVA_COMPUTE_PRIVIF=eth1 >>>>> CONFIG_NOVA_NETWORK_PUBIF=eth1 >>>>> CONFIG_NOVA_NETWORK_PRIVIF=eth1 >>>>> CONFIG_NOVA_NETWORK_FIXEDRANGE=10.42.200.0/24 > >> >>>>> >>>> >>>>> CONFIG_NOVA_NETWORK_FLOATRANGE=10.42.100.0/24 > >> >>>>> >>>> >>>>> CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex >>>>> CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan >>>>> CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan >>>>> CONFIG_NEUTRON_ML2_VNI_RANGES=10:100 >>>>> CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS= >>>>> CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS= >>>>> CONFIG_NEUTRON_OVS_BRIDGE_IFACES= >>>>> CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1 >>>>> >>>>> After the installation, I configure the network like this: >>>>> >>>>> neutron router-create router neutron net-create private >>>>> neutron subnet-create private 10.42.200.0/24 > >>>>> >>>> --name private-subnet >>>>> neutron router-interface-add router private-subnet neutron >>>>> net-create public --router:external=True neutron >>>>> subnet-create public 10.42.100.0/24 >>>>> > >> >>>> --name public-subnet >>>> --enable_dhcp=False --allocation-pool >>>> start=10.42.100.100,end=10.42.100.200 --no-gateway >>>>> neutron router-gateway-set router public >>>>> >>>>> I'm able to launch instances but I can't get access >>>>> (ping/ssh) to >>>> them. >>>>> >>>>> I don't know if I'm doing something wrong starting from >>>>> planning. >>>>> >>>>> Please, help me! 
>>>>> >>>>> _______________________________________________ Rdo-list >>>>> mailing list Rdo-list at redhat.com >>>>> >>>> > >> > >> >>> > > >> > >>> >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>> >>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > >>> >> >> >>>> >> > >>> >> >>> > > > > >>>> -- Pasquale Salza > >>>> e-mail: pasquale.salza at gmail.com >>>> >>>> >>> > >>>> >>> >>>> >>> >> >>> >> >>> >> > >>> >> >>> >> >>> >>>> phone: +39 393 4415978 > >> fax: +39 089 >>> 8422939 skype: pasquale.salza >>>> linkedin: http://it.linkedin.com/in/psalza/ > > >>>> _______________________________________________ Rdo-list >>>> mailing list Rdo-list at redhat.com >>>> > > >> > >> >>>> https://www.redhat.com/mailman/listinfo/rdo-list > >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > >>> >> >> > > >>> Those look like the iptables rule on the hypervisor. Rhys is >>> talking about the Neutron security group rules. By default, >>> ssh into VMs is not allowed. You need to permit ICMP and SSH in >>> the security rules on the neutron network. > >>> I don't see anything wrong with your network architecture at >>> first glance, but floating IPs can be tricky at first. Start >>> with basic VM-to-VM connectivity and add on from there. > >>> Good luck! > > >>> _______________________________________________ Rdo-list >>> mailing list Rdo-list at redhat.com > > >> > >> >>> https://www.redhat.com/mailman/listinfo/rdo-list > >>> To unsubscribe: rdo-list-unsubscribe at redhat.com > >> > >>> >> >> > >> That sounds like it should work, but one of those 6 IP addresses >> will need to be used for the Neutron router (that IP will be >> used for SNAT for VMs that have no floating IP). > >> I'm not sure what you mean when you say "I'd like to reserve 6 >> IPs for 6 VMs I could instanciate on OpenStack." You can >> instantiate more than one VM on each compute node, and if you >> have 6 compute nodes then depending on size you could have dozens >> of VMs. 
Maybe you just mean you could instantiate 6 VMs with >> public IPs? Actually, due to the router IP, you would be limited >> to 5. > >> Make sure you add the floating IP network as an external net. >> Since your router will not be taking the .1 address, you will >> need to create the port by hand with the chosen IP and add it to >> the router. > >> $ neutron net-create externalnet -- --router:external=True $ >> neutron subnet-create externalnet 172.16.58.0/24 > >> --name external \ --enable_dhcp=False >> --allocation_pool start=172.16.58.x,\ end=172.16.58.x --gateway >> 172.16.58.x (use your network gateway here - change the IP >> addresses in the allocation range to match what is available on >> your network) $ neutron router-create extrouter (name of your >> router) $ neutron port-create externalnet --fixed-ip 172.16.58.x >> (use desired router IP) $ neutron router-interface-add extrouter >> port=$portid (port id from previous command) $ neutron >> router-interface-add extrouter subnet=public (replace public >> with the name of the 192.168.0.0/24 > network) > >> Once that is done, you should be able to assign a floating IP to >> any VM that has an interface on the 192.168.0.0/24 > >> network. > >> P.S. - Several times in your email you mentioned 192.168.0.0/16 > >> , but that's not a valid network. I >> assume you mean 192.168.0.0/24 > . > > > > That depends what you are trying to do. There are plenty of > reasons why it might not work at first. You may need to > troubleshoot. > > One issue that might come up is that you will be doing multiple > levels of NAT. Some protocols won't work with multiple layers of > translation. > > If your goal is to eventually make these VMs reachable from the > Internet, there are a lot of factors in play above the OpenStack > cloud. > > No, the external network is only attached to the Neutron controller. The public IP actually lives on the l3agent, which runs the router you created and attached to that network. 
When traffic goes back and forth from outside, the l3agent does source NAT and swaps the public IP with the VM IP. The controller isn't actually attached to the external network. In general, the only IPs in use on the External network are the IP you assign to the router attached to the External network, the upstream gateway router, and the floating IPs handled by Neutron. If a VM doesn't have a floating IP, the Neutron router will use its own IP address for the NAT. That Internet access is outbound-only. - -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | @dxs on twitter -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJU6Y1bAAoJEFkV3ypsGNbjTIMH/iajE5q30wfCKcghkWaTu0AW VckXJyPSdtucrewUb+oUriGFx3OPMZU1hnGxCYqDTjsj/iTx3JsSFzCozmKzdXAY hWEO/nNmD4lWljWghjTac13t+6rhM5lJVA3posQoZEPWwyrdh6bmcHwCM93HYZ3H QYaXv7RKasSool6Kq9MxOyRq2+O0DvmVWk8BOKHzy2ZnP1OrRjhotSRIRIh1O3Ti 3PEYZJ+QZOzxAMfWDWcRjNONuGscaIVvPxrU5/i6jH5FK1ymJarIRJmVPO1a58BW cYEcsuz/L6wYhaYthRCY14EkLQ7bsSTT4JMse68s0/u3WgQPyjZOR2NBk6QAAu8= =0N5i -----END PGP SIGNATURE----- From no-reply at rhcloud.com Sun Feb 22 19:11:06 2015 From: no-reply at rhcloud.com (address not configured yet) Date: Sun, 22 Feb 2015 14:11:06 -0500 (EST) Subject: [Rdo-list] khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 48 - Failure! Message-ID: <2663997.23.1424632266684.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 48 - Failure: Check console output at https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal/48/ to view the results. 
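The NAT behaviour Dan describes above — floating IPs translated 1:1 by the l3-agent, every other VM masqueraded behind the router's own external address for outbound-only access — can be modelled in a few lines. This is a toy model with made-up addresses, not Neutron code:

```python
# Toy model of the l3-agent's source NAT described in the thread:
# VMs with a floating IP get 1:1 translation; all others share the
# router's external IP (outbound only). Addresses are illustrative.
ROUTER_EXTERNAL_IP = "172.16.58.10"               # assumed router gateway port
floating_map = {"10.42.200.5": "172.16.58.11"}    # fixed IP -> floating IP

def outbound_source(vm_fixed_ip: str) -> str:
    """External source address a packet from this VM appears to use."""
    return floating_map.get(vm_fixed_ip, ROUTER_EXTERNAL_IP)

print(outbound_source("10.42.200.5"))  # 172.16.58.11 (its floating IP)
print(outbound_source("10.42.200.9"))  # 172.16.58.10 (router SNAT only)
```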
From rbowen at redhat.com Sun Feb 22 19:40:35 2015 From: rbowen at redhat.com (Rich Bowen) Date: Sun, 22 Feb 2015 11:40:35 -0800 Subject: [Rdo-list] RDO bookmarks - Feedback requested In-Reply-To: <54E5A01E.5090706@berendt.io> References: <54E36A33.2020907@redhat.com> <54E5A01E.5090706@berendt.io> Message-ID: <54EA30B3.40608@redhat.com> On 02/19/2015 12:34 AM, Christian Berendt wrote: > Regarding trystack.org: Is it still not possible to use trystack.org > without a Facebook account? It does indeed use Facebook as the auth mechanism. I'm not aware that there's any plan to change that. Dan Radez would know if there were. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From abeekhof at redhat.com Sun Feb 22 20:08:32 2015 From: abeekhof at redhat.com (Andrew Beekhof) Date: Mon, 23 Feb 2015 07:08:32 +1100 Subject: [Rdo-list] High Availability configuration In-Reply-To: <54D8EA87.2040608@redhat.com> References: <54D8EA87.2040608@redhat.com> Message-ID: > On 10 Feb 2015, at 4:12 am, Perry Myers wrote: > > On 02/09/2015 12:44 PM, Alon Dotan wrote: >> Dear All, >> >> Someone managed to configure High Availability? 
>> >> My setup contains 2 CentOS 7 controllers and about 15 compute nodes, > > I think an HA config will require a minimum of 3 controller nodes > (primarily because RabbitMQ and Galera operate in odd numbered clusters) > > abeekhof is working on getting better docs on the HA stuff on our wiki > but you can ask questions here though There are still a lot of blanks to fill in, but the WIP document is currently available at: https://github.com/beekhof/osp-ha-deploy/blob/master/ha-openstack.md > >> I want to configure High Availability between the controllers only >> >> Thanks, >> >> >> >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > From abeekhof at redhat.com Mon Feb 23 01:39:12 2015 From: abeekhof at redhat.com (Andrew Beekhof) Date: Mon, 23 Feb 2015 12:39:12 +1100 Subject: [Rdo-list] Rdo-list Digest, Vol 23, Issue 4 In-Reply-To: <20150205164850.GB26774@redhat.com> References: <20150205141106.GA26774@redhat.com> <90BDAED4-9569-4583-879E-B6FF31008FB7@redhat.com> <20150205164850.GB26774@redhat.com> Message-ID: <866E436F-1A73-4770-BC46-69762229EE0B@redhat.com> > On 6 Feb 2015, at 3:48 am, Lars Kellogg-Stedman wrote: > > On Thu, Feb 05, 2015 at 10:42:18AM -0500, Andrew Beekhof wrote: >> Perhaps it should though. > > Possibly! But that is a different question, and not one I can answer > :). I do like having a convenient CLI-based installation tool; it makes > testing much faster than having to roll out a web-based deployment > tool. Exactly. The easier it is to install, the more it will get tested. If anyone is interested in working on this, I have some thoughts on how to achieve it but so far have lacked the free cycles to make any progress. 
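Perry's remark that RabbitMQ and Galera operate in odd-numbered clusters comes down to quorum arithmetic: the cluster keeps serving only while a strict majority of nodes is alive, so two controllers tolerate zero failures while three tolerate one. A minimal sketch (not RDO or Pacemaker code):

```python
def quorum(n: int) -> int:
    """Smallest node count that still forms a strict majority of n."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """How many nodes can fail before the cluster loses quorum."""
    return n - quorum(n)

# 2 nodes -> 0 failures, 3 -> 1, 4 -> 1, 5 -> 2: an even-sized cluster
# buys no extra fault tolerance over the next-smaller odd size, which
# is why HA guides start at three controllers.
for n in (2, 3, 4, 5):
    print(n, tolerated_failures(n))
```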
From pasquale.salza at gmail.com Mon Feb 23 08:48:45 2015 From: pasquale.salza at gmail.com (Pasquale Salza) Date: Mon, 23 Feb 2015 09:48:45 +0100 Subject: [Rdo-list] I can't get access to VM instances In-Reply-To: <54E98D5B.9080108@redhat.com> References: <5000D3CF-6FC2-4A52-B1FC-76BC8843F540@redhat.com> <54E7B305.9030102@redhat.com> <54E7D2D7.8070203@redhat.com> <54E8EBC4.4000404@redhat.com> <54E98D5B.9080108@redhat.com> Message-ID: 

Good morning guys, I tried as you said but I have serious problems connecting to instances. I tried to do this:

- give each compute node a fixed IP on network 10.42.1.0/24 on port eth0;
- give each compute node a fixed IP on network 10.42.2.0/24 on port eth1 (through the br-ex)

I put everything on eth1 with vxlan; this is my configuration:

CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_PUBIF=eth1
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=10.0.2.0/24
CONFIG_NOVA_NETWORK_FLOATRANGE=10.42.42.0/24
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_VNI_RANGES=10:100
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1

So I launched this network configuration:

neutron net-create private
neutron subnet-create private 10.42.2.0/24 --name private-subnet
neutron net-create public --router:external=True
neutron subnet-create public 10.42.42.0/24 --name public-subnet --enable_dhcp=False --allocation-pool=start=10.42.42.100,end=10.42.42.200 --gateway=10.42.42.1
neutron router-create public-router
neutron router-gateway-set public-router public
neutron router-interface-add public-router private-subnet
neutron security-group-rule-create --protocol icmp default
neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 default

The dashboard says that the gateway of the router is on 10.42.42.100 and the port is down. 
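For what it's worth, the `subnet-create public` arguments above can be sanity-checked offline: the gateway should fall inside the subnet but outside the floating-IP allocation pool. A sketch using Python's stdlib `ipaddress`, with the values copied from the mail:

```python
import ipaddress

# Values from the subnet-create command in the mail above.
subnet = ipaddress.ip_network("10.42.42.0/24")
gateway = ipaddress.ip_address("10.42.42.1")
pool_start = ipaddress.ip_address("10.42.42.100")
pool_end = ipaddress.ip_address("10.42.42.200")

gateway_in_subnet = gateway in subnet
gateway_in_pool = int(pool_start) <= int(gateway) <= int(pool_end)

# True False -> the gateway and allocation pool are at least consistent,
# so a DOWN gateway port points at something else (e.g. router wiring).
print(gateway_in_subnet, gateway_in_pool)
```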
Please help me! :( 2015-02-22 9:03 GMT+01:00 Dan Sneddon : > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 02/21/2015 01:27 PM, Pasquale Salza wrote: > > I have a question. If I want to add any public network, do I need > > to statically assign every compute node to the same network on one > > of the interfaces? I mean, in order to access to VMs which have the > > floating IP on that network. > > > > For example, having the VMs on 172.16.58.0/24 > > external network and compute nodes with > > interfaces assigned with different networks. > > > > Il 21/feb/2015 21:34 "Dan Sneddon" > > ha scritto: > > > > On 02/21/2015 12:14 AM, Pasquale Salza wrote: > >> Thank you! Yes you were right, I meant to chose 6 VMs and give > >> them 6 IPs. I forgot the router IP. > > > >> Is there any problem in not giving direct internet access to > >> machines, but using IP forwarding on controller? > > > >> Il 21/feb/2015 01:35 "Dan Sneddon" > > >> >> ha > >> scritto: > > > >> On 02/20/2015 03:29 PM, Pasquale Salza wrote: > >>> Whops! I figured out just few seconds after I sent the mail! > >>> Ok, tomorrow I'll try with it. :) I'd like to share how I want > >>> to organise my network in order to get some advices. > > > >>> Let's say I have 7 machines and 7 spare IPs on the network > >>> 172.16.58.0/24 > > > >> which are also associated to > >>> 7 public (internet) IPs. > > > >>> I'd like to reserve 6 IPs for 6 VMs I could instanciate on > >>> OpenStack. > > > >>> So I planned to do this: the controller node has a static IP > >>> on eth0 of the 7 in 172.16.58.50/24 > > > >> network > >>> so as I can access it from outside. I add an alias eth0:0 with > >>> which I connect the controller to the Management network of > >>> OpenStack, the 10.0.1.0/24 > >>> > >> network. Also on > >>> the controller, I set statically the IP for eth1 with one of > >>> float IPs network 192.168.0.0/16 > > > >> network. 
>>> With iptables, I add the rule of forwarding everything on eth0 and
>>> eth1, so the other nodes can get Internet access on network
>>> 10.0.1.0/24. On the compute nodes I set eth0 as one of the IPs on the
>>> 10.0.1.0/24 management network and eth1 as one on 192.168.0.0/16.
>>> On each node I put the bridge on eth1. With RDO I put virtualisation
>>> and tunneling only on eth1.
>>>
>>> When the installation has finished, I create a private neutron
>>> network 10.100.0.0/16 and two public networks of floating IPs. The
>>> first is 192.168.0.0/24, for any kind of VM. The other is the
>>> 172.16.58.0/24 network, limited to the 6 available IPs with which I
>>> can put virtual machines on the Internet.
>>>
>>> Does it make sense, or am I making some mistakes? Do you have any
>>> other idea?
>>>
>>> Thank you very much indeed!
>>>
>>> Pasquale
>>>
>>> On 02/20/2015 02:07 PM, Pasquale Salza wrote:
>>>> Hi Rhys, I suppose so, because these are my iptables rules:
>>>>
>>>> iptables -F
>>>> iptables -t nat -F
>>>> iptables -P INPUT ACCEPT
>>>> iptables -P OUTPUT ACCEPT
>>>> iptables -P FORWARD ACCEPT
>>>> iptables -A INPUT -d 172.16.58.0/24 -m state --state ESTABLISHED,RELATED -j ACCEPT
>>>> iptables -A INPUT -d 172.16.58.0/24 -p tcp --dport ssh -j ACCEPT
>>>> iptables -A INPUT -d 172.16.58.0/24 -p tcp --dport www -j ACCEPT
>>>> iptables -A INPUT -d 172.16.58.0/24 -p tcp --dport pptp -j ACCEPT
>>>> iptables -A INPUT -d 172.16.58.0/24 -p tcp --sport domain -j ACCEPT
>>>> iptables -A INPUT -d 172.16.58.0/24 -p tcp --dport domain -j ACCEPT
>>>> iptables -A INPUT -d 172.16.58.0/24 -p udp --sport domain -j ACCEPT
>>>> iptables -A INPUT -d 172.16.58.0/24 -p udp --dport domain -j ACCEPT
>>>> iptables -A INPUT -d 172.16.58.0/24 -p gre -j ACCEPT
>>>> iptables -A INPUT -d 172.16.58.0/24 -p icmp -j ACCEPT
>>>> iptables -A INPUT -d 172.16.58.0/24 -j DROP
>>>> iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
>>>> service iptables save
>>>>
>>>> Firstly, do you think I planned the network organisation well? Do
>>>> you have other suggestions (best practices) with 2 interfaces?
>>>>
>>>> 2015-02-20 18:30 GMT+01:00 Rhys Oxenham:
>>>>
>>>> Hi Pasquale,
>>>>
>>>> Did you modify your security group rules to allow ICMP and/or
>>>> 22:tcp access?
>>>>
>>>> Many thanks
>>>> Rhys
>>>>
>>>>> On 20 Feb 2015, at 17:11, Pasquale Salza wrote:
>>>>>
>>>>> Hi there, I have a lot of problems with RDO/OpenStack
>>>>> configuration. Firstly, I need to describe my network situation.
>>>>>
>>>>> I have 7 machines, each of them with 2 NICs. I would like to use
>>>>> one machine as a controller/network node and the others as
>>>>> compute nodes.
>>>>>
>>>>> I would like to use eth0 to connect nodes to the Internet (and
>>>>> get access by remote sessions) with the network "172.16.58.0/24",
>>>>> in which I have just 7 available IPs, and eth1 as the
>>>>> configuration network on the network 10.42.100.0/24.
>>>>>
>>>>> This is my current configuration, for each node (varying the IPs
>>>>> on each machine):
>>>>>
>>>>> eth0:
>>>>> DEVICE=eth0
>>>>> TYPE=Ethernet
>>>>> ONBOOT=yes
>>>>> BOOTPROTO=static
>>>>> IPADDR=172.16.58.50
>>>>> NETMASK=255.255.255.0
>>>>> GATEWAY=172.16.58.254
>>>>> DNS1=172.16.58.50
>>>>> DOMAIN=###
>>>>> DEFROUTE="yes"
>>>>>
>>>>> eth1:
>>>>> DEVICE=eth1
>>>>> TYPE=OVSPort
>>>>> DEVICETYPE=ovs
>>>>> OVS_BRIDGE=br-ex
>>>>> ONBOOT=yes
>>>>>
>>>>> br-ex:
>>>>> DEVICE=br-ex
>>>>> DEVICETYPE=ovs
>>>>> TYPE=OVSBridge
>>>>> BOOTPROTO=static
>>>>> IPADDR=10.42.100.1
>>>>> NETMASK=255.255.255.0
>>>>> ONBOOT=yes
>>>>>
>>>>> I'd like to have instances on the 10.42.200.0/24 virtual private
>>>>> network and the remaining IPs of the 10.42.100.0/24 network as
>>>>> floating IPs.
>>>>>
>>>>> These are the relevant parts of my answers.txt file:
>>>>>
>>>>> CONFIG_CONTROLLER_HOST=10.42.100.1
>>>>> CONFIG_COMPUTE_HOSTS=10.42.100.10,10.42.100.11,10.42.100.12,10.42.100.13,10.42.100.14,10.42.100.15
>>>>> CONFIG_NETWORK_HOSTS=10.42.100.1
>>>>> CONFIG_AMQP_HOST=10.42.100.1
>>>>> CONFIG_MARIADB_HOST=10.42.100.1
>>>>> CONFIG_NOVA_COMPUTE_PRIVIF=eth1
>>>>> CONFIG_NOVA_NETWORK_PUBIF=eth1
>>>>> CONFIG_NOVA_NETWORK_PRIVIF=eth1
>>>>> CONFIG_NOVA_NETWORK_FIXEDRANGE=10.42.200.0/24
>>>>> CONFIG_NOVA_NETWORK_FLOATRANGE=10.42.100.0/24
>>>>> CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
>>>>> CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
>>>>> CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
>>>>> CONFIG_NEUTRON_ML2_VNI_RANGES=10:100
>>>>> CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
>>>>> CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=
>>>>> CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
>>>>> CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
>>>>>
>>>>> After the installation, I configure the network like this:
>>>>>
>>>>> neutron router-create router
>>>>> neutron net-create private
>>>>> neutron subnet-create private 10.42.200.0/24 --name private-subnet
>>>>> neutron router-interface-add router private-subnet
>>>>> neutron net-create public --router:external=True
>>>>> neutron subnet-create public 10.42.100.0/24 --name public-subnet --enable_dhcp=False --allocation-pool start=10.42.100.100,end=10.42.100.200 --no-gateway
>>>>> neutron router-gateway-set router public
>>>>>
>>>>> I'm able to launch instances but I can't get access (ping/ssh) to
>>>>> them.
>>>>>
>>>>> I don't know if I'm doing something wrong, starting from planning.
>>>>>
>>>>> Please, help me!
>>>>>
>>>>> _______________________________________________
>>>>> Rdo-list mailing list
>>>>> Rdo-list at redhat.com
>>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>>>
>>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>>>
>>>> --
>>>> Pasquale Salza
>>>>
>>>> e-mail: pasquale.salza at gmail.com
>>>> phone: +39 393 4415978
>>>> fax: +39 089 8422939
>>>> skype: pasquale.salza
>>>> linkedin: http://it.linkedin.com/in/psalza/
>>>>
>>>> _______________________________________________
>>>> Rdo-list mailing list
>>>> Rdo-list at redhat.com
>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>>
>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>>
>>> Those look like the iptables rules on the hypervisor. Rhys is
>>> talking about the Neutron security group rules. By default, ssh
>>> into VMs is not allowed. You need to permit ICMP and SSH in the
>>> security rules on the neutron network.
>>>
>>> I don't see anything wrong with your network architecture at first
>>> glance, but floating IPs can be tricky at first. Start with basic
>>> VM-to-VM connectivity and add on from there.
>>>
>>> Good luck!
>>>
>>> _______________________________________________
>>> Rdo-list mailing list
>>> Rdo-list at redhat.com
>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>
>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>
>> That sounds like it should work, but one of those 6 IP addresses
>> will need to be used for the Neutron router (that IP will be used
>> for SNAT for VMs that have no floating IP).
>>
>> I'm not sure what you mean when you say "I'd like to reserve 6 IPs
>> for 6 VMs I could instantiate on OpenStack." You can instantiate
>> more than one VM on each compute node, and if you have 6 compute
>> nodes then depending on size you could have dozens of VMs. Maybe you
>> just mean you could instantiate 6 VMs with public IPs? Actually, due
>> to the router IP, you would be limited to 5.
>>
>> Make sure you add the floating IP network as an external net. Since
>> your router will not be taking the .1 address, you will need to
>> create the port by hand with the chosen IP and add it to the router:
>>
>> $ neutron net-create externalnet -- --router:external=True
>> $ neutron subnet-create externalnet 172.16.58.0/24 --name external \
>>   --enable_dhcp=False --allocation_pool start=172.16.58.x,end=172.16.58.x \
>>   --gateway 172.16.58.x
>>   (use your network gateway here; change the IP addresses in the
>>   allocation range to match what is available on your network)
>> $ neutron router-create extrouter (name of your router)
>> $ neutron port-create externalnet --fixed-ip 172.16.58.x (use desired router IP)
>> $ neutron router-interface-add extrouter port=$portid (port id from previous command)
>> $ neutron router-interface-add extrouter subnet=public (replace public
>>   with the name of the 192.168.0.0/24 network)
>>
>> Once that is done, you should be able to assign a floating IP to any
>> VM that has an interface on the 192.168.0.0/24 network.
>>
>> P.S. - Several times in your email you mentioned 192.168.0.0/16, but
>> that's not a valid network. I assume you mean 192.168.0.0/24.
>>
>> That depends what you are trying to do. There are plenty of reasons
>> why it might not work at first. You may need to troubleshoot.
>>
>> One issue that might come up is that you will be doing multiple
>> levels of NAT. Some protocols won't work with multiple layers of
>> translation.
>>
>> If your goal is to eventually make these VMs reachable from the
>> Internet, there are a lot of factors in play above the OpenStack
>> cloud.
>
> No, the external network is only attached to the Neutron network
> node. The public IP actually lives on the l3agent, which runs the
> router you created and attached to that network. When traffic goes
> back and forth from outside, the l3agent does source NAT and swaps
> the public IP with the VM IP. The controller isn't actually attached
> to the external network.
>
> In general, the only IPs in use on the External network are the IP
> you assign to the router attached to the External network, the
> upstream gateway router, and the floating IPs handled by Neutron.
>
> If a VM doesn't have a floating IP, the Neutron router will use its
> own IP address for the NAT. That Internet access is outbound-only.
>
> --
> Dan Sneddon | Principal OpenStack Engineer
> dsneddon at redhat.com | redhat.com/openstack
> 650.254.4025 | @dxs on twitter

--
Pasquale Salza

e-mail: pasquale.salza at gmail.com
phone: +39 393 4415978
fax: +39 089 8422939
skype: pasquale.salza
linkedin: http://it.linkedin.com/in/psalza/
-------------- next part --------------
An HTML attachment was scrubbed...
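[Editor's note: the recurring advice in the thread above is to open ICMP and tcp/22 in the Neutron security group, not just in host iptables. A minimal sketch of that fix follows. It is not from the thread: the group name "default" and the DRY_RUN wrapper (which prints the commands instead of running them) are assumptions for illustration.]

```shell
#!/usr/bin/env bash
set -eu

# Sketch: allow ping and ssh into instances via Neutron security-group
# rules. With DRY_RUN=1 (the default here) the neutron commands are only
# printed, so the script can be reviewed before touching a real cloud.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "$*"          # dry run: show the command
    else
        "$@"               # real run: invoke neutron
    fi
}

open_vm_access() {
    local secgroup="${1:-default}"   # assumed security-group name
    # Allow ICMP (ping) from anywhere into the group
    run neutron security-group-rule-create --direction ingress \
        --protocol icmp "$secgroup"
    # Allow ssh (tcp/22) from anywhere into the group
    run neutron security-group-rule-create --direction ingress \
        --protocol tcp --port_range_min 22 --port_range_max 22 "$secgroup"
}

open_vm_access default
```

Run with DRY_RUN=0 on a node with admin credentials sourced to apply the rules for real.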
URL: From hguemar at fedoraproject.org Mon Feb 23 15:00:02 2015 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 23 Feb 2015 15:00:02 +0000 (UTC) Subject: [Rdo-list] [Fedocal] Reminder meeting : RDO packaging meeting Message-ID: <20150223150002.EAE2E6052AF7@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO packaging meeting on 2015-02-25 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO packaging irc meeting ([agenda](https://etherpad.openstack.org/p/RDO-Packaging)) Every week on #rdo on freenode Source: https://apps.fedoraproject.org//calendar//meeting/2017/ From kfiresmith at gmail.com Mon Feb 23 16:37:06 2015 From: kfiresmith at gmail.com (Kodiak Firesmith) Date: Mon, 23 Feb 2015 11:37:06 -0500 Subject: [Rdo-list] Oslo heartbeat timeout *only* for Windows guests Message-ID: Folks, I can launch Cirros and RHEL cloud images just fine, but when I try to launch a Windows server 2012 image I created following the image guide exactly, the instantiation fails out like so: http://paste.openstack.org/show/180646/ Seems odd for it to raise an oslo heartbeat timeout - I'm not getting any other timeouts on the management network whatsoever. Anyone see this before? Thanks! - Kodiak -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at rhcloud.com Mon Feb 23 19:21:56 2015 From: no-reply at rhcloud.com (address not configured yet) Date: Mon, 23 Feb 2015 14:21:56 -0500 (EST) Subject: [Rdo-list] khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 49 - Fixed! 
In-Reply-To: <2663997.23.1424632266684.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> References: <2663997.23.1424632266684.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> Message-ID: <23522472.25.1424719316562.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 49 - Fixed: Check console output at https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal/49/ to view the results. From apevec at gmail.com Mon Feb 23 21:07:51 2015 From: apevec at gmail.com (Alan Pevec) Date: Mon, 23 Feb 2015 22:07:51 +0100 Subject: [Rdo-list] [openstack-packstack] Initial Kilo release In-Reply-To: <20150223144524.D362E1149014A@pkgs02.phx2.fedoraproject.org> References: <20150223144524.D362E1149014A@pkgs02.phx2.fedoraproject.org> Message-ID: 2015-02-23 15:45 GMT+01:00 Martin M?gr : > commit bcd5832875dd401029aa6bab0c73a5cd205986d8 > Author: Martin M?gr > Date: Mon Feb 23 15:45:09 2015 +0100 > > Initial Kilo release The obviously badly communicated idea was to develop packaging for Kilo in Delorean repos in case of Packstack that's https://github.com/openstack-packages/packstack/tree/f20-master (ignore misnamed branch name, it will be renamed to generic rpm-master) Benefit is that you automatically get packages for each upstream commit so you can make adjustment more rapidly and import to Rawhide only after Kilo-3 milestone when upstream starts to stabilize. 
Cheers, Alan From Yaniv.Kaul at emc.com Mon Feb 23 22:36:18 2015 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Mon, 23 Feb 2015 17:36:18 -0500 Subject: [Rdo-list] [openstack-packstack] Initial Kilo release In-Reply-To: References: <20150223144524.D362E1149014A@pkgs02.phx2.fedoraproject.org> Message-ID: <648473255763364B961A02AC3BE1060D03CA018605@MX19A.corp.emc.com> Anything is better than Devstack... > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On > Behalf Of Alan Pevec > Sent: Monday, February 23, 2015 11:08 PM > To: Martin M?gr > Cc: Rdo-list at redhat.com > Subject: Re: [Rdo-list] [openstack-packstack] Initial Kilo release > > 2015-02-23 15:45 GMT+01:00 Martin M?gr : > > commit bcd5832875dd401029aa6bab0c73a5cd205986d8 > > Author: Martin M?gr > > Date: Mon Feb 23 15:45:09 2015 +0100 > > > > Initial Kilo release > > The obviously badly communicated idea was to develop packaging for Kilo in > Delorean repos in case of Packstack that's https://github.com/openstack- > packages/packstack/tree/f20-master > (ignore misnamed branch name, it will be renamed to generic rpm-master) > Benefit is that you automatically get packages for each upstream commit so > you can make adjustment more rapidly and import to Rawhide only after Kilo-3 > milestone when upstream starts to stabilize. 
> > Cheers, > Alan > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From kchamart at redhat.com Tue Feb 24 08:23:01 2015 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 24 Feb 2015 09:23:01 +0100 Subject: [Rdo-list] [openstack-packstack] Initial Kilo release In-Reply-To: <648473255763364B961A02AC3BE1060D03CA018605@MX19A.corp.emc.com> References: <20150223144524.D362E1149014A@pkgs02.phx2.fedoraproject.org> <648473255763364B961A02AC3BE1060D03CA018605@MX19A.corp.emc.com> Message-ID: <20150224082301.GM30296@tesla.redhat.com> On Mon, Feb 23, 2015 at 05:36:18PM -0500, Kaul, Yaniv wrote: > Anything is better than Devstack... Hmm, most (80%) of my test environment is via DevStack and I find it a huge time saver. Probably I just got used to it, I find it extremely quick (after the first run) to setup/tear-down environments -- just about 5 minutes or less. I know people running multi-node DevStack environments for rapid testing as well. 
:-) -- /kashyap From Yaniv.Kaul at emc.com Tue Feb 24 08:52:18 2015 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Tue, 24 Feb 2015 03:52:18 -0500 Subject: [Rdo-list] [openstack-packstack] Initial Kilo release In-Reply-To: <20150224082301.GM30296@tesla.redhat.com> References: <20150223144524.D362E1149014A@pkgs02.phx2.fedoraproject.org> <648473255763364B961A02AC3BE1060D03CA018605@MX19A.corp.emc.com> <20150224082301.GM30296@tesla.redhat.com> Message-ID: <648473255763364B961A02AC3BE1060D03CA018641@MX19A.corp.emc.com> > -----Original Message----- > From: Kashyap Chamarthy [mailto:kchamart at redhat.com] > Sent: Tuesday, February 24, 2015 10:23 AM > To: Kaul, Yaniv > Cc: Alan Pevec; Martin M?gr; Rdo-list at redhat.com > Subject: Re: [Rdo-list] [openstack-packstack] Initial Kilo release > > On Mon, Feb 23, 2015 at 05:36:18PM -0500, Kaul, Yaniv wrote: > > Anything is better than Devstack... > > Hmm, most (80%) of my test environment is via DevStack and I find it a huge > time saver. Probably I just got used to it, I find it extremely quick (after the first > run) to setup/tear-down environments -- just about 5 minutes or less. I know > people running multi-node DevStack environments for rapid testing as well. :-) > > -- > /Kashyap By definition, pulling every single component from its latest greatest upstream means it cannot be stable. In my specific case, it failed on heat - which I don't care about and is not very useful to my work. I've tried disabling it (by adding 'disable_service heat h-api h-api-cfn h-api-cw h-eng' to my localrc) and then things broke even worse. Re-trying... 
Here's my local.conf: [[local|localrc]] ADMIN_PASSWORD=123456 DATABASE_PASSWORD=$ADMIN_PASSWORD RABBIT_PASSWORD=$ADMIN_PASSWORD SERVICE_PASSWORD=$ADMIN_PASSWORD SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d54 #FIXED_RANGE=172.31.1.0/24 #FLOATING_RANGE=192.168.20.0/25 HOST_IP=10.103.233.161 CINDER_ENABLED_BACKENDS=xio_gold:xtremio_1 [[post-config|$CINDER_CONF]] [DEFAULT] rpc_response_timeout=600 service_down_time=600 volume_name_template = CI-%s enabled_backends=xtremio_1 default_volume_type=xtremio_1 [xtremio_1] volume_driver=cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver san_ip=vxms-xbrickdrm168 san_login=admin san_password=admin volume_backend_name = xtremio_1 Y. From kchamart at redhat.com Tue Feb 24 09:12:06 2015 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 24 Feb 2015 10:12:06 +0100 Subject: [Rdo-list] [openstack-packstack] Initial Kilo release In-Reply-To: <648473255763364B961A02AC3BE1060D03CA018641@MX19A.corp.emc.com> References: <20150223144524.D362E1149014A@pkgs02.phx2.fedoraproject.org> <648473255763364B961A02AC3BE1060D03CA018605@MX19A.corp.emc.com> <20150224082301.GM30296@tesla.redhat.com> <648473255763364B961A02AC3BE1060D03CA018641@MX19A.corp.emc.com> Message-ID: <20150224091206.GN30296@tesla.redhat.com> On Tue, Feb 24, 2015 at 03:52:18AM -0500, Kaul, Yaniv wrote: > > -----Original Message----- > > From: Kashyap Chamarthy [mailto:kchamart at redhat.com] > > Sent: Tuesday, February 24, 2015 10:23 AM > > To: Kaul, Yaniv > > Cc: Alan Pevec; Martin M?gr; Rdo-list at redhat.com > > Subject: Re: [Rdo-list] [openstack-packstack] Initial Kilo release > > > > On Mon, Feb 23, 2015 at 05:36:18PM -0500, Kaul, Yaniv wrote: > > > Anything is better than Devstack... > > > > Hmm, most (80%) of my test environment is via DevStack and I find it a huge > > time saver. Probably I just got used to it, I find it extremely quick (after the first > > run) to setup/tear-down environments -- just about 5 minutes or less. 
I know > > people running multi-node DevStack environments for rapid testing as well. :-) > > > > -- > > /Kashyap > > By definition, pulling every single component from its latest greatest > upstream means it cannot be stable. To avoid that, you can check out stable release of DevStack, which will inturn use only stable branches of the other OpenStack projects DevStack>$ git checkout remotes/origin/stable/juno > In my specific case, it failed on heat - which I don't care about and > is not very useful to my work. Also, to avoid issues like that the 'ENABLED_SERVICES' bit in local.conf is important. You can just add the components that you use, it's been rock-solid for me that way. I use only components that I care about and absolutely nothing else -- Nova, Glance, Neutron and Keystone. ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n-cond,mysql,rabbit,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta [NOTE: I also disable the 'n-cert' (Nova cert) service - which is only for EC2, and is slated to be removed upstream.] Since you care about Cinder too, so you can just add Cinder specific services in the ENABLED_SERVICES along with the above. That's the localrc conf file I use: https://kashyapc.fedorapeople.org/virt/openstack/2-minimal_devstack_localrc.conf Added benefit with the above config for me is also smaller footprint inside DevStack VM (with a single Nova instance running, I have about 1.3 GB of mem usage) https://kashyapc.fedorapeople.org/virt/openstack/heuristics/Memory-profiling-inside-DevStack.txt You can compare what you see in your env by running the same $ ps_mem (as root). To install: $ yum install ps_mem > I've tried disabling it (by adding 'disable_service heat h-api h-api-cfn h-api-cw h-eng' to my localrc) and then things broke even worse. > Re-trying... 
> > Here's my local.conf: > [[local|localrc]] > ADMIN_PASSWORD=123456 > DATABASE_PASSWORD=$ADMIN_PASSWORD > RABBIT_PASSWORD=$ADMIN_PASSWORD > SERVICE_PASSWORD=$ADMIN_PASSWORD > SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d54 > #FIXED_RANGE=172.31.1.0/24 > #FLOATING_RANGE=192.168.20.0/25 > HOST_IP=10.103.233.161 > CINDER_ENABLED_BACKENDS=xio_gold:xtremio_1 > > [[post-config|$CINDER_CONF]] > [DEFAULT] > rpc_response_timeout=600 > service_down_time=600 > volume_name_template = CI-%s > enabled_backends=xtremio_1 > default_volume_type=xtremio_1 > [xtremio_1] > volume_driver=cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver > san_ip=vxms-xbrickdrm168 > san_login=admin > san_password=admin > volume_backend_name = xtremio_1 > > > Y. -- /kashyap From mmagr at redhat.com Tue Feb 24 09:30:31 2015 From: mmagr at redhat.com (=?UTF-8?B?TWFydGluIE3DoWdy?=) Date: Tue, 24 Feb 2015 10:30:31 +0100 Subject: [Rdo-list] [openstack-packstack] Initial Kilo release In-Reply-To: References: <20150223144524.D362E1149014A@pkgs02.phx2.fedoraproject.org> Message-ID: <54EC44B7.90203@redhat.com> On 02/23/2015 10:07 PM, Alan Pevec wrote: > 2015-02-23 15:45 GMT+01:00 Martin M?gr : >> commit bcd5832875dd401029aa6bab0c73a5cd205986d8 >> Author: Martin M?gr >> Date: Mon Feb 23 15:45:09 2015 +0100 >> >> Initial Kilo release > The obviously badly communicated idea was to develop packaging for > Kilo in Delorean repos > in case of Packstack that's > https://github.com/openstack-packages/packstack/tree/f20-master > (ignore misnamed branch name, it will be renamed to generic rpm-master) > Benefit is that you automatically get packages for each upstream > commit so you can make adjustment more rapidly and import to Rawhide > only after Kilo-3 milestone when upstream starts to stabilize. 
> > Cheers, > Alan There is already f22 branch in koji: [para at sanitarium openstack-packstack]$ git branch -r origin/HEAD -> origin/master origin/el6 origin/el6-grizzly origin/el6-havana origin/el6-icehouse origin/f17 origin/f18 origin/f19 origin/f20 origin/f21 origin/f22 origin/icehouse-epel7 origin/master [para at sanitarium openstack-packstack]$ So having regular Kilo packstack build won't harm anything ... please correct me if I'm wrong. Regards, Martin -- Martin M?gr Openstack Red Hat Czech IRC nick: mmagr / para Internal channels: #brno, #packstack, #rhos-dev, #rhos-users Freenode channels: #openstack-dev, #packstack-dev, #puppet-openstack, #rdo From mohammed.arafa at gmail.com Fri Feb 20 22:12:44 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Fri, 20 Feb 2015 17:12:44 -0500 Subject: [Rdo-list] I can't get access to VM instances In-Reply-To: References: <5000D3CF-6FC2-4A52-B1FC-76BC8843F540@redhat.com> Message-ID: taken from https://github.com/marafa/openstack/blob/master/openstack-project-add.sh write_security_rules(){ echo "todo: use neutron secgroup to add ssh and ping rules instead of nova" source $ks_dir/keystonerc_$user$id nova keypair-add key$id > $ks_dir/key$id.pem chmod 600 $ks_dir/key$id.pem nova secgroup-create SecGrp$id "Security Group $id" nova secgroup-add-rule SecGrp$id tcp 22 22 0.0.0.0/0 neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 1 --port_range_max 65535 SecGrp$id neutron security-group-rule-create --direction ingress --protocol udp --port_range_min 1 --port_range_max 65535 SecGrp$id neutron security-group-rule-create --direction ingress --protocol icmp SecGrp$id } On Fri, Feb 20, 2015 at 5:07 PM, Pasquale Salza wrote: > Hi Rhys, > I suppose so, because these are my iptables rules: > > iptables -F > iptables -t nat -F > iptables -P INPUT ACCEPT > iptables -P OUTPUT ACCEPT > iptables -P FORWARD ACCEPT > iptables -A INPUT -d 172.16.58.0/24 -m state --state ESTABLISHED,RELATED > -j 
ACCEPT > iptables -A INPUT -d 172.16.58.0/24 -p tcp --dport ssh -j ACCEPT > iptables -A INPUT -d 172.16.58.0/24 -p tcp --dport www -j ACCEPT > iptables -A INPUT -d 172.16.58.0/24 -p tcp --dport pptp -j ACCEPT > iptables -A INPUT -d 172.16.58.0/24 -p tcp --sport domain -j ACCEPT > iptables -A INPUT -d 172.16.58.0/24 -p tcp --dport domain -j ACCEPT > iptables -A INPUT -d 172.16.58.0/24 -p udp --sport domain -j ACCEPT > iptables -A INPUT -d 172.16.58.0/24 -p udp --dport domain -j ACCEPT > iptables -A INPUT -d 172.16.58.0/24 -p gre -j ACCEPT > iptables -A INPUT -d 172.16.58.0/24 -p icmp -j ACCEPT > iptables -A INPUT -d 172.16.58.0/24 -j DROP > iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE > service iptables save > > Firstly, do you think I planned the network organisation well? Do you have > other suggestion (best practices) with 2 interfaces? > > > 2015-02-20 18:30 GMT+01:00 Rhys Oxenham : > >> Hi Pasquale, >> >> Did you modify your security group rules to allow ICMP and/or 22:tcp >> access? >> >> Many thanks >> Rhys >> >> > On 20 Feb 2015, at 17:11, Pasquale Salza >> wrote: >> > >> > Hi there, I have a lot of problems with RDO/OpenStack configuration. >> Firstly, I need to describe my network situation. >> > >> > I have 7 machine, each of them with 2 NIC. I would like to use one >> machine as a controller/network node and the others as compute nodes. >> > >> > I would like to use the eth0 to connect nodes to internet (and get >> access by remote sessions) with the network "172.16.58.0/24", in which I >> have just 7 available IPs, and eth1 as configuration network on the network >> 10.42.100.0/42. 
>> > >> > This is my current configuration, for each node (varying the IPs on >> each machine): >> > >> > eth0: >> > DEVICE=eth0 >> > TYPE=Ethernet >> > ONBOOT=yes >> > BOOTPROTO=static >> > IPADDR=172.16.58.50 >> > NETMASK=255.255.255.0 >> > GATEWAY=172.16.58.254 >> > DNS1=172.16.58.50 >> > DOMAIN=### >> > DEFROUTE="yes" >> > >> > eth1: >> > DEVICE=eth1 >> > TYPE=OVSPort >> > DEVICETYPE=ovs >> > OVS_BRIDGE=br-ex >> > ONBOOT=yes >> > >> > br-ex: >> > DEVICE=br-ex >> > DEVICETYPE=ovs >> > TYPE=OVSBridge >> > BOOTPROTO=static >> > IPADDR=10.42.100.1 >> > NETMASK=255.255.255.0 >> > ONBOOT=yes >> > >> > I'd like to have instances on 10.42.200.0/24 virtual private network >> and the remaining IPs of 10.42.100.0/24 network as floating IPs. >> > >> > These are the relevant parts of my answers.txt file: >> > >> > CONFIG_CONTROLLER_HOST=10.42.100.1 >> > >> CONFIG_COMPUTE_HOSTS=10.42.100.10,10.42.100.11,10.42.100.12,10.42.100.13,10.42.100.14,10.42.100.15 >> > CONFIG_NETWORK_HOSTS=10.42.100.1 >> > CONFIG_AMQP_HOST=10.42.100.1 >> > CONFIG_MARIADB_HOST=10.42.100.1 >> > CONFIG_NOVA_COMPUTE_PRIVIF=eth1 >> > CONFIG_NOVA_NETWORK_PUBIF=eth1 >> > CONFIG_NOVA_NETWORK_PRIVIF=eth1 >> > CONFIG_NOVA_NETWORK_FIXEDRANGE=10.42.200.0/24 >> > CONFIG_NOVA_NETWORK_FLOATRANGE=10.42.100.0/24 >> > CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex >> > CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan >> > CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan >> > CONFIG_NEUTRON_ML2_VNI_RANGES=10:100 >> > CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS= >> > CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS= >> > CONFIG_NEUTRON_OVS_BRIDGE_IFACES= >> > CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1 >> > >> > After the installation, I configure the network like this: >> > >> > neutron router-create router >> > neutron net-create private >> > neutron subnet-create private 10.42.200.0/24 --name private-subnet >> > neutron router-interface-add router private-subnet >> > neutron net-create public --router:external=True >> > neutron subnet-create public 10.42.100.0/24 --name 
public-subnet >> --enable_dhcp=False --allocation-pool start=10.42.100.100,end=10.42.100.200 >> --no-gateway >> > neutron router-gateway-set router public >> > >> > I'm able to launch instances but I can't get access (ping/ssh) to them. >> > >> > I don't know if I'm doing something wrong starting from planning. >> > >> > Please, help me! >> > >> > _______________________________________________ >> > Rdo-list mailing list >> > Rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> > > > -- > Pasquale Salza > > e-mail: pasquale.salza at gmail.com > phone: +39 393 4415978 > fax: +39 089 8422939 > skype: pasquale.salza > linkedin: http://it.linkedin.com/in/psalza/ > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Sat Feb 21 02:42:44 2015 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Fri, 20 Feb 2015 21:42:44 -0500 Subject: [Rdo-list] staypuft - unable to install on centos6 Message-ID: so i am experimenting with staypuft trying to learn how to install it i have set up a VM with the latest centos6 and then added the foreman repository and installed the staypuft-installer it then fails but doesnt give me an error output - there is nothing in the logs. whatsoever i have are my inputs. 
pls advise on how to get staypuft installed yum -y install http://yum.theforeman.org/releases/latest/el6/x86_64/foreman-release.rpm yum -y install foreman-installer-staypuft staypuft-installer --foreman-plugin-discovery-install-images=true ifconfig #confirm ip vi /etc/hosts #make an entry for this host hostname -f #verify staypuft-installer --foreman-plugin-discovery-install-images=true cat /var/log/foreman-installer/foreman-installer.log |grep -i error ___ [root at staypuft ~]# hostname -f staypuft.marafa.vm [root at staypuft ~]# staypuft-installer --foreman-plugin-discovery-install-images=true Networking setup: Network interface: 'eth0' IP address: '10.0.1.2' Network mask: '255.255.255.0' Network address: '10.0.1.0' Host Gateway: '10.0.1.1' DHCP range start: '10.0.1.3' DHCP range end: '10.0.1.254' DHCP Gateway: '10.0.1.2' DNS forwarder: '8.8.8.7' Domain: 'marafa.vm' Foreman URL: 'https://staypuft.marafa.vm' NTP sync host: '1.centos.pool.ntp.org' Timezone: 'UTC' Configure networking on this machine: ? Configure firewall on this machine: ? The installer can configure the networking and firewall rules on this machine with the above configuration. Default values are populated from the this machine's existing networking configuration. If you DO NOT want to configure networking please set 'Configure networking on this machine' to No before proceeding. Do this by selecting option 'Do not configure networking' from the list below. How would you like to proceed?: 1. Proceed with the above values 2. Change Network interface 3. Change IP address 4. Change Network mask 5. Change Network address 6. Change Host Gateway 7. Change DHCP range start 8. Change DHCP range end 9. Change DHCP Gateway 10. Change DNS forwarder 11. Change Domain 12. Change Foreman URL 13. Change NTP sync host 14. Change Timezone 15. Do not configure networking 16. Do not configure firewall 17. 
Cancel Installation 1 Configure client authentication SSH public key: '' Root password: '*******************************************' Please set a default root password for newly provisioned machines. If you choose not to set a password, it will be generated randomly. The password must be a minimum of 8 characters. You can also set a public ssh key which will be deployed to newly provisioned machines. How would you like to proceed?: 1. Proceed with the above values 2. Change SSH public key 3. Change Root password 4. Toggle Root password visibility 3 new value for root password ******** enter new root password again to confirm ******** Configure client authentication SSH public key: '' Root password: '********' Please set a default root password for newly provisioned machines. If you choose not to set a password, it will be generated randomly. The password must be a minimum of 8 characters. You can also set a public ssh key which will be deployed to newly provisioned machines. How would you like to proceed?: 1. Proceed with the above values 2. Change SSH public key 3. Change Root password 4. Toggle Root password visibility 4 Configure client authentication SSH public key: '' Root password: 'password' Please set a default root password for newly provisioned machines. If you choose not to set a password, it will be generated randomly. The password must be a minimum of 8 characters. You can also set a public ssh key which will be deployed to newly provisioned machines. How would you like to proceed?: 1. Proceed with the above values 2. Change SSH public key 3. Change Root password 4. Toggle Root password visibility 1 Starting networking setup Networking setup has finished Preparing installation Done Not running provisioning configuration since installation encountered errors, exit code was 1 Something went wrong! 
Check the log for ERROR-level output * Foreman is running at https://staypuft.marafa.vm Initial credentials are admin / ZZVBfQ3WLAwnpHJH * Foreman Proxy is running at https://staypuft.marafa.vm:8443 * Puppetmaster is running at port 8140 The full log is at /var/log/foreman-installer/foreman-installer.log Something went wrong! Check the log for ERROR-level output The full log is at /var/log/foreman-installer/foreman-installer.log [root at staypuft ~]# cat /var/log/foreman-installer/foreman-installer.log |grep -i error [root at staypuft ~]# -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdo-info at redhat.com Tue Feb 24 17:29:10 2015 From: rdo-info at redhat.com (RDO Forum) Date: Tue, 24 Feb 2015 17:29:10 +0000 Subject: [Rdo-list] [RDO] Blog roundup, February 24 2015 Message-ID: <0000014bbca2a409-3c10e728-3f24-4076-8bb9-b0a88ee66794-000000@email.amazonses.com> rbowen started a discussion. Blog roundup, February 24 2015 --- Follow the link below to check it out: https://openstack.redhat.com/forum/discussion/1003/blog-roundup-february-24-2015 Have a great day! From apevec at gmail.com Tue Feb 24 17:58:16 2015 From: apevec at gmail.com (Alan Pevec) Date: Tue, 24 Feb 2015 18:58:16 +0100 Subject: [Rdo-list] [openstack-packstack] Initial Kilo release In-Reply-To: <54EC44B7.90203@redhat.com> References: <20150223144524.D362E1149014A@pkgs02.phx2.fedoraproject.org> <54EC44B7.90203@redhat.com> Message-ID: > So having regular Kilo packstack build won't harm anything ... please > correct me if I'm wrong. You're correct, it won't harm anything. Just that updates could've been done early, even before f22 was branched, in github/openstack-packages/packstack and we'd probably have working Packstack for Kilo already. 
Proposed packaging flow for openstack packages is: Delorean rpm-master -> cherry pick to Rawhide when next Fedora is branched -> RDO builds (Fedora in Koji, EL in CBS[*] Cloud SIG) from the same spec in RC phase i.e. Delorean is to be treated as packaging upstream Cheers, Alan [*] http://wiki.centos.org/HowTos/CommunityBuildSystem From rdo-info at redhat.com Tue Feb 24 18:41:37 2015 From: rdo-info at redhat.com (RDO Forum) Date: Tue, 24 Feb 2015 18:41:37 +0000 Subject: [Rdo-list] [RDO] How to build RedHat 100 node Openstack lab on AWS Message-ID: <0000014bbce4fa71-102ad5cf-c691-410a-b042-1e85e0ddba95-000000@email.amazonses.com> rbowen started a discussion. How to build RedHat 100 node Openstack lab on AWS --- Follow the link below to check it out: https://openstack.redhat.com/forum/discussion/1004/how-to-build-redhat-100-node-openstack-lab-on-aws Have a great day! From rbowen at redhat.com Tue Feb 24 19:38:02 2015 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 24 Feb 2015 14:38:02 -0500 Subject: [Rdo-list] RDO/OpenStack meetups coming up (February 23, 2015) Message-ID: <54ECD31A.9060205@redhat.com> The following are the meetups I'm aware of in the coming week where RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://openstack.redhat.com/Events If there's a meetup in your area, please consider attending. It's the best way to find out what interesting things are going on in the larger community, and a great way to make contacts that will help you solve your own problems in the future. 
--Rich * Tuesday, February 24 in Mountain View, CA, US: Online Meetup: Application Orchestration on VMware Integrated OpenStack w/ TOSCA - http://www.meetup.com/Cloud-Online-Meetup/events/220273066/ * Tuesday, February 24 in Durham, NC, US: [ONLINE ONLY] February Meetup: How to get the most out of Cinder Block Storage - http://www.meetup.com/Triangle-OpenStack-Meetup/events/220417856/ * Tuesday, February 24 in Toronto, ON, CA: February! OpenStack Networking with Pluribus and Keystone 101 - http://www.meetup.com/OpenStackTO/events/220337066/ * Tuesday, February 24 in Bielefeld, DE: 4. OpenStack Cloud Computing Stammtisch OWL - http://www.meetup.com/OpenStack-Cloud-Computing-Stammtisch-OWL/events/220322422/ * Tuesday, February 24 in Tel Aviv-Yafo, IL: Online Meetup: Application Orchestration on VMware Integrated OpenStack w/ TOSCA - http://www.meetup.com/IGTCloud/events/220498519/ * Tuesday, February 24 in Atlanta, GA, US: Dive into OpenShift! - http://www.meetup.com/Atlanta-Red-Hat-User-Group/events/220040380/ * Wednesday, February 25 in Charleston, SC, US: Application Performance Management (APM) w/ AppDynamics & OpenShift - http://www.meetup.com/Charleston-Red-Hat-User-Group/events/220558937/ * Wednesday, February 25 in Mountain View, CA, US: What's New in GlusterFS - http://www.meetup.com/GlusterFS-Silicon-Valley/events/215281682/ * Thursday, February 26 in Budapest, HU: OpenStack 2015 feb - http://www.meetup.com/OpenStack-Hungary-Meetup-Group/events/220145750/ * Thursday, February 26 in Palo Alto, CA, US: Scaling OpenStack (100 nodes) using lab on AWS - http://www.meetup.com/SF-Bay-Area-Systems-Engineers-meetup/events/220518045/ * Thursday, February 26 in Littleton, CO, US: Discuss and Learn about OpenStack - http://www.meetup.com/OpenStack-Denver/events/220437297/ * Thursday, February 26 in Henrico, VA, US: OpenStack Richmond Meetup #2 - http://www.meetup.com/OpenStack-Richmond/events/219940260/ * Thursday, February 26 in Los Angeles, CA, US: Monthly Data 
Center Tour and Discussion on Hybrid Environments - http://www.meetup.com/LA-OC-Data-Center-Tour-Public-Private-Cloud-Discussions/events/219154709/ * Thursday, February 26 in Herriman, UT, US: Using SaltStack with OpenStack - http://www.meetup.com/openstack-utah/events/220158218/ * Friday, February 27 in Versailles, FR: Hackaton - Eureka - http://www.meetup.com/Versailles-Cloud-based-Social-Media-Meetup/events/219826836/ * Saturday, February 28 in Bangalore, IN: GlusterFS Bangalore Meetup - http://www.meetup.com/glusterfs-India/events/220384138/ * Monday, March 02 in Seattle, WA, US: OpenStack Seattle Meetup: Neutron: Past, Present and Future - http://www.meetup.com/OpenStack-Seattle/events/198405862/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From no-reply at rhcloud.com Tue Feb 24 20:45:50 2015 From: no-reply at rhcloud.com (address not configured yet) Date: Tue, 24 Feb 2015 15:45:50 -0500 (EST) Subject: [Rdo-list] khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 50 - Failure! Message-ID: <2015613.27.1424810751294.JavaMail.524ee18d4382ec1886000084@ex-med-node17.prod.rhcloud.com> khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal - Build # 50 - Failure: Check console output at https://prod-rdojenkins.rhcloud.com/job/khaleesi-rdo-juno-production-fedora-21-aio-packstack-neutron-ml2-vxlan-rabbitmq-tempest-rpm-minimal/50/ to view the results. 
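[The "Delorean rpm-master -> cherry pick to Rawhide" packaging flow Alan proposed earlier in this digest can be sketched with plain git. The repository below is a throwaway stand-in built in a temp directory, not the real openstack-packages distgit; branch and file names are illustrative only.]

```shell
#!/bin/sh
# Sketch of the flow: a packaging change lands on rpm-master first
# (Delorean is treated as packaging upstream), then gets cherry-picked
# to the branch the distro builds from. Everything here is simulated.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo

# rpm-master plays the role of the Delorean packaging upstream
git checkout -q -b rpm-master
printf 'Version: 2014.2\n' > packstack.spec
git add packstack.spec && git commit -qm "initial spec"

# the branch for the next Fedora is cut from it
git checkout -q -b rawhide

# meanwhile a fix lands on rpm-master...
git checkout -q rpm-master
printf 'Version: 2015.1\n' > packstack.spec
git commit -qam "bump to Kilo"
fix=$(git rev-parse HEAD)

# ...and is cherry-picked to the build branch from the same spec
git checkout -q rawhide
git cherry-pick "$fix" >/dev/null
grep Version packstack.spec   # prints: Version: 2015.1
```

The same cherry-pick step, pointed at the real distgit branches, is what "treating Delorean as packaging upstream" amounts to in practice.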
From Jan.van.Eldik at cern.ch Wed Feb 25 10:31:35 2015 From: Jan.van.Eldik at cern.ch (Jan van Eldik) Date: Wed, 25 Feb 2015 11:31:35 +0100 Subject: [Rdo-list] RDO bookmarks - Feedback requested In-Reply-To: <54E36A33.2020907@redhat.com> References: <54E36A33.2020907@redhat.com> Message-ID: <54EDA487.5000704@cern.ch> Hi Rich, all, > Anyways, I've put the latest version of the source file for the bookmark > at https://openstack.redhat.com/images/bookmark/rdo_bookmark.odt and > would appreciate any feedback from any of you who have opinions > regarding what should/should not be on there, what changes we need to > make, and how we can generally make it more useful to users of RDO and > OpenStack in general. What about replacing the nova, glance, keystone, cinder, ... commands by the equivalent "openstack" commands, so that "nova list" becomes "openstack server list" etc? If people agree that this is a good idea, I would be happy to compile the list of commands. cheers, Jan From madko77 at gmail.com Wed Feb 25 13:43:13 2015 From: madko77 at gmail.com (Madko) Date: Wed, 25 Feb 2015 13:43:13 +0000 Subject: [Rdo-list] failed to flow_del References: <54E31181.4050305@redhat.com> Message-ID: Hi, Red Hat told us that it's fixed with package openvswitch-2.1.2-2.el7_0.1. Any news on the RDO side? Is it fine to go to production with these warnings? Any ETA for when this package will be in the RDO repository? Best regards. On Tue Feb 17 2015 at 11:31:02, Madko wrote: > Thanks Ihar, I've just reported this as you suggested: > https://bugzilla.redhat.com/show_bug.cgi?id=1193429 > > > > On Tue Feb 17 2015 at 11:03:23, Ihar Hrachyshka wrote: > > -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA1 >> >> On 02/16/2015 03:39 PM, Madko wrote: >> > Hi, >> > >> > we have a lot of flood in the logs about "failed to flow_del". 
>> > >> > /var/log/openvswitch/ovs-vswitchd.log-20150215:2015-02-13T17 >> :43:54.555Z|00011|dpif(revalidator_7)|WARN|system at ovs-system: >> > >> > >> failed to flow_del (No such file or directory) >> > skb_priority(0),in_port(6),skb_mark(0),eth(src=00:1a:a0:28: >> ca:cc,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=4001 >> ,pcp=0),encap(eth_type(0x0806),arp(sip=10.156.29.184,tip=10. >> 156.20.110,op=1,sha=00:1a:a0:28:ca:cc,tha=00:00:00:00:00:00)) >> > >> > Seems ubuntu has same bug : >> > https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1408972 >> > >> > We are using kernel 3.10.0-123.20.1.el7.x86_64 and >> > openvswitch-2.1.2-2.el7.centos.1.x86_64 >> > >> > Is it a known bug on RDO? >> >> Brief search thru the web shows that it's a known issue and was fixed >> in OVS as of https://github.com/openvswitch/ovs/commit/3601bd879 that >> is included in 2.1.3. I guess the commit was not backported to EL7 >> package, hence the error. >> >> You can report a bug against RDO to track the backport and/or version >> bump for OVS. >> >> /Ihar >> -----BEGIN PGP SIGNATURE----- >> Version: GnuPG v1 >> >> iQEcBAEBAgAGBQJU4xF+AAoJEC5aWaUY1u57XcEH/jXVuW/rjkUe0Wiie5twK/sH >> 3gtAjsRctG9QQt3K4YqkzFGmZwgsyJ2GLn5tHRcPbg6cdFySWuSfjDnumiqi7fDN >> n7g4LmGyvbaYdf0JM295DYCGkTj2tgkJ4+uW/pbrQ2vG6itfLWvWKdbbRoEyFpiL >> PdaDZnHGFfIuvG5HQ1tK8DyKSUN/aUXSxQctXp81K+ltf71Ae+muH/WlWW3wvE2I >> EYr49oj/tKvp3qZDy/idCOEOoEEIedSxlXT3WzqyHn42RLFPUf4eOjQQ6RFNoPYx >> m+xfgKtAdFcvEV4/EuZn6UzY2vY9RtdchshCaLylU28GsDqybiC+PUzkDMjJK8M= >> =Mui0 >> -----END PGP SIGNATURE----- >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rbowen at redhat.com Wed Feb 25 15:54:01 2015 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 25 Feb 2015 10:54:01 -0500 Subject: [Rdo-list] RDO bookmarks - Feedback requested In-Reply-To: <54EDA487.5000704@cern.ch> References: <54E36A33.2020907@redhat.com> <54EDA487.5000704@cern.ch> Message-ID: <54EDF019.4050108@redhat.com> On 02/25/2015 05:31 AM, Jan van Eldik wrote: > Hi Rich, all, > >> Anyways, I've put the latest version of the source file for the bookmark >> at https://openstack.redhat.com/images/bookmark/rdo_bookmark.odt and >> would appreciate any feedback from any of you who have opinions >> regarding what should/should not be on there, what changes we need to >> make, and how we can generally make it more useful to users of RDO and >> OpenStack in general. > > What about replacing the nova, glance, keystone, cinder, ... commands > by the equivalent "openstack" commands, so that "nova list" becomes > "openstack server list" etc? > > If people agree taht this is a good idea, I would be happy to compile > the list of commands. That would be extremely helpful. Thanks. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From Yaniv.Kaul at emc.com Thu Feb 26 08:56:19 2015 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Thu, 26 Feb 2015 03:56:19 -0500 Subject: [Rdo-list] [openstack-packstack] Initial Kilo release In-Reply-To: <20150224091206.GN30296@tesla.redhat.com> References: <20150223144524.D362E1149014A@pkgs02.phx2.fedoraproject.org> <648473255763364B961A02AC3BE1060D03CA018605@MX19A.corp.emc.com> <20150224082301.GM30296@tesla.redhat.com> <648473255763364B961A02AC3BE1060D03CA018641@MX19A.corp.emc.com> <20150224091206.GN30296@tesla.redhat.com> Message-ID: <648473255763364B961A02AC3BE1060D03CA01892C@MX19A.corp.emc.com> Eventually made it working. For the benefit of all, here's how I've done it, include bizarre workarounds. 1. 
The Jenkins job is a shell script, essentially SSH'ing to the node and running a script. SSH="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o GlobalKnownHostsFile=/dev/null" ${SSH} root@${CONTROLLER} "/home/public/scripts/devstack.sh" echo "Running stack!" ${SSH} root@${CONTROLLER} "su - stack -c \"cd /opt/stack ; git config --global url.http://git.openstack.org/.insteadOf git://git.openstack.org/ ; ./stack.sh\"" ${SSH} root@${CONTROLLER} "firewall-cmd --add-service http" 2. The /home/public/scripts/devstack.sh script: firewall-cmd --add-service http || true git config --global url.http://git.openstack.org/.insteadOf git://git.openstack.org/ cd /opt git clone https://git.openstack.org/openstack-dev/devstack /opt/devstack/tools/create-stack-user.sh chown -R stack:stack /opt/devstack/ mv /opt/devstack /opt/stack cat << 'EOF' >> local.conf EOF sed -i 's/Defaults requiretty/#Defaults requiretty/' /etc/sudoers mv local.conf stack chown stack:stack stack/local.conf Y. > -----Original Message----- > From: Kashyap Chamarthy [mailto:kchamart at redhat.com] > Sent: Tuesday, February 24, 2015 11:12 AM > To: Kaul, Yaniv > Cc: Alan Pevec; Martin M?gr; Rdo-list at redhat.com > Subject: Re: [Rdo-list] [openstack-packstack] Initial Kilo release > > On Tue, Feb 24, 2015 at 03:52:18AM -0500, Kaul, Yaniv wrote: > > > -----Original Message----- > > > From: Kashyap Chamarthy [mailto:kchamart at redhat.com] > > > Sent: Tuesday, February 24, 2015 10:23 AM > > > To: Kaul, Yaniv > > > Cc: Alan Pevec; Martin M?gr; Rdo-list at redhat.com > > > Subject: Re: [Rdo-list] [openstack-packstack] Initial Kilo release > > > > > > On Mon, Feb 23, 2015 at 05:36:18PM -0500, Kaul, Yaniv wrote: > > > > Anything is better than Devstack... > > > > > > Hmm, most (80%) of my test environment is via DevStack and I find it > > > a huge time saver. 
Probably I just got used to it, I find it > > > extremely quick (after the first > > > run) to setup/tear-down environments -- just about 5 minutes or > > > less. I know people running multi-node DevStack environments for > > > rapid testing as well. :-) > > > > > > -- > > > /Kashyap > > > > By definition, pulling every single component from its latest greatest > > upstream means it cannot be stable. > > To avoid that, you can check out stable release of DevStack, which will inturn > use only stable branches of the other OpenStack projects > > DevStack>$ git checkout remotes/origin/stable/juno > > > In my specific case, it failed on heat - which I don't care about and > > is not very useful to my work. > > Also, to avoid issues like that the 'ENABLED_SERVICES' bit in local.conf is > important. > > You can just add the components that you use, it's been rock-solid for me that > way. I use only components that I care about and absolutely nothing else -- > Nova, Glance, Neutron and Keystone. > > ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n- > cond,mysql,rabbit,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta > > [NOTE: I also disable the 'n-cert' (Nova cert) service - which is > only for EC2, and is slated to be removed upstream.] > > Since you care about Cinder too, so you can just add Cinder specific services in > the ENABLED_SERVICES along with the above. > > That's the localrc conf file I use: > > https://kashyapc.fedorapeople.org/virt/openstack/2- > minimal_devstack_localrc.conf > > Added benefit with the above config for me is also smaller footprint inside > DevStack VM (with a single Nova instance running, I have about > 1.3 GB of mem usage) > > https://kashyapc.fedorapeople.org/virt/openstack/heuristics/Memory- > profiling-inside-DevStack.txt > > You can compare what you see in your env by running the same $ ps_mem (as > root). 
To install: $ yum install ps_mem > > > I've tried disabling it (by adding 'disable_service heat h-api h-api-cfn h-api-cw > h-eng' to my localrc) and then things broke even worse. > > Re-trying... > > > > Here's my local.conf: > > [[local|localrc]] > > ADMIN_PASSWORD=123456 > > DATABASE_PASSWORD=$ADMIN_PASSWORD > > RABBIT_PASSWORD=$ADMIN_PASSWORD > > SERVICE_PASSWORD=$ADMIN_PASSWORD > > SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d54 > > #FIXED_RANGE=172.31.1.0/24 > > #FLOATING_RANGE=192.168.20.0/25 > > HOST_IP=10.103.233.161 > > CINDER_ENABLED_BACKENDS=xio_gold:xtremio_1 > > > > [[post-config|$CINDER_CONF]] > > [DEFAULT] > > rpc_response_timeout=600 > > service_down_time=600 > > volume_name_template = CI-%s > > enabled_backends=xtremio_1 > > default_volume_type=xtremio_1 > > [xtremio_1] > > volume_driver=cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver > > san_ip=vxms-xbrickdrm168 > > san_login=admin > > san_password=admin > > volume_backend_name = xtremio_1 > > > > > > Y. > > -- > /kashyap From kchamart at redhat.com Thu Feb 26 13:07:04 2015 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 26 Feb 2015 14:07:04 +0100 Subject: [Rdo-list] [openstack-packstack] Initial Kilo release In-Reply-To: <648473255763364B961A02AC3BE1060D03CA01892C@MX19A.corp.emc.com> References: <20150223144524.D362E1149014A@pkgs02.phx2.fedoraproject.org> <648473255763364B961A02AC3BE1060D03CA018605@MX19A.corp.emc.com> <20150224082301.GM30296@tesla.redhat.com> <648473255763364B961A02AC3BE1060D03CA018641@MX19A.corp.emc.com> <20150224091206.GN30296@tesla.redhat.com> <648473255763364B961A02AC3BE1060D03CA01892C@MX19A.corp.emc.com> Message-ID: <20150226130704.GC5996@tesla.redhat.com> On Thu, Feb 26, 2015 at 03:56:19AM -0500, Kaul, Yaniv wrote: [. . .] > 2. 
The /home/public/scripts/devstack.sh script: > > firewall-cmd --add-service http || true > > git config --global url.http://git.openstack.org/.insteadOf git://git.openstack.org/ > cd /opt > git clone https://git.openstack.org/openstack-dev/devstack > /opt/devstack/tools/create-stack-user.sh > chown -R stack:stack /opt/devstack/ > mv /opt/devstack /opt/stack For the above, I normally use $HOME instead of /opt, which has DevStack & git repos. The below snippet in local.conf will take care of it: . . . DEST=$HOME/src/cloud DATA_DIR=$DEST/data SERVICE_DIR=$DEST/status SCREEN_LOGDIR=$DATA_DIR/logs/ LOGFILE=/$DATA_DIR/logs/devstacklog.txt . . . NOTE: The above assumes you've already done this: $ chmod go+rx $HOME -- /kashyap From Jan.van.Eldik at cern.ch Thu Feb 26 14:15:11 2015 From: Jan.van.Eldik at cern.ch (Jan van Eldik) Date: Thu, 26 Feb 2015 15:15:11 +0100 Subject: [Rdo-list] RDO bookmarks - Feedback requested In-Reply-To: <54EDF019.4050108@redhat.com> References: <54E36A33.2020907@redhat.com> <54EDA487.5000704@cern.ch> <54EDF019.4050108@redhat.com> Message-ID: <54EF2A6F.5040100@cern.ch> Hi, >> If people agree that this is a good idea, I would be happy to compile >> the list of commands. > > That would be extremely helpful. Thanks. Voila. A second pair of eyes to review would be good. cheers, Jan
-------------- next part --------------
Creating an image:
  old: glance image-create --name --is-public True --disk-format qcow2 --container-format ovf --file --property os_distro=[fedora|ubuntu|...]
  new: openstack image create --public --disk-format qcow2 --container-format ovf --file --property os_distro=[fedora|ubuntu|...]
-----------------------
Creating a new key pair:
  old: nova keypair-add mykey > mykey.pem
  new: openstack keypair create mykey > mykey.pem
Uploading a pre-existing key:
  old: nova keypair-add --pub-key mykey.pub mykey
  new: openstack keypair create --public-key mykey.pub mykey
List available SSH keys:
  old: nova keypair-list
  new: openstack keypair list
-----------------------
Managing users and tenants
List tenants/users/roles (projects/users/roles):
  old: keystone tenant-list / user-list / role-list
  new: openstack {project,user,role} list
Add new user:
  old: keystone user-create --name --tenant-id --pass
  new: openstack user create --project --password
Grant role to user:
  old: keystone user-role-add --user-id --role-id --tenant-id
  new: openstack role add --user --project
-----------------------
Show available services:
  old: nova-manage service list (XXX to be deleted?)
  new: nova service-list
Show running instances:
  old: nova list
  new: openstack server list
Start a new instance:
  old: nova boot --flavor --image --key-name
  new: openstack server create --flavor --image --key-name
List flavours:
  old: nova flavor-list
  new: openstack flavor list
Migrate an instance to a different host:
  old: nova live-migration
  new: openstack server migrate --live
Reboot instance:
  old: nova reboot
  new: openstack server reboot
Destroy instance:
  old: nova delete
  new: openstack server delete
-----------------------
Managing volumes
Create bootable volume from an image:
  old: cinder create --image-id --display-name
  new: openstack volume create --image --size
Create a snapshot:
  old: cinder snapshot-create
  new: openstack snapshot create --name
List available volumes:
  old: nova volume-list / cinder list
  new: openstack volume list
Booting an instance from a volume:
  old: nova boot --flavor --key-name --block_device_mapping =:[snap]::0
  new: openstack server create --flavor --key-name --image --block-device-mapping =
Creating a new volume:
  old: cinder create --display-name
  new: openstack volume create --size
Attach a volume to an instance:
  old: nova volume-attach
  new: openstack server add volume
From Yaniv.Kaul at emc.com Thu Feb 26 16:22:17 2015 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Thu, 26 
Feb 2015 11:22:17 -0500 Subject: [Rdo-list] What RH Tempest is best to be used against Kilo? Message-ID: <648473255763364B961A02AC3BE1060D03CA018A15@MX19A.corp.emc.com> Is master or Juno more updated? I'm afraid both are not updated to remove the XML tests which are already gone in upstream. Are there any plans to contribute tempest-config upstream? (probably the only reason I still use RH's fork?). Y. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Thu Feb 26 16:34:50 2015 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 26 Feb 2015 11:34:50 -0500 Subject: [Rdo-list] RDO bookmarks - Feedback requested In-Reply-To: <54EF2A6F.5040100@cern.ch> References: <54E36A33.2020907@redhat.com> <54EDA487.5000704@cern.ch> <54EDF019.4050108@redhat.com> <54EF2A6F.5040100@cern.ch> Message-ID: <54EF4B2A.9010808@redhat.com> Thank you! This is awesome. On 02/26/2015 09:15 AM, Jan van Eldik wrote: > Hi, > >>> If people agree taht this is a good idea, I would be happy to compile >>> the list of commands. >> >> That would be extremely helpful. Thanks. > > Voila. A second pair of eyes to review would be good. 
> > cheers, Jan > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From florian at hastexo.com Thu Feb 26 16:46:55 2015 From: florian at hastexo.com (Florian Haas) Date: Thu, 26 Feb 2015 17:46:55 +0100 Subject: [Rdo-list] astapor: using nova_host for vncserver_host is problematic Message-ID: Hi everyone, in https://github.com/redhat-openstack/astapor/blob/b11e159d/puppet/modules/quickstack/manifests/compute_common.pp#L204, the nova_host global variable is used to set vncserver_host on the ::nova::compute class (which then uses it to construct the novncproxy_base_url config option for nova.conf). That's a bit problematic if your nova_host is an IP address on the private management network, because that IP address will then show up as the result of "nova get-vnc-console novnc" and also pop up in Horizon, but will typically not be reachable from the outside. If instead you configure your nova_host for the compute nodes to the *public* IP (or hostname) of your Nova API endpoint, then nova get-vnc-console returns, and Horizon contains, a publicly available URL. However, because https://github.com/redhat-openstack/astapor/blob/b11e159d/puppet/modules/quickstack/manifests/neutron/compute.pp#L108 also uses nova_host, then that means that in a Neutron-based environment, python-neutronclient invoked by nova-compute will now *also* try to contact Nova via the public IP, which may not be reachable from inside the management network (and even if it is, that detour would still seem ugly to me). So my proposal would be to introduce a separate "vncserver_host" parameter that defaults to nova_host, such that nova_host can be set to the private IP and vncserver_host to the public hostname. Does that make sense? 
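[The split Florian describes shows up directly in nova.conf: the URL returned by "nova get-vnc-console" comes from novncproxy_base_url, which must resolve from outside the management network even while internal traffic keeps using the private nova_host address. A minimal illustration follows; the hostname and IP are made up, and this is only a sketch of the idea, not the astapor change itself.]

```shell
# Sketch only: why the advertised VNC URL needs the public name while
# other endpoints can stay on the private management IP. Values are fake.
mgmt_ip=10.0.0.10                 # nova_host on the management network
public_host=cloud.example.com     # what browsers can actually reach

conf=$(mktemp)
cat > "$conf" <<EOF
[DEFAULT]
# Served to end users via "nova get-vnc-console" / Horizon, so it must
# be reachable from outside the management network:
novncproxy_base_url=http://${public_host}:6080/vnc_auto.html
# Internal services keep talking to ${mgmt_ip} and never see this name.
EOF
grep novncproxy_base_url "$conf"
```

A separate vncserver_host parameter defaulting to nova_host, as proposed, would let the two values diverge without routing internal API calls through the public address.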
Cheers, Florian From tkammer at redhat.com Thu Feb 26 17:21:28 2015 From: tkammer at redhat.com (Tal Kammer) Date: Thu, 26 Feb 2015 12:21:28 -0500 (EST) Subject: [Rdo-list] What RH Tempest is best to be used against Kilo? In-Reply-To: <648473255763364B961A02AC3BE1060D03CA018A15@MX19A.corp.emc.com> References: <648473255763364B961A02AC3BE1060D03CA018A15@MX19A.corp.emc.com> Message-ID: <42927177.26612442.1424971288817.JavaMail.zimbra@redhat.com> ----- Original Message ----- > Is master or Juno more updated? > I?m afraid both are not updated to remove the XML tests which are already > gone in upstream. Yaniv, I believe you have an outdated local branch.. tkammer at tkammer redhat-tempest$ git status # On branch master # Untracked files: # (use "git add ..." to include in what will be committed) nothing added to commit but untracked files present (use "git add" to track) tkammer at tkammer redhat-tempest$ git log | grep 'removes the xml' -C6 commit e5e7a50909d3e91e5e98e851ae764fe897eca648 Author: Matthew Treinish Date: Wed Nov 26 11:00:10 2014 -0500 Remove unused xml config options This patch removes the xml configuration options from tempest. Since the xml testing has all been removed from tempest these options no longer do anything, so let's just remove them from config. Change-Id: I5b3e221d942e09134024b82acaf179dc869357e0 If you prefer the GUI version (for example), you can see this also on github that the XML is not part of the master branch anymore: https://github.com/redhat-openstack/tempest/tree/master/tempest/services/identity/v3 https://github.com/redhat-openstack/tempest/tree/juno/tempest/services/identity/v3 > Are there any plans to contribute tempest-config upstream? (probably the only > reason I still use RH?s fork?). > Y. Yes, David Kranz initiated a push of the tool upstream. In any case, we have implemented an automation system around the sync with upstream branch. 
As of about two weeks ago, our master branch will always be in sync with upstream master (maybe a couple of days behind as we still in the process of deciding if the update should happen daily/weekly/etc). > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > To unsubscribe: rdo-list-unsubscribe at redhat.com -- -- Tal Kammer Automation and infra Team Lead, Openstack platform. Red Hat Israel From tkammer at redhat.com Thu Feb 26 17:24:25 2015 From: tkammer at redhat.com (Tal Kammer) Date: Thu, 26 Feb 2015 12:24:25 -0500 (EST) Subject: [Rdo-list] What RH Tempest is best to be used against Kilo? In-Reply-To: <42927177.26612442.1424971288817.JavaMail.zimbra@redhat.com> References: <648473255763364B961A02AC3BE1060D03CA018A15@MX19A.corp.emc.com> <42927177.26612442.1424971288817.JavaMail.zimbra@redhat.com> Message-ID: <1236802628.26616045.1424971465358.JavaMail.zimbra@redhat.com> ----- Original Message ----- > ----- Original Message ----- > > > Is master or Juno more updated? > > I?m afraid both are not updated to remove the XML tests which are already > > gone in upstream. > > Yaniv, I believe you have an outdated local branch.. > > tkammer at tkammer redhat-tempest$ git status > # On branch master > # Untracked files: > # (use "git add ..." to include in what will be committed) > nothing added to commit but untracked files present (use "git add" to track) > > tkammer at tkammer redhat-tempest$ git log | grep 'removes the xml' -C6 > commit e5e7a50909d3e91e5e98e851ae764fe897eca648 > Author: Matthew Treinish > Date: Wed Nov 26 11:00:10 2014 -0500 > > Remove unused xml config options > > This patch removes the xml configuration options from tempest. Since > the xml testing has all been removed from tempest these options no > longer do anything, so let's just remove them from config. 
> > Change-Id: I5b3e221d942e09134024b82acaf179dc869357e0 Sorry, just noticed I've quoted the wrong commit :) tkammer at tkammer redhat-tempest$ git log | grep 'remove xml_utils and all things that depend on it' -C6 commit e4119b664dca51f0d055553fcf540921b90186ae Merge: d354961 fc07254 Author: Jenkins Date: Wed Nov 26 17:23:17 2014 +0000 Merge "remove xml_utils and all things that depend on it" commit d354961d1c427f0690a7571998ac4121449da280 Merge: 2d01ff3 f3c7591 Author: Jenkins Date: Wed Nov 26 17:23:06 2014 +0000 -- Merge "Unified interface for ScenarioTest and NetworkScenarioTest" commit fc072542073b3e3611854aad41364c08b03c5e83 Author: Sean Dague Date: Mon Nov 24 11:50:25 2014 -0500 remove xml_utils and all things that depend on it This rips out xml_utils, and all the things that depend on it, which takes out a huge amount of the xml infrastructure in the process. Change-Id: I9d40f3065e007a531985da1ed56ef4f2e245912e > > If you prefer the GUI version (for example), you can see this also on github > that the XML is not part of the master branch anymore: > https://github.com/redhat-openstack/tempest/tree/master/tempest/services/identity/v3 > https://github.com/redhat-openstack/tempest/tree/juno/tempest/services/identity/v3 > > > Are there any plans to contribute tempest-config upstream? (probably the > > only > > reason I still use RH?s fork?). > > Y. > > Yes, David Kranz initiated a push of the tool upstream. > In any case, we have implemented an automation system around the sync with > upstream branch. > As of about two weeks ago, our master branch will always be in sync with > upstream master (maybe a couple of days behind as we still in the process of > deciding if the update should happen daily/weekly/etc). 
> > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- > > -- > Tal Kammer > Automation and infra Team Lead, Openstack platform. > Red Hat Israel > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Tal Kammer Automation and infra Team Lead, Openstack platform. Red Hat Israel From rbowen at redhat.com Thu Feb 26 17:41:46 2015 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 26 Feb 2015 12:41:46 -0500 Subject: [Rdo-list] RDO bookmarks - Feedback requested In-Reply-To: <54E36A33.2020907@redhat.com> References: <54E36A33.2020907@redhat.com> Message-ID: <54EF5ADA.4080501@redhat.com> On 02/17/2015 11:20 AM, Rich Bowen wrote: > Some of you have seen the RDO bookmarks that I hand out at various > events. They were designed 2 years ago by Dave Neary, and have been > updated a few times since then. It's time for another refresh to reflect > the changes in the OpenStack world. > > I'm also planning to remove the bit of the bookmark that lists project > names and definitions, since that was already somewhat out of date, and > is even more so given Thierry's blog post from yesterday. > > Anyways, I've put the latest version of the source file for the bookmark > at https://openstack.redhat.com/images/bookmark/rdo_bookmark.odt and > would appreciate any feedback from any of you who have opinions > regarding what should/should not be on there, what changes we need to > make, and how we can generally make it more useful to users of RDO and > OpenStack in general. 
Based on comments received on list and offlist, I've updated https://openstack.redhat.com/images/bookmark/rdo_bookmark.odt and would appreciate a few more sets of eyes on it before we send it off to the printer. Thanks! --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From Yaniv.Kaul at emc.com Thu Feb 26 18:07:26 2015 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Thu, 26 Feb 2015 13:07:26 -0500 Subject: [Rdo-list] What RH Tempest is best to be used against Kilo? In-Reply-To: <1236802628.26616045.1424971465358.JavaMail.zimbra@redhat.com> References: <648473255763364B961A02AC3BE1060D03CA018A15@MX19A.corp.emc.com> <42927177.26612442.1424971288817.JavaMail.zimbra@redhat.com> <1236802628.26616045.1424971465358.JavaMail.zimbra@redhat.com> Message-ID: <648473255763364B961A02AC3BE1060D03CA018A2F@MX19A.corp.emc.com> > -----Original Message----- > From: Tal Kammer [mailto:tkammer at redhat.com] > Sent: Thursday, February 26, 2015 7:24 PM > To: Kaul, Yaniv > Cc: rdo-list at redhat.com; Yaniv Eylon > Subject: Re: [Rdo-list] What RH Tempest is best to be used against Kilo? > > > > ----- Original Message ----- > > ----- Original Message ----- > > > > > Is master or Juno more updated? > > > I'm afraid both are not updated to remove the XML tests which are > > > already gone in upstream. > > > > Yaniv, I believe you have an outdated local branch.. Every test pulls from github, I don't maintain them here. https://github.com/redhat-openstack/tempest/tree/juno/tempest/services/volume/xml certainly has XML - and that's Juno branch. So I assume the answer is 'master'. OK - will try and use that. I already see failures there - that worked in the Juno branch... :( Thanks, Y. > > > > tkammer at tkammer redhat-tempest$ git status # On branch master # > > Untracked files: > > # (use "git add ..." 
to include in what will be committed) > > nothing added to commit but untracked files present (use "git add" to > > track) > > > > tkammer at tkammer redhat-tempest$ git log | grep 'removes the xml' -C6 > > commit e5e7a50909d3e91e5e98e851ae764fe897eca648 > > Author: Matthew Treinish > > Date: Wed Nov 26 11:00:10 2014 -0500 > > > > Remove unused xml config options > > > > This patch removes the xml configuration options from tempest. Since > > the xml testing has all been removed from tempest these options no > > longer do anything, so let's just remove them from config. > > > > Change-Id: I5b3e221d942e09134024b82acaf179dc869357e0 > > Sorry, just noticed I've quoted the wrong commit :) tkammer at tkammer > redhat-tempest$ git log | grep 'remove xml_utils and all things that depend on > it' -C6 > > commit e4119b664dca51f0d055553fcf540921b90186ae > Merge: d354961 fc07254 > Author: Jenkins > Date: Wed Nov 26 17:23:17 2014 +0000 > > Merge "remove xml_utils and all things that depend on it" > > commit d354961d1c427f0690a7571998ac4121449da280 > Merge: 2d01ff3 f3c7591 > Author: Jenkins > Date: Wed Nov 26 17:23:06 2014 +0000 > > -- > Merge "Unified interface for ScenarioTest and NetworkScenarioTest" > > commit fc072542073b3e3611854aad41364c08b03c5e83 > Author: Sean Dague > Date: Mon Nov 24 11:50:25 2014 -0500 > > remove xml_utils and all things that depend on it > > This rips out xml_utils, and all the things that depend on it, which takes > out a huge amount of the xml infrastructure in the process. > > Change-Id: I9d40f3065e007a531985da1ed56ef4f2e245912e > > > > > > If you prefer the GUI version (for example), you can see this also on > > github that the XML is not part of the master branch anymore: > > https://github.com/redhat-openstack/tempest/tree/master/tempest/servic > > es/identity/v3 > > https://github.com/redhat-openstack/tempest/tree/juno/tempest/services > > /identity/v3 > > > > > Are there any plans to contribute tempest-config upstream? 
(probably > > > the only reason I still use RH's fork?). > > > Y. > > > > Yes, David Kranz initiated a push of the tool upstream. > > In any case, we have implemented an automation system around the sync > > with upstream branch. > > As of about two weeks ago, our master branch will always be in sync > > with upstream master (maybe a couple of days behind as we are still in the > > process of deciding if the update should happen daily/weekly/etc). > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > -- > > > > -- > > Tal Kammer > > Automation and infra Team Lead, Openstack platform. > > Red Hat Israel > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > -- > Tal Kammer > Automation and infra Team Lead, Openstack platform. > Red Hat Israel From dkranz at redhat.com Thu Feb 26 18:39:03 2015 From: dkranz at redhat.com (David Kranz) Date: Thu, 26 Feb 2015 13:39:03 -0500 Subject: [Rdo-list] What RH Tempest is best to be used against Kilo? In-Reply-To: <648473255763364B961A02AC3BE1060D03CA018A2F@MX19A.corp.emc.com> References: <648473255763364B961A02AC3BE1060D03CA018A15@MX19A.corp.emc.com> <42927177.26612442.1424971288817.JavaMail.zimbra@redhat.com> <1236802628.26616045.1424971465358.JavaMail.zimbra@redhat.com> <648473255763364B961A02AC3BE1060D03CA018A2F@MX19A.corp.emc.com> Message-ID: <54EF6847.3000208@redhat.com> On 02/26/2015 01:07 PM, Kaul, Yaniv wrote: >> -----Original Message----- >> From: Tal Kammer [mailto:tkammer at redhat.com] >> Sent: Thursday, February 26, 2015 7:24 PM >> To: Kaul, Yaniv >> Cc: rdo-list at redhat.com; Yaniv Eylon >> Subject: Re: [Rdo-list] What RH Tempest is best to be used against Kilo? 
>> >> >> >> ----- Original Message ----- >>> ----- Original Message ----- >>> >>>> Is master or Juno more updated? >>>> I'm afraid both are not updated to remove the XML tests which are >>>> already gone in upstream. >>> Yaniv, I believe you have an outdated local branch.. > Every test pulls from github, I don't maintain them here. > https://github.com/redhat-openstack/tempest/tree/juno/tempest/services/volume/xml certainly has XML - and that's Juno branch. > So I assume the answer is 'master'. OK - will try and use that. I already see failures there - that worked in the Juno branch... :( > > Thanks, > Y. The master branch has not been tested yet with a wide variety of configurations. That is in process now. Tracebacks from failures would help get things fixed quickly. -David >>> tkammer at tkammer redhat-tempest$ git status # On branch master # >>> Untracked files: >>> # (use "git add ..." to include in what will be committed) >>> nothing added to commit but untracked files present (use "git add" to >>> track) >>> >>> tkammer at tkammer redhat-tempest$ git log | grep 'removes the xml' -C6 >>> commit e5e7a50909d3e91e5e98e851ae764fe897eca648 >>> Author: Matthew Treinish >>> Date: Wed Nov 26 11:00:10 2014 -0500 >>> >>> Remove unused xml config options >>> >>> This patch removes the xml configuration options from tempest. Since >>> the xml testing has all been removed from tempest these options no >>> longer do anything, so let's just remove them from config. 
>>> >>> Change-Id: I5b3e221d942e09134024b82acaf179dc869357e0 >> Sorry, just noticed I've quoted the wrong commit :) tkammer at tkammer >> redhat-tempest$ git log | grep 'remove xml_utils and all things that depend on >> it' -C6 >> >> commit e4119b664dca51f0d055553fcf540921b90186ae >> Merge: d354961 fc07254 >> Author: Jenkins >> Date: Wed Nov 26 17:23:17 2014 +0000 >> >> Merge "remove xml_utils and all things that depend on it" >> >> commit d354961d1c427f0690a7571998ac4121449da280 >> Merge: 2d01ff3 f3c7591 >> Author: Jenkins >> Date: Wed Nov 26 17:23:06 2014 +0000 >> >> -- >> Merge "Unified interface for ScenarioTest and NetworkScenarioTest" >> >> commit fc072542073b3e3611854aad41364c08b03c5e83 >> Author: Sean Dague >> Date: Mon Nov 24 11:50:25 2014 -0500 >> >> remove xml_utils and all things that depend on it >> >> This rips out xml_utils, and all the things that depend on it, which takes >> out a huge amount of the xml infrastructure in the process. >> >> Change-Id: I9d40f3065e007a531985da1ed56ef4f2e245912e >> >> >>> If you prefer the GUI version (for example), you can see this also on >>> github that the XML is not part of the master branch anymore: >>> https://github.com/redhat-openstack/tempest/tree/master/tempest/servic >>> es/identity/v3 >>> https://github.com/redhat-openstack/tempest/tree/juno/tempest/services >>> /identity/v3 >>> >>>> Are there any plans to contribute tempest-config upstream? (probably >>>> the only reason I still use RH?s fork?). >>>> Y. >>> Yes, David Kranz initiated a push of the tool upstream. >>> In any case, we have implemented an automation system around the sync >>> with upstream branch. >>> As of about two weeks ago, our master branch will always be in sync >>> with upstream master (maybe a couple of days behind as we still in the >>> process of deciding if the update should happen daily/weekly/etc). 
>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> -- >>> >>> -- >>> Tal Kammer >>> Automation and infra Team Lead, Openstack platform. >>> Red Hat Israel >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >> -- >> Tal Kammer >> Automation and infra Team Lead, Openstack platform. >> Red Hat Israel > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From Yaniv.Kaul at emc.com Thu Feb 26 19:17:35 2015 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Thu, 26 Feb 2015 14:17:35 -0500 Subject: [Rdo-list] What RH Tempest is best to be used against Kilo? In-Reply-To: <648473255763364B961A02AC3BE1060D03CA018A2F@MX19A.corp.emc.com> References: <648473255763364B961A02AC3BE1060D03CA018A15@MX19A.corp.emc.com> <42927177.26612442.1424971288817.JavaMail.zimbra@redhat.com> <1236802628.26616045.1424971465358.JavaMail.zimbra@redhat.com> <648473255763364B961A02AC3BE1060D03CA018A2F@MX19A.corp.emc.com> Message-ID: <648473255763364B961A02AC3BE1060D03CA018A35@MX19A.corp.emc.com> > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On > Behalf Of Kaul, Yaniv > Sent: Thursday, February 26, 2015 8:07 PM > To: Tal Kammer > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] What RH Tempest is best to be used against Kilo? 
> > > -----Original Message----- > > From: Tal Kammer [mailto:tkammer at redhat.com] > > Sent: Thursday, February 26, 2015 7:24 PM > > To: Kaul, Yaniv > > Cc: rdo-list at redhat.com; Yaniv Eylon > > Subject: Re: [Rdo-list] What RH Tempest is best to be used against Kilo? > > > > > > > > ----- Original Message ----- > > > ----- Original Message ----- > > > > > > > Is master or Juno more updated? > > > > I'm afraid both are not updated to remove the XML tests which are > > > > already gone in upstream. > > > > > > Yaniv, I believe you have an outdated local branch.. > > Every test pulls from github, I don't maintain them here. > https://github.com/redhat- > openstack/tempest/tree/juno/tempest/services/volume/xml certainly has XML > - and that's Juno branch. > So I assume the answer is 'master'. OK - will try and use that. I already see > failures there - that worked in the Juno branch... :( > > Thanks, > Y. Changes (intended?) - config_tempest doesn't create a [volume] section, as it used to. That means my 'sed' (instead of openstack-config) was not working, moved to 'echo'. - 'Backups' become 'backups' in volume-feature-enabled in etc/tempest.conf. I'm removing it, as I don't test backup in Cinder. - I can't seem to NOT run the backup test (tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV1Test.test_volume_backup_create_get_detailed_list_restore_delete[gate,smoke]) although: [root at lg528 ~]# grep -ci backup tempest/etc/tempest.conf 0 The rest is fine. Y. > > > > > > > tkammer at tkammer redhat-tempest$ git status # On branch master # > > > Untracked files: > > > # (use "git add ..." 
to include in what will be committed) > > > nothing added to commit but untracked files present (use "git add" > > > to > > > track) > > > > > > tkammer at tkammer redhat-tempest$ git log | grep 'removes the xml' -C6 > > > commit e5e7a50909d3e91e5e98e851ae764fe897eca648 > > > Author: Matthew Treinish > > > Date: Wed Nov 26 11:00:10 2014 -0500 > > > > > > Remove unused xml config options > > > > > > This patch removes the xml configuration options from tempest. Since > > > the xml testing has all been removed from tempest these options no > > > longer do anything, so let's just remove them from config. > > > > > > Change-Id: I5b3e221d942e09134024b82acaf179dc869357e0 > > > > Sorry, just noticed I've quoted the wrong commit :) tkammer at tkammer > > redhat-tempest$ git log | grep 'remove xml_utils and all things that > > depend on it' -C6 > > > > commit e4119b664dca51f0d055553fcf540921b90186ae > > Merge: d354961 fc07254 > > Author: Jenkins > > Date: Wed Nov 26 17:23:17 2014 +0000 > > > > Merge "remove xml_utils and all things that depend on it" > > > > commit d354961d1c427f0690a7571998ac4121449da280 > > Merge: 2d01ff3 f3c7591 > > Author: Jenkins > > Date: Wed Nov 26 17:23:06 2014 +0000 > > > > -- > > Merge "Unified interface for ScenarioTest and NetworkScenarioTest" > > > > commit fc072542073b3e3611854aad41364c08b03c5e83 > > Author: Sean Dague > > Date: Mon Nov 24 11:50:25 2014 -0500 > > > > remove xml_utils and all things that depend on it > > > > This rips out xml_utils, and all the things that depend on it, which takes > > out a huge amount of the xml infrastructure in the process. 
> > > > Change-Id: I9d40f3065e007a531985da1ed56ef4f2e245912e > > > > > > > > > > If you prefer the GUI version (for example), you can see this also > > > on github that the XML is not part of the master branch anymore: > > > https://github.com/redhat-openstack/tempest/tree/master/tempest/serv > > > ic > > > es/identity/v3 > > > https://github.com/redhat-openstack/tempest/tree/juno/tempest/servic > > > es > > > /identity/v3 > > > > > > > Are there any plans to contribute tempest-config upstream? > > > > (probably the only reason I still use RH's fork?). > > > > Y. > > > > > > Yes, David Kranz initiated a push of the tool upstream. > > > In any case, we have implemented an automation system around the > > > sync with upstream branch. > > > As of about two weeks ago, our master branch will always be in sync > > > with upstream master (maybe a couple of days behind as we are still in > > > the process of deciding if the update should happen daily/weekly/etc). > > > > > > > _______________________________________________ > > > > Rdo-list mailing list > > > > Rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -- > > > > > > -- > > > Tal Kammer > > > Automation and infra Team Lead, Openstack platform. > > > Red Hat Israel > > > > > > _______________________________________________ > > > Rdo-list mailing list > > > Rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > -- > > Tal Kammer > > Automation and infra Team Lead, Openstack platform. 
> > Red Hat Israel > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From dkranz at redhat.com Thu Feb 26 19:58:29 2015 From: dkranz at redhat.com (David Kranz) Date: Thu, 26 Feb 2015 14:58:29 -0500 Subject: [Rdo-list] What RH Tempest is best to be used against Kilo? In-Reply-To: <648473255763364B961A02AC3BE1060D03CA018A35@MX19A.corp.emc.com> References: <648473255763364B961A02AC3BE1060D03CA018A15@MX19A.corp.emc.com> <42927177.26612442.1424971288817.JavaMail.zimbra@redhat.com> <1236802628.26616045.1424971465358.JavaMail.zimbra@redhat.com> <648473255763364B961A02AC3BE1060D03CA018A2F@MX19A.corp.emc.com> <648473255763364B961A02AC3BE1060D03CA018A35@MX19A.corp.emc.com> Message-ID: <54EF7AE5.1090400@redhat.com> On 02/26/2015 02:17 PM, Kaul, Yaniv wrote: >> -----Original Message----- >> From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On >> Behalf Of Kaul, Yaniv >> Sent: Thursday, February 26, 2015 8:07 PM >> To: Tal Kammer >> Cc: rdo-list at redhat.com >> Subject: Re: [Rdo-list] What RH Tempest is best to be used against Kilo? >> >>> -----Original Message----- >>> From: Tal Kammer [mailto:tkammer at redhat.com] >>> Sent: Thursday, February 26, 2015 7:24 PM >>> To: Kaul, Yaniv >>> Cc: rdo-list at redhat.com; Yaniv Eylon >>> Subject: Re: [Rdo-list] What RH Tempest is best to be used against Kilo? >>> >>> >>> >>> ----- Original Message ----- >>>> ----- Original Message ----- >>>> >>>>> Is master or Juno more updated? >>>>> I'm afraid both are not updated to remove the XML tests which are >>>>> already gone in upstream. >>>> Yaniv, I believe you have an outdated local branch.. >> Every test pulls from github, I don't maintain them here. >> https://github.com/redhat- >> openstack/tempest/tree/juno/tempest/services/volume/xml certainly has XML >> - and that's Juno branch. 
>> So I assume the answer is 'master'. OK - will try and use that. I already see >> failures there - that worked in the Juno branch... :( >> >> Thanks, >> Y. > Changes (intended?) > - config_tempest doesn't create a [volume] section, as it used to. That means my 'sed' (instead of openstack-config) was not working, moved to 'echo'. Yes, that was intended. config_tempest only creates entries when there is supposed to be a difference with the defaults on master, and this is no longer the case for volume. > - 'Backups' become 'backups' in volume-feature-enabled in etc/tempest.conf. I'm removing it, as I don't test backup in Cinder. This is due to a fix that came in from upstream master in whether the 'name' or 'alias' is used to label an extension so this is also expected. https://github.com/openstack/tempest/commit/54176ce7b1469158e43a60cda9c5382001cbd40f > - I can't seem to NOT run the backup test (tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV1Test.test_volume_backup_create_get_detailed_list_restore_delete[gate,smoke]) although: > > [root at lg528 ~]# grep -ci backup tempest/etc/tempest.conf > 0 > > The rest is fine. Great, glad to hear. It should be clear that the contents of tempest.conf changes frequently and any code that is doing processing on the contents of that file is subject to breaking at various times. This is true whether using the red hat version or pure upstream. -David > Y. > >>>> tkammer at tkammer redhat-tempest$ git status # On branch master # >>>> Untracked files: >>>> # (use "git add ..." 
to include in what will be committed) >>>> nothing added to commit but untracked files present (use "git add" >>>> to >>>> track) >>>> >>>> tkammer at tkammer redhat-tempest$ git log | grep 'removes the xml' -C6 >>>> commit e5e7a50909d3e91e5e98e851ae764fe897eca648 >>>> Author: Matthew Treinish >>>> Date: Wed Nov 26 11:00:10 2014 -0500 >>>> >>>> Remove unused xml config options >>>> >>>> This patch removes the xml configuration options from tempest. Since >>>> the xml testing has all been removed from tempest these options no >>>> longer do anything, so let's just remove them from config. >>>> >>>> Change-Id: I5b3e221d942e09134024b82acaf179dc869357e0 >>> Sorry, just noticed I've quoted the wrong commit :) tkammer at tkammer >>> redhat-tempest$ git log | grep 'remove xml_utils and all things that >>> depend on it' -C6 >>> >>> commit e4119b664dca51f0d055553fcf540921b90186ae >>> Merge: d354961 fc07254 >>> Author: Jenkins >>> Date: Wed Nov 26 17:23:17 2014 +0000 >>> >>> Merge "remove xml_utils and all things that depend on it" >>> >>> commit d354961d1c427f0690a7571998ac4121449da280 >>> Merge: 2d01ff3 f3c7591 >>> Author: Jenkins >>> Date: Wed Nov 26 17:23:06 2014 +0000 >>> >>> -- >>> Merge "Unified interface for ScenarioTest and NetworkScenarioTest" >>> >>> commit fc072542073b3e3611854aad41364c08b03c5e83 >>> Author: Sean Dague >>> Date: Mon Nov 24 11:50:25 2014 -0500 >>> >>> remove xml_utils and all things that depend on it >>> >>> This rips out xml_utils, and all the things that depend on it, which takes >>> out a huge amount of the xml infrastructure in the process. 
>>> >>> Change-Id: I9d40f3065e007a531985da1ed56ef4f2e245912e >>> >>> >>>> If you prefer the GUI version (for example), you can see this also >>>> on github that the XML is not part of the master branch anymore: >>>> https://github.com/redhat-openstack/tempest/tree/master/tempest/serv >>>> ic >>>> es/identity/v3 >>>> https://github.com/redhat-openstack/tempest/tree/juno/tempest/servic >>>> es >>>> /identity/v3 >>>> >>>>> Are there any plans to contribute tempest-config upstream? >>>>> (probably the only reason I still use RH's fork?). >>>>> Y. >>>> Yes, David Kranz initiated a push of the tool upstream. >>>> In any case, we have implemented an automation system around the >>>> sync with upstream branch. >>>> As of about two weeks ago, our master branch will always be in sync >>>> with upstream master (maybe a couple of days behind as we are still in >>>> the process of deciding if the update should happen daily/weekly/etc). >>>> >>>>> _______________________________________________ >>>>> Rdo-list mailing list >>>>> Rdo-list at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> -- >>>> >>>> -- >>>> Tal Kammer >>>> Automation and infra Team Lead, Openstack platform. >>>> Red Hat Israel >>>> >>>> _______________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>> -- >>> Tal Kammer >>> Automation and infra Team Lead, Openstack platform. 
>>> Red Hat Israel >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From christian at berendt.io Fri Feb 27 10:44:40 2015 From: christian at berendt.io (Christian Berendt) Date: Fri, 27 Feb 2015 11:44:40 +0100 Subject: [Rdo-list] RDO bookmarks - Feedback requested In-Reply-To: <54EF5ADA.4080501@redhat.com> References: <54E36A33.2020907@redhat.com> <54EF5ADA.4080501@redhat.com> Message-ID: <54F04A98.60706@berendt.io> On 02/26/2015 06:41 PM, Rich Bowen wrote: > Based on comments received on list and offlist, I've updated > https://openstack.redhat.com/images/bookmark/rdo_bookmark.odt and would > appreciate a few more sets of eyes on it before we send it off to the > printer. Show available services: nova service-list I still think this should be removed. Why is this an important command to manage instances? Creating a new volume: cinder create --display-name Should be converted to python-openstackclient, too. The alignment of the parameters should be unified. Examples for different alignments of parameters: openstack volume create \ --image --size openstack server migrate \ --live Maybe planet.openstack.org is another useful URL that should be listed. HTH, Christian. From Jan.van.Eldik at cern.ch Fri Feb 27 11:59:37 2015 From: Jan.van.Eldik at cern.ch (Jan van Eldik) Date: Fri, 27 Feb 2015 12:59:37 +0100 Subject: [Rdo-list] RDO bookmarks - Feedback requested In-Reply-To: <54F04A98.60706@berendt.io> References: <54E36A33.2020907@redhat.com> <54EF5ADA.4080501@redhat.com> <54F04A98.60706@berendt.io> Message-ID: <54F05C29.3030204@cern.ch> Hi, > I still think this should be removed. 
Why is this an important command to > manage instances? I agree with Chris. cheers, Jan From rbowen at redhat.com Fri Feb 27 16:32:55 2015 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 27 Feb 2015 11:32:55 -0500 Subject: [Rdo-list] RDO bookmarks - Feedback requested In-Reply-To: <54F05C29.3030204@cern.ch> References: <54E36A33.2020907@redhat.com> <54EF5ADA.4080501@redhat.com> <54F04A98.60706@berendt.io> <54F05C29.3030204@cern.ch> Message-ID: <54F09C37.3030804@redhat.com> On 02/27/2015 06:59 AM, Jan van Eldik wrote: > Hi, > >> I still think this should be removed. Why is this an important command to >> manage instances? > > I agree with Chris. Thanks again for your helpful feedback. I've got another iteration at the same place - https://openstack.redhat.com/images/bookmark/rdo_bookmark.odt - that addresses these comments. --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/
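On the sed-vs-echo point from the tempest.conf thread above: since tempest.conf is INI-format, one way to sidestep text-matching tools entirely is to parse the file with Python's stdlib configparser and add a missing [volume] section idempotently. A rough sketch, not part of config_tempest itself; the file name, option name, and value below are illustrative assumptions:

```python
import configparser

def ensure_section(path, section, options):
    # Add `section` (and any options missing from it) to an INI file,
    # leaving values that already exist untouched, so re-running is safe.
    cfg = configparser.ConfigParser()
    cfg.read(path)  # a missing file is silently treated as empty
    if not cfg.has_section(section):
        cfg.add_section(section)
    for key, value in options.items():
        if not cfg.has_option(section, key):
            cfg.set(section, key, value)
    with open(path, "w") as f:
        cfg.write(f)

# Guarantee a [volume] section exists in a local tempest.conf copy
# (option name and value are made up for the example):
ensure_section("tempest.conf", "volume", {"build_timeout": "300"})
```

Because this parses the file rather than matching text, it keeps working whether or not config_tempest emits the section, and regardless of option order — which matters given that, as noted in the thread, the contents of tempest.conf change frequently.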