From sgordon at redhat.com Mon Dec 1 15:49:31 2014 From: sgordon at redhat.com (Steve Gordon) Date: Mon, 1 Dec 2014 10:49:31 -0500 (EST) Subject: [Rdo-list] Packstack on Juno In-Reply-To: <9E8EE5E176B2BD49913B2F69B369AD830210BF927F@MX02A.corp.emc.com> References: <9E8EE5E176B2BD49913B2F69B369AD830210BF927F@MX02A.corp.emc.com> Message-ID: <1622791697.24274981.1417448971606.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Brian Afshar" > To: "Ajay Kalambur (akalambu)" , rdo-list at redhat.com > > All packstack support can be found on docs.openstack.org. Hope this copy > provides information you need. > > Regards The docs.openstack.org guides actually focus on manual installation, they do not cover packstack (or other deployment automation tools). -Steve > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On > Behalf Of Ajay Kalambur (akalambu) > Sent: Tuesday, November 18, 2014 1:50 PM > To: rdo-list at redhat.com > Subject: [Rdo-list] Packstack on Juno > > Hi > Does packstack now support Juno if so where are the latest install > instructions? > Ajay > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -- Steve Gordon, RHCE Sr. Technical Product Manager, Red Hat Enterprise Linux OpenStack Platform From rbowen at rcbowen.com Mon Dec 1 19:26:49 2014 From: rbowen at rcbowen.com (Rich Bowen) Date: Mon, 01 Dec 2014 14:26:49 -0500 Subject: [Rdo-list] RDO hangouts: call for speakers Message-ID: <547CC0F9.3030807@rcbowen.com> I've neglected the Hangouts schedule for the last few months, largely due to a heavy travel schedule. I want to try to pick it up again and highlight some of the things that are happening in the Kilo development cycle. If you'd like to talk about your OpenStack work for 30-60 minutes, please let me know so that I can make up a schedule for the coming 2 or 3 months. 
I'd especially like to hear from people who are using RDO in their organizations. I tend to think that what people are doing with OpenStack is generally more interesting than OpenStack itself. -- Rich Bowen - rbowen at rcbowen.com - @rbowen http://apachecon.com/ - @apachecon From dneary at redhat.com Tue Dec 2 15:39:03 2014 From: dneary at redhat.com (Dave Neary) Date: Tue, 02 Dec 2014 10:39:03 -0500 Subject: [Rdo-list] Cheap, quiet hardware for a small RDO installation? Message-ID: <547DDD17.8060302@redhat.com> Hi, I'm looking for ideas of mini PCs I can use for a small RDO cloud - looking at maybe 3 shuttle PCs, each with "enough" disk (maybe 800GB total storage) and RAM (8GB per PC enough?). Also, I'm wondering if 2 NICs is reasonable to ask for. My desired price point is *low* - all 3 for under $1000 would be ideal, failing that, as close to it as possible. Anyone have recommendations for hardware that would serve this purpose? Thanks, Dave. -- Dave Neary - NFV/SDN Community Strategy Open Source and Standards, Red Hat - http://community.redhat.com Ph: +1-978-399-2182 / Cell: +1-978-799-3338 From kfiresmith at gmail.com Tue Dec 2 15:44:07 2014 From: kfiresmith at gmail.com (Kodiak Firesmith) Date: Tue, 2 Dec 2014 10:44:07 -0500 Subject: [Rdo-list] Cheap, quiet hardware for a small RDO installation? In-Reply-To: <547DDD17.8060302@redhat.com> References: <547DDD17.8060302@redhat.com> Message-ID: I recently did this using micro-ATX to save money over the shuttle / ITX form factor. I built around the cheapest low-wattage AMD quad-core APU I could find on Newegg with 8GB RAM on each box and slow 1TB spinning platters. I think I got down to about $340 / each for all components including PCI-e 2x1GB NICs I got second hand from a parts liquidator. If you are interested in specifics I can try to compile them after work. 
- Kodiak On Tue, Dec 2, 2014 at 10:39 AM, Dave Neary wrote: > Hi, > > I'm looking for ideas of mini PCs I can use for a small RDO cloud - > looking at maybe 3 shuttle PCs, each with "enough" disk (maybe 800GB > total storage) and RAM (8GB per PC enough?). Also, I'm wondering if 2 > NICs is reasonable to ask for. My desired price point is *low* - all 3 > for under $1000 would be ideal, failing that, as close to it as possible. > > Anyone have recommendations for hardware that would serve this process? > > Thanks, > Dave. > > -- > Dave Neary - NFV/SDN Community Strategy > Open Source and Standards, Red Hat - http://community.redhat.com > Ph: +1-978-399-2182 / Cell: +1-978-799-3338 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dneary at redhat.com Tue Dec 2 15:45:54 2014 From: dneary at redhat.com (Dave Neary) Date: Tue, 02 Dec 2014 10:45:54 -0500 Subject: [Rdo-list] Cheap, quiet hardware for a small RDO installation? In-Reply-To: References: <547DDD17.8060302@redhat.com> Message-ID: <547DDEB2.7030509@redhat.com> Hi Kodiak, That sounds awesome! I would appreciate that, thank you. Regards, Dave. On 12/02/2014 10:44 AM, Kodiak Firesmith wrote: > I recently did this using micro-ATX to save money over they shuttle / > ITX form factor. I built around the cheapest low-wattage AMD quad-core > APU I could find on Newegg with 8GB RAM on each box and slow 1tb > spinning platters. I think I got down to about $340 / each for all > components including PCI-e 2x1GB NICs I got second hand from a parts > liquidator. If you are interested in specifics I can try to compile > them after work. 
> > - Kodiak > > On Tue, Dec 2, 2014 at 10:39 AM, Dave Neary > wrote: > > Hi, > > I'm looking for ideas of mini PCs I can use for a small RDO cloud - > looking at maybe 3 shuttle PCs, each with "enough" disk (maybe 800GB > total storage) and RAM (8GB per PC enough?). Also, I'm wondering if 2 > NICs is reasonable to ask for. My desired price point is *low* - all 3 > for under $1000 would be ideal, failing that, as close to it as > possible. > > Anyone have recommendations for hardware that would serve this process? > > Thanks, > Dave. > > -- > Dave Neary - NFV/SDN Community Strategy > Open Source and Standards, Red Hat - http://community.redhat.com > Ph: +1-978-399-2182 / Cell: +1-978-799-3338 > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -- Dave Neary - NFV/SDN Community Strategy Open Source and Standards, Red Hat - http://community.redhat.com Ph: +1-978-399-2182 / Cell: +1-978-799-3338 From madko77 at gmail.com Tue Dec 2 16:16:55 2014 From: madko77 at gmail.com (Madko) Date: Tue, 02 Dec 2014 16:16:55 +0000 Subject: [Rdo-list] Cheap, quiet hardware for a small RDO installation? References: <547DDD17.8060302@redhat.com> <547DDEB2.7030509@redhat.com> Message-ID: Take a look at some Proliant N54L or Proliant Gen8. Le Tue Dec 02 2014 at 16:47:18, Dave Neary a écrit : > Hi Kodiak, > > That sounds awesome! > > I would appreciate that, thank you. > > Regards, > Dave. > > On 12/02/2014 10:44 AM, Kodiak Firesmith wrote: > > I recently did this using micro-ATX to save money over they shuttle / > > ITX form factor. I built around the cheapest low-wattage AMD quad-core > > APU I could find on Newegg with 8GB RAM on each box and slow 1tb > > spinning platters. I think I got down to about $340 / each for all > > components including PCI-e 2x1GB NICs I got second hand from a parts > > liquidator. 
If you are interested in specifics I can try to compile > > them after work. > > > > - Kodiak > > > > On Tue, Dec 2, 2014 at 10:39 AM, Dave Neary > > wrote: > > > > Hi, > > > > I'm looking for ideas of mini PCs I can use for a small RDO cloud - > > looking at maybe 3 shuttle PCs, each with "enough" disk (maybe 800GB > > total storage) and RAM (8GB per PC enough?). Also, I'm wondering if 2 > > NICs is reasonable to ask for. My desired price point is *low* - all > 3 > > for under $1000 would be ideal, failing that, as close to it as > > possible. > > > > Anyone have recommendations for hardware that would serve this > process? > > > > Thanks, > > Dave. > > > > -- > > Dave Neary - NFV/SDN Community Strategy > > Open Source and Standards, Red Hat - http://community.redhat.com > > Ph: +1-978-399-2182 / Cell: +1-978-799-3338 > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > -- > Dave Neary - NFV/SDN Community Strategy > Open Source and Standards, Red Hat - http://community.redhat.com > Ph: +1-978-399-2182 / Cell: +1-978-799-3338 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From James.Radtke at siriusxm.com Tue Dec 2 16:26:38 2014 From: James.Radtke at siriusxm.com (Radtke, James) Date: Tue, 2 Dec 2014 16:26:38 +0000 Subject: [Rdo-list] Cheap, quiet hardware for a small RDO installation? 
In-Reply-To: <547DDEB2.7030509@redhat.com> References: <547DDD17.8060302@redhat.com> , <547DDEB2.7030509@redhat.com> Message-ID: <0D9F522988C72B48AD7045FCC7C2F3FE26E41DEE@PDGLMPEXCMBX01.corp.siriusxm.com> I did the same - and although it does not meet your budgetary requirements, it should give you an idea of what direction to look (even though I cannot recall specifics of my setup ;-) I have a 3 system "lab" (for testing oVirt/RHEV, Satellite, Clustering, OpenStack). SSDs were not necessary, but 7200rpm spindles became noticeably loud (as quiet as the rest of my LAB is). The additional NICs were also not *necessary* but provide a lot of flexibility. I believe you will not need as much storage as you have predicted. * "Controller" Case: (I don't recall) Power Supply: (I don't recall) Mobo: ASRock Z77E-ITX Proc: Intel i5-3570k Memory: 16GB (2 x 8GB) PCIe: quad-NIC Intel I350 HDD: Crucial ATA-M4 SSD 256G * "Compute" Mobo: Intel (I can't recall the model) Proc: Intel i5-3570k Memory: 8GB (2 x 4GB) PCIe: dual-NIC Intel HDD: Crucial ATA-M4 SSD 256G And some nice-to-haves... KVM and 2 desktop switches (I also have a Managed Switch for doing some multicast testing - and that thing makes my 700 sq ft loft sound like a data center :-( ________________________________________ From: rdo-list-bounces at redhat.com [rdo-list-bounces at redhat.com] on behalf of Dave Neary [dneary at redhat.com] Sent: Tuesday, December 02, 2014 10:45 AM To: Kodiak Firesmith Cc: rdo-list Subject: Re: [Rdo-list] Cheap, quiet hardware for a small RDO installation? Hi Kodiak, That sounds awesome! I would appreciate that, thank you. Regards, Dave. On 12/02/2014 10:44 AM, Kodiak Firesmith wrote: > I recently did this using micro-ATX to save money over they shuttle / > ITX form factor. I built around the cheapest low-wattage AMD quad-core > APU I could find on Newegg with 8GB RAM on each box and slow 1tb > spinning platters. 
I think I got down to about $340 / each for all > components including PCI-e 2x1GB NICs I got second hand from a parts > liquidator. If you are interested in specifics I can try to compile > them after work. > > - Kodiak > > On Tue, Dec 2, 2014 at 10:39 AM, Dave Neary > wrote: > > Hi, > > I'm looking for ideas of mini PCs I can use for a small RDO cloud - > looking at maybe 3 shuttle PCs, each with "enough" disk (maybe 800GB > total storage) and RAM (8GB per PC enough?). Also, I'm wondering if 2 > NICs is reasonable to ask for. My desired price point is *low* - all 3 > for under $1000 would be ideal, failing that, as close to it as > possible. > > Anyone have recommendations for hardware that would serve this process? > > Thanks, > Dave. > > -- > Dave Neary - NFV/SDN Community Strategy > Open Source and Standards, Red Hat - http://community.redhat.com > Ph: +1-978-399-2182 / Cell: +1-978-799-3338 > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -- Dave Neary - NFV/SDN Community Strategy Open Source and Standards, Red Hat - http://community.redhat.com Ph: +1-978-399-2182 / Cell: +1-978-799-3338 _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list From shardy at redhat.com Tue Dec 2 16:48:25 2014 From: shardy at redhat.com (Steven Hardy) Date: Tue, 2 Dec 2014 16:48:25 +0000 Subject: [Rdo-list] Cheap, quiet hardware for a small RDO installation? In-Reply-To: <547DDD17.8060302@redhat.com> References: <547DDD17.8060302@redhat.com> Message-ID: <20141202164824.GD28914@t430slt.redhat.com> On Tue, Dec 02, 2014 at 10:39:03AM -0500, Dave Neary wrote: > Hi, > > I'm looking for ideas of mini PCs I can use for a small RDO cloud - > looking at maybe 3 shuttle PCs, each with "enough" disk (maybe 800GB > total storage) and RAM (8GB per PC enough?). 
Also, I'm wondering if 2 > NICs is reasonable to ask for. My desired price point is *low* - all 3 > for under $1000 would be ideal, failing that, as close to it as possible. I've got an HP microserver (which have already been mentioned), and they're good if multiple drive bays is a requirement. Not especially quiet with multiple disks in it though (depends on your definition of quiet). Also the small form-factor desktops (I have a Dell Optiplex 7010) are good, they support up to 16G RAM, works well for a pre-built solution. If I were doing this now though, I'd be tempted to look at the Intel NUC boxes - they can only take one internal disk but are quite cheap and can take up to 16G RAM in some models. Very small too. Personally, I'd view 8G as the bare minimum, so the option to expand to (at least) 16G is probably a good idea, even if you don't fit that much initially. Steve From kchamart at redhat.com Tue Dec 2 16:58:14 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 2 Dec 2014 17:58:14 +0100 Subject: [Rdo-list] Cheap, quiet hardware for a small RDO installation? In-Reply-To: <20141202164824.GD28914@t430slt.redhat.com> References: <547DDD17.8060302@redhat.com> <20141202164824.GD28914@t430slt.redhat.com> Message-ID: <20141202165814.GB4649@tesla.redhat.com> On Tue, Dec 02, 2014 at 04:48:25PM +0000, Steven Hardy wrote: > On Tue, Dec 02, 2014 at 10:39:03AM -0500, Dave Neary wrote: > > Hi, > > > > I'm looking for ideas of mini PCs I can use for a small RDO cloud - > > looking at maybe 3 shuttle PCs, each with "enough" disk (maybe 800GB > > total storage) and RAM (8GB per PC enough?). Also, I'm wondering if 2 > > NICs is reasonable to ask for. My desired price point is *low* - all 3 > > for under $1000 would be ideal, failing that, as close to it as possible. > > I've got an HP microserver (which have already been mentioned), and they're > good if multiple drive bays is a requirement. 
Not especially quiet with > multiple disks in it though (depends on your definition of quiet). > > Also the small form-factor desktops (I have a Dell Optiplex 7010) are good, > they support up to 16G RAM, works well for a pre-built solution. > > If I were doing this now though, I'd be tempted to look at the Intel NUC > boxes - they can only take one internal disk but are quite cheap and can > take up to 16G RAM in some models. Very small too. Exactly, was about to mention this one. /me was considering it: http://www.intel.com/content/www/us/en/nuc/nuc-kit-d54250wyk.html But I'm looking for one that has the newest Intel processor (at least "Haswell"). -- /kashyap From Jose_De_La_Rosa at dell.com Tue Dec 2 17:04:42 2014 From: Jose_De_La_Rosa at dell.com (Jose_De_La_Rosa at dell.com) Date: Tue, 2 Dec 2014 17:04:42 +0000 Subject: [Rdo-list] Cheap, quiet hardware for a small RDO installation? In-Reply-To: <20141202165814.GB4649@tesla.redhat.com> References: <547DDD17.8060302@redhat.com> <20141202164824.GD28914@t430slt.redhat.com> <20141202165814.GB4649@tesla.redhat.com> Message-ID: Intel NUCs are a great option, are very quiet and since they are quite small, can fit almost anywhere. One built-in network controller though. I have an all-in-one Juno install at home with 8GB RAM and an i5 processor. Granted I am quite limited, but you can also opt for one with 16GB memory, an i7 processor and a ~250GB SSD for $500-600 at Amazon.com. Some models can hold 2 SSDs (one SATA and one mSATA) so you can have extra disk capacity if needed. -----Original Message----- From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Kashyap Chamarthy Sent: Tuesday, December 02, 2014 10:58 AM To: Steven Hardy Cc: rdo-list Subject: Re: [Rdo-list] Cheap, quiet hardware for a small RDO installation? 
On Tue, Dec 02, 2014 at 04:48:25PM +0000, Steven Hardy wrote: > On Tue, Dec 02, 2014 at 10:39:03AM -0500, Dave Neary wrote: > > Hi, > > > > I'm looking for ideas of mini PCs I can use for a small RDO cloud - > > looking at maybe 3 shuttle PCs, each with "enough" disk (maybe 800GB > > total storage) and RAM (8GB per PC enough?). Also, I'm wondering if > > 2 NICs is reasonable to ask for. My desired price point is *low* - > > all 3 for under $1000 would be ideal, failing that, as close to it as possible. > > I've got an HP microserver (which have already been mentioned), and > they're good if multiple drive bays is a requirement. Not especially > quiet with multiple disks in it though (depends on your definition of quiet). > > Also the small form-factor desktops (I have a Dell Optiplex 7010) are > good, they support up to 16G RAM, works well for a pre-built solution. > > If I were doing this now though, I'd be tempted to look at the Intel > NUC boxes - they can only take one internal disk but are quite cheap > and can take up to 16G RAM in some models. Very small too. Exactly, was about to mention this one. /me was considering it: http://www.intel.com/content/www/us/en/nuc/nuc-kit-d54250wyk.html But, I'm looking for one that has the newest Intel processor (Atleast "Haswell"). -- /kashyap _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Tue Dec 2 19:06:14 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 02 Dec 2014 14:06:14 -0500 Subject: [Rdo-list] [Rdo-newsletter] RDO Community Newsletter, December 2014 Message-ID: <547E0DA6.70008@redhat.com> December RDO Community Newsletter Thanks again for being part of the RDO community. 
As we head into what is typically a very quiet time of the year for Open Source projects, I want to wish you all a happy holiday season - for those who celebrate Christmas, Hanukkah, Yule, or any other Winter festival. Quick links: * Quick Start - http://openstack.redhat.com/quickstart * Mailing Lists - https://openstack.redhat.com/Mailing_lists * RDO packages - https://repos.fedorapeople.org/repos/openstack/openstack-juno/ * RDO blog - http://rdoproject.org/blog * Q&A - http://ask.openstack.org/ Holiday Slowdown and Stats This is, as I said above, traditionally a slow time of year. If you take a look at our community statistics page at https://openstack.redhat.com/stats/ you can see things start to slow down for the Christmas break, with a large number of people taking time off around the US Thanksgiving holiday, and even more time towards the end of the year. The community stats page is produced by Bitergia, who are the same people who produce the upstream OpenStack quarterly reports at http://bitergia.com/openstack-releases-reports/ but customized to our community, including activity on RDO mailing lists and traffic on http://ask.openstack.org/ Hangouts We are right now in the process of scheduling Hangouts for the coming months. I don't have anything firm to report right now - watch rdo-list for more news soon. If you'd like to talk about what your company is doing with RDO and OpenStack, please let me know (rbowen at redhat.com) so that I can get you on the schedule. Meanwhile, you can watch all of our past hangouts at https://openstack.redhat.com/Hangouts#Past_Hangouts Upcoming Events Your next opportunity to rub elbows with a number of the RDO OpenStack Engineering crowd is at FOSDEM - http://fosdem.org/ - 31 January & 1 February 2015 in Brussels, Belgium. Although the schedule isn't yet posted, you can be sure to hear more about this event in the coming weeks, as the various calls for papers close and the schedules are finalized. 
FOSDEM is the largest Open Source gathering in Europe, and always a highlight of the year when it comes to gathering with colleagues from varied software disciplines. Following FOSDEM, don't miss Config Management Camp in nearby Ghent - http://cfgmgmtcamp.eu/ - where you can learn about configuration management tools like Puppet, Chef, Ansible, Juju, and many others. Blogs The weeks following OpenStack Summit in Paris have been very fruitful in terms of blog posts. You can read some of the posts from members of the RDO community on the RDO blog at http://openstack.redhat.com/blog/ You should also watch the OpenStack Planet for a broader view of what's going on in the OpenStack community at large, at http://planet.openstack.org/ Questions and Answers Probably the best place to get answers about anything related to OpenStack, other than showing up at the OpenStack Summit, is http://ask.openstack.org/, where there's not only a huge number of experts waiting to answer your questions, but also an archive of questions that have already been asked and answered, which you can search to see if your scenario has ever come up before. If you're an expert yourself, please make a New Year's resolution to spend a few minutes every week answering a question or two, and contributing your knowledge back to the larger community. In Closing ... As always, I'll close with an encouragement to take a moment to connect with the RDO community via one of our many forums: * Follow us on Twitter - http://twitter.com/rdocommunity * Google+ - http://goo.gl/BOl85m * Facebook - http://facebook.com/rdocommunity * rdo-list mailing list - http://www.redhat.com/mailman/listinfo/rdo-list * This newsletter - http://www.redhat.com/mailman/listinfo/rdo-newsletter * RDO Q&A - http://ask.openstack.org/ * IRC - #rdo on irc.freenode.net Thanks again for being part of the RDO community! 
-- Rich Bowen, OpenStack Community Liaison rbowen at redhat.com http://openstack.redhat.com _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From augol at redhat.com Wed Dec 3 06:26:46 2014 From: augol at redhat.com (Amit Ugol) Date: Wed, 3 Dec 2014 08:26:46 +0200 Subject: [Rdo-list] Cheap, quiet hardware for a small RDO installation? In-Reply-To: <547DDD17.8060302@redhat.com> References: <547DDD17.8060302@redhat.com> Message-ID: <20141203062645.GA1528@augol-pc.tlv.redhat.com> Hi, I cannot recommend a NUC for this usage for a number of reasons: 1. No storage: it supports internal 2.5" or external drives only, which means that you have to choose between a hotter NUC (sits right on the CPU) or a slower OS. 2. Small fanless form factor means a very hot PC, and the CPU and memory in use are the Low Power models. The CPU in particular costs ~$100 more in comparison with a non-LP CPU of the same speed (also some LP CPU models come with no virtualization support). 3. No expansion slots. It's impossible to later add a 2nd physical NIC. As for the Dell OptiPlex, those are amazing but the models with an i5 and 8GB RAM start at ~$700. For myself, I have recently built an almost inaudible PC with an i5 and 8GB RAM after spending a lot of time on silentpcreview.com; if you want, I can share my build. Swapping the SSD for a 1TB HDD and dropping the GPU, it's about $500. On Tue, Dec 02, 2014 at 10:39:03AM -0500, Dave Neary wrote: > Hi, > > I'm looking for ideas of mini PCs I can use for a small RDO cloud - > looking at maybe 3 shuttle PCs, each with "enough" disk (maybe 800GB > total storage) and RAM (8GB per PC enough?). Also, I'm wondering if 2 > NICs is reasonable to ask for. My desired price point is *low* - all 3 > for under $1000 would be ideal, failing that, as close to it as possible. > > Anyone have recommendations for hardware that would serve this process? 
> > Thanks, > Dave. > > -- > Dave Neary - NFV/SDN Community Strategy > Open Source and Standards, Red Hat - http://community.redhat.com > Ph: +1-978-399-2182 / Cell: +1-978-799-3338 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list ---end quoted text--- -- Best Regards, Amit. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 473 bytes Desc: not available URL: From Saurabh.Talwar at hgst.com Wed Dec 3 09:06:54 2014 From: Saurabh.Talwar at hgst.com (Saurabh Talwar) Date: Wed, 3 Dec 2014 09:06:54 +0000 Subject: [Rdo-list] Does Icehouse work with Centos 6.6? Message-ID: Hi Guys, Does Icehouse work with CentOS 6.6? I ask because I am getting the following errors when I do a packstack install. I have tried MySQL version 5.6.19 as well as 5.6.22 and 5.6.21. I keep on getting the same error. I am able to log into MySQL and the service is up and running when I do the packstack install. Steps: Add RDO repository # yum install -y https://rdo.fedorapeople.org/rdo-release.rpm Install packstack (puppet based) installer # yum install -y openstack-packstack Selectively install OpenStack components on tm04 [root at tm04 /]# packstack --install-hosts=172.16.73.113 --nagios-install=n --os-ceilometer-install=n --os-neutron-install=n --novanetwork-pubif=em3 --novanetwork-privif=em1 --novacompute-privif=em3 --keystone-admin-passwd=0011231 --keystone-demo-passwd=0011231 --ssh-public-key=/root/.ssh/id_rsa.pub ............................. 
Copying Puppet modules and manifests [ DONE ] Applying 172.16.73.113_prescript.pp 172.16.73.113_prescript.pp: [ DONE ] Applying 172.16.73.113_amqp.pp Applying 172.16.73.113_mariadb.pp 172.16.73.113_amqp.pp: [ DONE ] 172.16.73.113_mariadb.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 172.16.73.113_mariadb.pp Error: Invalid parameter root_password on Class[Mysql::Server] at /var/tmp/packstack/b067a8012ca74ba9a825f3dfa948a238/manifests/172.16.73.113_mariadb.pp:21 on node tm04.virident.info You will find full trace in log /var/tmp/packstack/20141201-145625-wjvcdP/manifests/172.16.73.113_mariadb.pp.log Please check log file /var/tmp/packstack/20141201-145625-wjvcdP/openstack-setup.log for more information Any help would be greatly appreciated. Thanks Sunny -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguelangel at ajo.es Wed Dec 3 09:44:10 2014 From: miguelangel at ajo.es (Miguel Angel) Date: Wed, 3 Dec 2014 10:44:10 +0100 Subject: [Rdo-list] Does Icehouse work with Centos 6.6? In-Reply-To: References: Message-ID: It may work, AFAIK. Can you provide the log dump in /var/tmp/packstack/20141201-145625-wjvcdP/manifests/172.16.73.113_mariadb.pp.log? --- irc: ajo / mangelajo Miguel Angel Ajo Pelayo +34 636 52 25 69 skype: ajoajoajo On Wed, Dec 3, 2014 at 10:06 AM, Saurabh Talwar wrote: > Hi Guys, > > Does Icehouse work with Centos 6.6 since I am getting the following errors > when I do packstack install. > > I have tried Mysql version 5.6.19 as well as 5.6.22 and 5.6.21. I keep on > getting the same error. > > I am able to log into Mysql and the service is up and running when I do > the packstack install. 
> > *Steps:* > > *Add RDO repository* > # yum install -y https://rdo.fedorapeople.org/rdo-release.rpm > > *Install packstack (puppet based) installer* > # yum install -y openstack-packstack > > *Selectively install OpenStack components on **tm04* > *[root at tm04 /]#* packstack --install-hosts=172.16.73.113 > --nagios-install=n --os-ceilometer-install=n --os-neutron-install=n > --novanetwork-pubif=em3 --novanetwork-privif=em1 --novacompute-privif=em3 > --keystone-admin-passwd=0011231 --keystone-demo-passwd=0011231 > --ssh-public-key=/root/.ssh/id_rsa.pub > ?????????.. > Copying Puppet modules and manifests [ DONE ] > Applying 172.16.73.113_prescript.pp > 172.16.73.113_prescript.pp: [ DONE ] > Applying 172.16.73.113_amqp.pp > Applying 172.16.73.113_mariadb.pp > 172.16.73.113_amqp.pp: [ DONE ] > 172.16.73.113_mariadb.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 172.16.73.113_mariadb.pp > Error: Invalid parameter root_password on Class[Mysql::Server] at > /var/tmp/packstack/b067a8012ca74ba9a825f3dfa948a238/manifests/172.16.73.113_mariadb.pp:21 > on node tm04.virident.info > You will find full trace in log > /var/tmp/packstack/20141201-145625-wjvcdP/manifests/172.16.73.113_mariadb.pp.log > Please check log file > /var/tmp/packstack/20141201-145625-wjvcdP/openstack-setup.log for more > information > > > Any Help would be greatly appreciated. > > Thanks > Sunny > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Saurabh.Talwar at hgst.com Wed Dec 3 10:00:48 2014 From: Saurabh.Talwar at hgst.com (Saurabh Talwar) Date: Wed, 3 Dec 2014 10:00:48 +0000 Subject: [Rdo-list] Does Icehouse work with Centos 6.6? In-Reply-To: References: Message-ID: Thanks for your response Miguel! Actually I reformatted the hard drive so I lost my log file. 
Really sorry about that. Now I have a different problem: dependency issues are preventing me from installing openstack-packstack. [root at tm04 mysql]# yum install openstack-packstack Loaded plugins: fastestmirror, refresh-packagekit, security Setting up Install Process Loading mirror speeds from cached hostfile * base: repos.lax.quadranet.com * extras: mirror.keystealth.org * updates: mirror.anl.gov Resolving Dependencies --> Running transaction check ---> Package openstack-packstack.noarch 0:2014.1.1-0.30.dev1258.el6 will be installed --> Processing Dependency: openstack-packstack-puppet = 2014.1.1-0.30.dev1258.el6 for package: openstack-packstack-2014.1.1-0.30.dev1258.el6.noarch --> Processing Dependency: openstack-puppet-modules for package: openstack-packstack-2014.1.1-0.30.dev1258.el6.noarch --> Running transaction check ---> Package openstack-packstack-puppet.noarch 0:2014.1.1-0.30.dev1258.el6 will be installed ---> Package openstack-puppet-modules.noarch 0:2014.1-25.el6 will be installed --> Processing Dependency: rubygem-json for package: openstack-puppet-modules-2014.1-25.el6.noarch --> Finished Dependency Resolution Error: Package: openstack-puppet-modules-2014.1-25.el6.noarch (openstactack-icehouse) Requires: rubygem-json You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest [root at tm04 mysql]# cat /etc/redhat-release CentOS release 6.6 (Final) Thanks Sunny From: Miguel Angel [mailto:miguelangel at ajo.es] Sent: Wednesday, December 03, 2014 1:44 AM To: Saurabh Talwar Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] Does Icehouse work with Centos 6.6? It may work, AFAIK, Can you provide the log dump in /var/tmp/packstack/20141201-145625-wjvcdP/manifests/172.16.73.113_mariadb.pp.log ? 
--- irc: ajo / mangelajo Miguel Angel Ajo Pelayo +34 636 52 25 69 skype: ajoajoajo On Wed, Dec 3, 2014 at 10:06 AM, Saurabh Talwar > wrote: Hi Guys, Does Icehouse work with Centos 6.6 since I am getting the following errors when I do packstack install. I have tried Mysql version 5.6.19 as well as 5.6.22 and 5.6.21. I keep on getting the same error. I am able to log into Mysql and the service is up and running when I do the packstack install. Steps: Add RDO repository # yum install -y https://rdo.fedorapeople.org/rdo-release.rpm Install packstack (puppet based) installer # yum install -y openstack-packstack Selectively install OpenStack components on tm04 [root at tm04 /]# packstack --install-hosts=172.16.73.113 --nagios-install=n --os-ceilometer-install=n --os-neutron-install=n --novanetwork-pubif=em3 --novanetwork-privif=em1 --novacompute-privif=em3 --keystone-admin-passwd=0011231 --keystone-demo-passwd=0011231 --ssh-public-key=/root/.ssh/id_rsa.pub ?????????.. Copying Puppet modules and manifests [ DONE ] Applying 172.16.73.113_prescript.pp 172.16.73.113_prescript.pp: [ DONE ] Applying 172.16.73.113_amqp.pp Applying 172.16.73.113_mariadb.pp 172.16.73.113_amqp.pp: [ DONE ] 172.16.73.113_mariadb.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 172.16.73.113_mariadb.pp Error: Invalid parameter root_password on Class[Mysql::Server] at /var/tmp/packstack/b067a8012ca74ba9a825f3dfa948a238/manifests/172.16.73.113_mariadb.pp:21 on node tm04.virident.info You will find full trace in log /var/tmp/packstack/20141201-145625-wjvcdP/manifests/172.16.73.113_mariadb.pp.log Please check log file /var/tmp/packstack/20141201-145625-wjvcdP/openstack-setup.log for more information Any Help would be greatly appreciated. 
Thanks Sunny _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From gchamoul at redhat.com Wed Dec 3 15:48:59 2014 From: gchamoul at redhat.com (=?iso-8859-1?Q?Ga=EBl?= Chamoulaud) Date: Wed, 3 Dec 2014 16:48:59 +0100 Subject: [Rdo-list] Does Icehouse work with Centos 6.6? In-Reply-To: References: Message-ID: <20141203154859.GA26681@strider.cdg.redhat.com> On 03/Dec/2014 @ 10:00, Saurabh Talwar wrote: > Thanks for your response Miguel! Actually I reformatted the hard drive so I > lost my log file. Really sorry about that. > > > > Now I have a different problem. Dependency issues which are preventing me to > install openstack-packstack. > > > > [root at tm04 mysql]# yum install openstack-packstack > > Loaded plugins: fastestmirror, refresh-packagekit, security > > Setting up Install Process > > Loading mirror speeds from cached hostfile > > * base: repos.lax.quadranet.com > > * extras: mirror.keystealth.org > > * updates: mirror.anl.gov > > Resolving Dependencies > > --> Running transaction check > > ---> Package openstack-packstack.noarch 0:2014.1.1-0.30.dev1258.el6 will be > installed > > --> Processing Dependency: openstack-packstack-puppet = > 2014.1.1-0.30.dev1258.el6 for package: > openstack-packstack-2014.1.1-0.30.dev1258.el6.noarch > > --> Processing Dependency: openstack-puppet-modules for package: > openstack-packstack-2014.1.1-0.30.dev1258.el6.noarch > > --> Running transaction check > > ---> Package openstack-packstack-puppet.noarch 0:2014.1.1-0.30.dev1258.el6 will > be installed > > ---> Package openstack-puppet-modules.noarch 0:2014.1-25.el6 will be installed > > --> Processing Dependency: rubygem-json for package: > openstack-puppet-modules-2014.1-25.el6.noarch > > --> Finished Dependency Resolution > > Error: Package: openstack-puppet-modules-2014.1-25.el6.noarch > 
(openstactack-icehouse) > > Requires: rubygem-json > > You could try using --skip-broken to work around the problem > > You could try running: rpm -Va --nofiles --nodigest > > [root at tm04 mysql]# cat /etc/redhat-release > > CentOS release 6.6 (Final) > Hi Saurabh, I think you forgot to reinstall rdo-icehouse rpm on your new machine ;-) $> yum -y install http://goo.gl/Y0VxSq After this, it should be better ! Best Regards, ~GC -- Gaël Chamoulaud Openstack Engineering Mail: [gchamoul|gael] at redhat dot com IRC: strider/gchamoul (Red Hat), gchamoul (Freenode) GnuPG Key ID: 7F4B301 C75F 15C2 A7FD EBC3 7B2D CE41 0077 6A4B A7F4 B301 Freedom...Courage...Commitment...Accountability -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From gchamoul at redhat.com Wed Dec 3 16:00:54 2014 From: gchamoul at redhat.com (=?iso-8859-1?Q?Ga=EBl?= Chamoulaud) Date: Wed, 3 Dec 2014 17:00:54 +0100 Subject: [Rdo-list] Does Icehouse work with Centos 6.6? In-Reply-To: <20141203154859.GA26681@strider.cdg.redhat.com> References: <20141203154859.GA26681@strider.cdg.redhat.com> Message-ID: <20141203160054.GB26681@strider.cdg.redhat.com> On 03/Dec/2014 @ 16:48, Gaël Chamoulaud wrote: > On 03/Dec/2014 @ 10:00, Saurabh Talwar wrote: > > Thanks for your response Miguel! Actually I reformatted the hard drive so I > > lost my log file. Really sorry about that. > > > > > > > > Now I have a different problem. Dependency issues which are preventing me to > > install openstack-packstack. 
> > > > > > > > [root at tm04 mysql]# yum install openstack-packstack > > > > Loaded plugins: fastestmirror, refresh-packagekit, security > > > > Setting up Install Process > > > > Loading mirror speeds from cached hostfile > > > > * base: repos.lax.quadranet.com > > > > * extras: mirror.keystealth.org > > > > * updates: mirror.anl.gov > > > > Resolving Dependencies > > > > --> Running transaction check > > > > ---> Package openstack-packstack.noarch 0:2014.1.1-0.30.dev1258.el6 will be > > installed > > > > --> Processing Dependency: openstack-packstack-puppet = > > 2014.1.1-0.30.dev1258.el6 for package: > > openstack-packstack-2014.1.1-0.30.dev1258.el6.noarch > > > > --> Processing Dependency: openstack-puppet-modules for package: > > openstack-packstack-2014.1.1-0.30.dev1258.el6.noarch > > > > --> Running transaction check > > > > ---> Package openstack-packstack-puppet.noarch 0:2014.1.1-0.30.dev1258.el6 will > > be installed > > > > ---> Package openstack-puppet-modules.noarch 0:2014.1-25.el6 will be installed > > > > --> Processing Dependency: rubygem-json for package: > > openstack-puppet-modules-2014.1-25.el6.noarch > > > > --> Finished Dependency Resolution > > > > Error: Package: openstack-puppet-modules-2014.1-25.el6.noarch > > (openstactack-icehouse) > > > > Requires: rubygem-json > > > > You could try using --skip-broken to work around the problem > > > > You could try running: rpm -Va --nofiles --nodigest > > > > [root at tm04 mysql]# cat /etc/redhat-release > > > > CentOS release 6.6 (Final) > > > > Hi Saurabh, > > I think you forgot to reinstall rdo-icehouse rpm on your new machine ;-) > > $> yum -y install http://goo.gl/Y0VxSq > > After this, it should be better ! > Or if you really reinstalled rdo-icehouse rpm, try to rebuild the yum cache ? 
>$ yum clean all && yum makecache -- Gaël Chamoulaud Openstack Engineering Mail: [gchamoul|gael] at redhat dot com IRC: strider/gchamoul (Red Hat), gchamoul (Freenode) GnuPG Key ID: 7F4B301 C75F 15C2 A7FD EBC3 7B2D CE41 0077 6A4B A7F4 B301 Freedom...Courage...Commitment...Accountability -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From gchamoul at redhat.com Wed Dec 3 16:10:06 2014 From: gchamoul at redhat.com (=?iso-8859-1?Q?Ga=EBl?= Chamoulaud) Date: Wed, 3 Dec 2014 17:10:06 +0100 Subject: [Rdo-list] Does Icehouse work with Centos 6.6? In-Reply-To: <20141203160054.GB26681@strider.cdg.redhat.com> References: <20141203154859.GA26681@strider.cdg.redhat.com> <20141203160054.GB26681@strider.cdg.redhat.com> Message-ID: <20141203161006.GC26681@strider.cdg.redhat.com> On 03/Dec/2014 @ 17:00, Gaël Chamoulaud wrote: > On 03/Dec/2014 @ 16:48, Gaël Chamoulaud wrote: > > On 03/Dec/2014 @ 10:00, Saurabh Talwar wrote: > > > Thanks for your response Miguel! Actually I reformatted the hard drive so I > > > lost my log file. Really sorry about that. > > > > > > > > > > > > Now I have a different problem. Dependency issues which are preventing me to > > > install openstack-packstack. 
> > > > > > > > > > > > [root at tm04 mysql]# yum install openstack-packstack > > > > > > Loaded plugins: fastestmirror, refresh-packagekit, security > > > > > > Setting up Install Process > > > > > > Loading mirror speeds from cached hostfile > > > > > > * base: repos.lax.quadranet.com > > > > > > * extras: mirror.keystealth.org > > > > > > * updates: mirror.anl.gov > > > > > > Resolving Dependencies > > > > > > --> Running transaction check > > > > > > ---> Package openstack-packstack.noarch 0:2014.1.1-0.30.dev1258.el6 will be > > > installed > > > > > > --> Processing Dependency: openstack-packstack-puppet = > > > 2014.1.1-0.30.dev1258.el6 for package: > > > openstack-packstack-2014.1.1-0.30.dev1258.el6.noarch > > > > > > --> Processing Dependency: openstack-puppet-modules for package: > > > openstack-packstack-2014.1.1-0.30.dev1258.el6.noarch > > > > > > --> Running transaction check > > > > > > ---> Package openstack-packstack-puppet.noarch 0:2014.1.1-0.30.dev1258.el6 will > > > be installed > > > > > > ---> Package openstack-puppet-modules.noarch 0:2014.1-25.el6 will be installed > > > > > > --> Processing Dependency: rubygem-json for package: > > > openstack-puppet-modules-2014.1-25.el6.noarch > > > > > > --> Finished Dependency Resolution > > > > > > Error: Package: openstack-puppet-modules-2014.1-25.el6.noarch > > > (openstactack-icehouse) > > > > > > Requires: rubygem-json > > > > > > You could try using --skip-broken to work around the problem > > > > > > You could try running: rpm -Va --nofiles --nodigest > > > > > > [root at tm04 mysql]# cat /etc/redhat-release > > > > > > CentOS release 6.6 (Final) > > > > > > > Hi Saurabh, > > > > I think you forgot to reinstall rdo-icehouse rpm on your new machine ;-) > > > > $> yum -y install http://goo.gl/Y0VxSq > > > > After this, it should be better ! > > > > Or if you really reinstalled rdo-icehouse rpm, try to rebuild the yum cache ? 
> > >$ yum clean all && yum makecache > BTW, this link https://rdo.fedorapeople.org/rdo-release.rpm is pointing to the last RDO release, Juno-1, and this version is not supported on RHEL6.x/CentOS6.x. So it would be better to install the rdo icehouse rpm release you can find here [1]: [1] - https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm -- Gaël Chamoulaud Openstack Engineering Mail: [gchamoul|gael] at redhat dot com IRC: strider/gchamoul (Red Hat), gchamoul (Freenode) GnuPG Key ID: 7F4B301 C75F 15C2 A7FD EBC3 7B2D CE41 0077 6A4B A7F4 B301 Freedom...Courage...Commitment...Accountability -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From rbowen at redhat.com Wed Dec 3 16:10:18 2014 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 03 Dec 2014 11:10:18 -0500 Subject: [Rdo-list] Cheap, quiet hardware for a small RDO installation? In-Reply-To: <20141203062645.GA1528@augol-pc.tlv.redhat.com> References: <547DDD17.8060302@redhat.com> <20141203062645.GA1528@augol-pc.tlv.redhat.com> Message-ID: <547F35EA.7070502@redhat.com> On 12/03/2014 01:26 AM, Amit Ugol wrote: > For myself, I have recently built an almost inaudible PC, with an i5 and 8GB > RAM after spending a lot of time in silentpcreview.com, if you want to I can > share my build. Changing my SSD with a 1TB HDD and no GPU its about 500$. Yes please. I'd very much like to see what you ended up with. --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From kfiresmith at gmail.com Wed Dec 3 16:17:27 2014 From: kfiresmith at gmail.com (Kodiak Firesmith) Date: Wed, 3 Dec 2014 11:17:27 -0500 Subject: [Rdo-list] Cheap, quiet hardware for a small RDO installation? 
In-Reply-To: References: <547DDD17.8060302@redhat.com> Message-ID: Here's a cart I put together today for $244 that gives 4 cores, 8GB RAM (add $65 for 16GB RAM), 1TB of slow spinning platters (I paid an extra $5 to get 64MB cache over 32MB, but would be curious to see how well the hybrid drives work as a compromise). Couple that PC with the HP NC380T 2xGbe cards for $20 each and you've got a decent SOHO openstack node for about $275 with shipping.

1 x APEX TX-373 Black Steel MicroATX Mid Tower Computer Case, 300W Power Supply (Item #N82E16811154079) - $39.99
1 x AMD Athlon 5350 Kabini Quad-Core 2.05GHz Socket AM1 25W Desktop Processor, AMD Radeon HD 8400 (Item #N82E16819113364) - $59.99
1 x MSI AM1I AM1 SATA 6Gb/s USB 3.0 HDMI Mini ITX AMD Motherboard (Item #N82E16813130759) - $29.99 (was $34.99)
1 x Seagate Barracuda ST1000DM003 1TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive (Item #N82E16822148840) - $49.99 (was $69.99)
1 x Team Elite 8GB 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory, Model TED38G1600C1101 (Item #N82E16820313472) - $64.99

Grand Total: $244.95

If anyone has ideas to get better capabilities for a similar price I'm all ears. (Options might be to look for refurb/liquidated DDR3 PC-1600 DIMMs to shave another ~$20 off each unit) - Kodiak On Tue, Dec 2, 2014 at 10:44 AM, Kodiak Firesmith wrote: > I recently did this using micro-ATX to save money over the shuttle / ITX > form factor. I built around the cheapest low-wattage AMD quad-core APU I > could find on Newegg with 8GB RAM on each box and slow 1tb spinning > platters. I think I got down to about $340 / each for all components > including PCI-e 2x1GB NICs I got second hand from a parts liquidator. If > you are interested in specifics I can try to compile them after work. > > - Kodiak > > On Tue, Dec 2, 2014 at 10:39 AM, Dave Neary wrote: > >> Hi, >> >> I'm looking for ideas of mini PCs I can use for a small RDO cloud - >> looking at maybe 3 shuttle PCs, each with "enough" disk (maybe 800GB >> total storage) and RAM (8GB per PC enough?). Also, I'm wondering if 2 >> NICs is reasonable to ask for. My desired price point is *low* - all 3 >> for under $1000 would be ideal, failing that, as close to it as possible. >> >> Anyone have recommendations for hardware that would serve this process? >> >> Thanks, >> Dave. >> >> -- >> Dave Neary - NFV/SDN Community Strategy >> Open Source and Standards, Red Hat - http://community.redhat.com >> Ph: +1-978-399-2182 / Cell: +1-978-799-3338 >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dneary at redhat.com Wed Dec 3 16:21:52 2014 From: dneary at redhat.com (Dave Neary) Date: Wed, 03 Dec 2014 11:21:52 -0500 Subject: [Rdo-list] Cheap, quiet hardware for a small RDO installation? 
In-Reply-To: References: <547DDD17.8060302@redhat.com> Message-ID: <547F38A0.7060202@redhat.com> Wow - this is awesome! Thank you Kodiak. Dave. On 12/03/2014 11:17 AM, Kodiak Firesmith wrote: > Here's a cart I put together today for $244 that gives 4 cores, 8GB RAM > (add $65 for 16GB RAM), 1TB of slow spinning platters (I paid and extra > $5 to get 64MB cache over 32MB, but would be curious to see how well the > hybrid drives work as a compromise). > > Couple that PC with the HP NC380T 2xGbe cards for $20 each and you've > got a decent SOHO openstack node for about $275 with shipping. > > *Qty.* *Product Description* *Savings* *Total Price* > 1 > > APEX TX-373 Black Steel MicroATX Mid Tower Computer Case 300W Power Supply > > > > APEX TX-373 Black Computer Case > > Item #:N82E16811154079 > Return Policy: Standard Return Policy > > > $39.99 > > Add APEX TX-373 Black Computer Case to cart > > 1 > > AMD Athlon 5350 Kabini Quad-Core 2.05GHz Socket AM1 25W Desktop > Processor AMD Radeon HD 8400 AD5350JAHMBOX > > > > AMD Athlon 5350 2.05GHz Socket AM1 Desktop Processor > > Item #:N82E16819113364 > Return Policy: CPU Replacement Only Return Policy > > > $59.99 > > Add AMD Athlon 5350 2.05GHz Socket AM1 Desktop Processor to cart > > 1 > > MSI AM1I AM1 SATA 6Gb/s USB 3.0 HDMI Mini ITX AMD Motherboard > > > > MSI AM1I Mini ITX AMD Motherboard > > Item #:N82E16813130759 > Return Policy: Standard Return Policy > > > $34.99 > $29.99 > > Add MSI AM1I Mini ITX AMD Motherboard to cart > > 1 > > Seagate Barracuda ST1000DM003 1TB 64MB Cache SATA 6.0Gb/s 3.5" Internal > Hard Drive Bare Drive > > > > Seagate Barracuda ST1000DM003 1TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5" > Internal Hard Drive > > Item #:N82E16822148840 > Return Policy: Iron Egg Guarantee Return Policy > > > $69.99 > $49.99 > > Add Seagate Barracuda ST1000DM003 1TB 7200 RPM 64MB Cache SATA 6.0Gb/s > 3.5" Internal Hard Drive to cart > > 1 > > Team Elite 8GB 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop > MemoryModel 
TED38G1600C1101 > > > > Team Elite 8GB 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory > > Item #:N82E16820313472 > Return Policy: Memory Standard Return Policy > > > $64.99 > > Add Team Elite 8GB 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop > Memory to cart > > *Grand Total:* $244.95 > > > If anyone has ideas to get better capabilities for a similar price I'm > all ears. > (Options might be to look for refurb/liquidated DDR3 PC-1600 DIMMs to > shave another ~$20 off each unit) > > > - Kodiak > > > On Tue, Dec 2, 2014 at 10:44 AM, Kodiak Firesmith > wrote: > > I recently did this using micro-ATX to save money over they shuttle > / ITX form factor. I built around the cheapest low-wattage AMD > quad-core APU I could find on Newegg with 8GB RAM on each box and > slow 1tb spinning platters. I think I got down to about $340 / each > for all components including PCI-e 2x1GB NICs I got second hand from > a parts liquidator. If you are interested in specifics I can try to > compile them after work. > > - Kodiak > > On Tue, Dec 2, 2014 at 10:39 AM, Dave Neary > wrote: > > Hi, > > I'm looking for ideas of mini PCs I can use for a small RDO cloud - > looking at maybe 3 shuttle PCs, each with "enough" disk (maybe 800GB > total storage) and RAM (8GB per PC enough?). Also, I'm wondering > if 2 > NICs is reasonable to ask for. My desired price point is *low* - > all 3 > for under $1000 would be ideal, failing that, as close to it as > possible. > > Anyone have recommendations for hardware that would serve this > process? > > Thanks, > Dave. 
> > -- > Dave Neary - NFV/SDN Community Strategy > Open Source and Standards, Red Hat - http://community.redhat.com > Ph: +1-978-399-2182 / Cell: > +1-978-799-3338 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > > -- Dave Neary - NFV/SDN Community Strategy Open Source and Standards, Red Hat - http://community.redhat.com Ph: +1-978-399-2182 / Cell: +1-978-799-3338 From James.Radtke at siriusxm.com Wed Dec 3 16:27:58 2014 From: James.Radtke at siriusxm.com (Radtke, James) Date: Wed, 3 Dec 2014 16:27:58 +0000 Subject: [Rdo-list] Cheap, quiet hardware for a small RDO installation? In-Reply-To: References: <547DDD17.8060302@redhat.com> , Message-ID: <0D9F522988C72B48AD7045FCC7C2F3FE26E427A9@PDGLMPEXCMBX01.corp.siriusxm.com> Perhaps this would make for a good Wiki page? List a few "labs" (without specifics regarding a retailer) and some of the advantages/gotcha's of each lab setup. I had wished that some of the Quick-Start guides were able to provide this type of guidance (but I completely understand why the don't, or shouldn't) but.. if you have a fairly detailed idea of exactly what you are about to commit to, I think it makes it easier. ________________________________ From: rdo-list-bounces at redhat.com [rdo-list-bounces at redhat.com] on behalf of Kodiak Firesmith [kfiresmith at gmail.com] Sent: Wednesday, December 03, 2014 11:17 AM To: Dave Neary Cc: rdo-list Subject: Re: [Rdo-list] Cheap, quiet hardware for a small RDO installation? Here's a cart I put together today for $244 that gives 4 cores, 8GB RAM (add $65 for 16GB RAM), 1TB of slow spinning platters (I paid and extra $5 to get 64MB cache over 32MB, but would be curious to see how well the hybrid drives work as a compromise). Couple that PC with the HP NC380T 2xGbe cards for $20 each and you've got a decent SOHO openstack node for about $275 with shipping. Qty. 
Product Description Savings Total Price 1 [APEX TX-373 Black Steel MicroATX Mid Tower Computer Case 300W Power Supply] APEX TX-373 Black Computer Case Item #:N82E16811154079 Return Policy: Standard Return Policy $39.99 [Add APEX TX-373 Black Computer Case to cart] 1 [AMD Athlon 5350 Kabini Quad-Core 2.05GHz Socket AM1 25W Desktop Processor AMD Radeon HD 8400 AD5350JAHMBOX] AMD Athlon 5350 2.05GHz Socket AM1 Desktop Processor Item #:N82E16819113364 Return Policy: CPU Replacement Only Return Policy $59.99 [Add AMD Athlon 5350 2.05GHz Socket AM1 Desktop Processor to cart] 1 [MSI AM1I AM1 SATA 6Gb/s USB 3.0 HDMI Mini ITX AMD Motherboard] MSI AM1I Mini ITX AMD Motherboard Item #:N82E16813130759 Return Policy: Standard Return Policy $34.99 $29.99 [Add MSI AM1I Mini ITX AMD Motherboard to cart] 1 [Seagate Barracuda ST1000DM003 1TB 64MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive Bare Drive] Seagate Barracuda ST1000DM003 1TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive Item #:N82E16822148840 Return Policy: Iron Egg Guarantee Return Policy $69.99 $49.99 [Add Seagate Barracuda ST1000DM003 1TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive to cart] 1 [Team Elite 8GB 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop MemoryModel TED38G1600C1101] Team Elite 8GB 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Item #:N82E16820313472 Return Policy: Memory Standard Return Policy $64.99 [Add Team Elite 8GB 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory to cart] Grand Total: $244.95 If anyone has ideas to get better capabilities for a similar price I'm all ears. (Options might be to look for refurb/liquidated DDR3 PC-1600 DIMMs to shave another ~$20 off each unit) - Kodiak On Tue, Dec 2, 2014 at 10:44 AM, Kodiak Firesmith > wrote: I recently did this using micro-ATX to save money over they shuttle / ITX form factor. 
I built around the cheapest low-wattage AMD quad-core APU I could find on Newegg with 8GB RAM on each box and slow 1tb spinning platters. I think I got down to about $340 / each for all components including PCI-e 2x1GB NICs I got second hand from a parts liquidator. If you are interested in specifics I can try to compile them after work. - Kodiak On Tue, Dec 2, 2014 at 10:39 AM, Dave Neary > wrote: Hi, I'm looking for ideas of mini PCs I can use for a small RDO cloud - looking at maybe 3 shuttle PCs, each with "enough" disk (maybe 800GB total storage) and RAM (8GB per PC enough?). Also, I'm wondering if 2 NICs is reasonable to ask for. My desired price point is *low* - all 3 for under $1000 would be ideal, failing that, as close to it as possible. Anyone have recommendations for hardware that would serve this process? Thanks, Dave. -- Dave Neary - NFV/SDN Community Strategy Open Source and Standards, Red Hat - http://community.redhat.com Ph: +1-978-399-2182 / Cell: +1-978-799-3338 _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom at buskey.name Wed Dec 3 18:31:30 2014 From: tom at buskey.name (Tom Buskey) Date: Wed, 3 Dec 2014 13:31:30 -0500 Subject: [Rdo-list] Cheap, quiet hardware for a small RDO installation? In-Reply-To: <547DDD17.8060302@redhat.com> References: <547DDD17.8060302@redhat.com> Message-ID: My intro to Openstack was a talk by someone using 12 HP micro servers in his office. - They are quiet enough. 
- Have an internal USB port to put the OS boot on so all 4 disk bays could be storage
- Have IPMI (iLO/iDRAC) so remote power toggle can work
  = He got an addressable PDU because the IPMI wasn't reliable enough
- Have a PCIe x1 for a gigabit ethernet card
  = there are 1 and 2 port gigabit ethernet cards that work for < $50
- RAM could be upped to 8 GB (later systems can do more)

He added 1 or 2 network switches and was able to power & run it in his office. I think one node is a provisioning server so everything could be PXE booted. If you don't have that, I'd want an IPMI that can use virtual storage and remote console. Supermicro IPMI comes with it (and I've found the power toggle to be reliable enough). iDRAC is a license. Desktops typically do not have IPMI. I think you'd want at least 2 network ports for your nodes and 3+ on the controller. The limiting factor will be RAM, not cores. I'll take a 32 GB quad core system over a 12 core 16GB system. FWIW - does anyone know of a < $500 system (motherboard, cpu, power, case) that can go to 64 GB? On Tue, Dec 2, 2014 at 10:39 AM, Dave Neary wrote: > Hi, > > I'm looking for ideas of mini PCs I can use for a small RDO cloud - > looking at maybe 3 shuttle PCs, each with "enough" disk (maybe 800GB > total storage) and RAM (8GB per PC enough?). Also, I'm wondering if 2 > NICs is reasonable to ask for. My desired price point is *low* - all 3 > for under $1000 would be ideal, failing that, as close to it as possible. > > Anyone have recommendations for hardware that would serve this process? > > Thanks, > Dave. > > -- > Dave Neary - NFV/SDN Community Strategy > Open Source and Standards, Red Hat - http://community.redhat.com > Ph: +1-978-399-2182 / Cell: +1-978-799-3338 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kfiresmith at gmail.com Wed Dec 3 18:47:44 2014 From: kfiresmith at gmail.com (Kodiak Firesmith) Date: Wed, 3 Dec 2014 13:47:44 -0500 Subject: [Rdo-list] Cheap, quiet hardware for a small RDO installation? In-Reply-To: References: <547DDD17.8060302@redhat.com> Message-ID: > FWIW - does anyone know of a < $500 system (motherboard, cpu, power, case) that can go to 64 GB?

1 x HEC Vigilance400 Black Steel MicroATX Mini Tower Computer Case with Dual 8cm Fan, 2x USB2.0, Audio, HP400 400W Power Supply (Item #N82E16811121125) - $44.99
1 x AMD A8-5600K Trinity Quad-Core 3.6GHz (3.9GHz Turbo) Socket FM2 100W Desktop APU (CPU + GPU) with DirectX 11 Graphic (Item #N82E16819113281) - $89.99
1 x GIGABYTE GA-F2A58M-DS2 FM2+ / FM2 AMD A58 (Bolton D2) Micro ATX AMD Motherboard (Item #N82E16813128736) - $48.99 (was $58.99)
1 x AMD Reward Gift - Online Game Code (Item #N82E16800984009) - $9.99

Grand Total: $183.97

(Micro ATX AMD APU based systems are really cheap these days...) Next. 
On Wed, Dec 3, 2014 at 1:31 PM, Tom Buskey wrote: > My intro to Openstack was a talk by someone using 12 HP micro servers in > his office. > > - They are quiet enough. > - Have an internal USB port to put the OS boot on so all 4 disk bays could > be storage > - Have IPMI (iLO/iDRAC) so remote power toggle can work > = He got an addressable PDU bcause the IPMI wasn't reliable enough > - Have a PCIe x1 for a gigabit ethernet card > = there are 1 and 2 port gigabit ethernet cards that work for < $50 > - RAM could be upped to 8 GB (later systems can do more) > > He added 1 or 2 network switches and was able to power & run it in his > office. > > I think one node is a provisioning server so everything could be PXE > booted. If you don't have that, I'd want an IPMI that can use virtual > storage and remote console. Supermicro IPMI comes with it (and I've found > the power toggle to be reliable enough). iDRAC is a license. Desktops > typically do not have IPMI. > > I think you'd want at least 2 network ports for your nodes and 3+ on the > controller. > The limiting factor will be RAM, not cores. I'll take a 32 GB quad core > system over a 12 core 16GB system. > > FWIW - does anyone know of a < $500 system (motherboard, cpu, power, case) > that can go to 64 GB? > > > > > On Tue, Dec 2, 2014 at 10:39 AM, Dave Neary wrote: > >> Hi, >> >> I'm looking for ideas of mini PCs I can use for a small RDO cloud - >> looking at maybe 3 shuttle PCs, each with "enough" disk (maybe 800GB >> total storage) and RAM (8GB per PC enough?). Also, I'm wondering if 2 >> NICs is reasonable to ask for. My desired price point is *low* - all 3 >> for under $1000 would be ideal, failing that, as close to it as possible. >> >> Anyone have recommendations for hardware that would serve this process? >> >> Thanks, >> Dave. 
>> >> -- >> Dave Neary - NFV/SDN Community Strategy >> Open Source and Standards, Red Hat - http://community.redhat.com >> Ph: +1-978-399-2182 / Cell: +1-978-799-3338 >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dneary at redhat.com Wed Dec 3 23:14:33 2014 From: dneary at redhat.com (Dave Neary) Date: Wed, 03 Dec 2014 18:14:33 -0500 Subject: [Rdo-list] Cheap, quiet hardware for a small RDO installation? In-Reply-To: <0D9F522988C72B48AD7045FCC7C2F3FE26E427A9@PDGLMPEXCMBX01.corp.siriusxm.com> References: <547DDD17.8060302@redhat.com> , <0D9F522988C72B48AD7045FCC7C2F3FE26E427A9@PDGLMPEXCMBX01.corp.siriusxm.com> Message-ID: <547F9959.6020408@redhat.com> On 12/03/2014 11:27 AM, Radtke, James wrote: > Perhaps this would make for a good Wiki page? Totally! I took the liberty of creating one, and putting the hardware suggested above in there: https://openstack.redhat.com/Home_lab Can others add their home lab set-ups, if they have one, please? Thanks, Dave. > List a few "labs" (without specifics regarding a retailer) and some of > the advantages/gotcha's of each lab setup. > > I had wished that some of the Quick-Start guides were able to provide > this type of guidance (but I completely understand why the don't, or > shouldn't) but.. if you have a fairly detailed idea of exactly what you > are about to commit to, I think it makes it easier. 
> ------------------------------------------------------------------------
> *From:* rdo-list-bounces at redhat.com [rdo-list-bounces at redhat.com] on
> behalf of Kodiak Firesmith [kfiresmith at gmail.com]
> *Sent:* Wednesday, December 03, 2014 11:17 AM
> *To:* Dave Neary
> *Cc:* rdo-list
> *Subject:* Re: [Rdo-list] Cheap, quiet hardware for a small RDO
> installation?
>
> Here's a cart I put together today for $244 that gives 4 cores, 8GB RAM
> (add $65 for 16GB RAM), and 1TB of slow spinning platters (I paid an
> extra $5 to get 64MB cache over 32MB, but would be curious to see how
> well the hybrid drives work as a compromise).
>
> Couple that PC with the HP NC380T 2xGbe cards for $20 each and you've
> got a decent SOHO openstack node for about $275 with shipping.
>
> Qty.  Product Description                                       Total Price
>  1    APEX TX-373 Black Steel MicroATX Mid Tower Computer Case,
>       300W Power Supply                                         $39.99
>  1    AMD Athlon 5350 Kabini Quad-Core 2.05GHz Socket AM1 25W
>       Desktop Processor, AMD Radeon HD 8400 (AD5350JAHMBOX)     $59.99
>  1    MSI AM1I AM1 SATA 6Gb/s USB 3.0 HDMI Mini ITX AMD
>       Motherboard                                               $29.99
>  1    Seagate Barracuda ST1000DM003 1TB 7200 RPM 64MB Cache
>       SATA 6.0Gb/s 3.5" Internal Hard Drive                     $49.99
>  1    Team Elite 8GB 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800)
>       Desktop Memory, Model TED38G1600C1101                     $64.99
>
> Grand Total: $244.95
>
> If anyone has ideas to get better capabilities for a similar price I'm
> all ears.
> (Options might be to look for refurb/liquidated DDR3 PC-1600 DIMMs to
> shave another ~$20 off each unit)
>
> - Kodiak
>
> On Tue, Dec 2, 2014 at 10:44 AM, Kodiak Firesmith
> wrote:
>
>     I recently did this using micro-ATX to save money over the shuttle
>     / ITX form factor. I built around the cheapest low-wattage AMD
>     quad-core APU I could find on Newegg with 8GB RAM on each box and
>     slow 1TB spinning platters. I think I got down to about $340 each
>     for all components, including PCI-e 2x1GB NICs I got second hand
>     from a parts liquidator. If you are interested in specifics I can
>     try to compile them after work.
>
>     - Kodiak
>
>     On Tue, Dec 2, 2014 at 10:39 AM, Dave Neary wrote:
>
>         Hi,
>
>         I'm looking for ideas of mini PCs I can use for a small RDO
>         cloud - looking at maybe 3 shuttle PCs, each with "enough" disk
>         (maybe 800GB total storage) and RAM (8GB per PC enough?). Also,
>         I'm wondering if 2 NICs is reasonable to ask for. My desired
>         price point is *low* - all 3 for under $1000 would be ideal,
>         failing that, as close to it as possible.
>
>         Anyone have recommendations for hardware that would serve this
>         process?
>
>         Thanks,
>         Dave.
>
> --
> Dave Neary - NFV/SDN Community Strategy
> Open Source and Standards, Red Hat - http://community.redhat.com
> Ph: +1-978-399-2182 / Cell: +1-978-799-3338
>
> _______________________________________________
> Rdo-list mailing list
> Rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list

--
Dave Neary - NFV/SDN Community Strategy
Open Source and Standards, Red Hat - http://community.redhat.com
Ph: +1-978-399-2182 / Cell: +1-978-799-3338

From dneary at redhat.com  Thu Dec 4 00:18:57 2014
From: dneary at redhat.com (Dave Neary)
Date: Wed, 03 Dec 2014 19:18:57 -0500
Subject: [Rdo-list] RabbitMQ issue when starting Nova service on compute node
Message-ID: <547FA871.9090902@redhat.com>

Hi all,

I hit an issue today when installing RDO on 3 nodes (VMs) - when I got
to the point of starting the Nova service on the compute nodes, the
install crapped out. Logs weren't helpful, but some web searching
uncovered this Ask question:
https://ask.openstack.org/en/question/48329/openstack-juno-using-rdo-fails-installation-amqp-server-closed-the-connection/

It turns out that the default RabbitMQ config file does not allow remote
connection of the "guest/guest" user.
https://www.rabbitmq.com/access-control.html

I created a bug for the issue here (didn't find one before):
https://bugzilla.redhat.com/show_bug.cgi?id=1170385

The fixes would be straightforward: use a non-guest AMQP user and
password, or enable remote connection for the RabbitMQ guest user. But I
can't figure out how to do either of those - I don't think that

CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER

in the answer file are what I'm looking for, and I don't see any way to
update the RabbitMQ config file in amqp.pp.

There is another option: use qpid as the AMQP provider - but I wanted
to do a default install if at all possible.

Has anyone else hit this issue, and how did you get past it?

Thanks,
Dave.
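The "remote connection for the guest user" knob is RabbitMQ's loopback_users setting, which by default confines the guest account to localhost. A minimal sketch of what lifting that restriction in /etc/rabbitmq/rabbitmq.config could look like, assuming a stock file with no other settings that would need merging:

```erlang
%% /etc/rabbitmq/rabbitmq.config -- sketch only, not the packstack-managed
%% file. The default is {loopback_users, [<<"guest">>]}, which restricts
%% the guest account to loopback connections.
[
  {rabbit, [
    %% An empty list lifts the restriction, so guest/guest can log in
    %% from remote hosts (e.g. the compute nodes) again.
    {loopback_users, []}
  ]}
].
```

RabbitMQ has to be restarted for the change to take effect; leaving guest open remotely is only sensible on a trusted test network.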
--
Dave Neary - NFV/SDN Community Strategy
Open Source and Standards, Red Hat - http://community.redhat.com
Ph: +1-978-399-2182 / Cell: +1-978-799-3338

From apevec at gmail.com  Thu Dec 4 14:12:26 2014
From: apevec at gmail.com (Alan Pevec)
Date: Thu, 4 Dec 2014 15:12:26 +0100
Subject: [Rdo-list] Does Icehouse work with Centos 6.6?
In-Reply-To: <20141203161006.GC26681@strider.cdg.redhat.com>
References: <20141203154859.GA26681@strider.cdg.redhat.com> <20141203160054.GB26681@strider.cdg.redhat.com> <20141203161006.GC26681@strider.cdg.redhat.com>
Message-ID:

> So it would be better to install the rdo icehouse rpm release you can find
> here [1]:
>
> [1] - https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm

There's also a version-independent redirect pointing to the latest
version at rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm

Cheers,
Alan

From rbowen at redhat.com  Thu Dec 4 14:16:51 2014
From: rbowen at redhat.com (Rich Bowen)
Date: Thu, 04 Dec 2014 09:16:51 -0500
Subject: [Rdo-list] Does Icehouse work with Centos 6.6?
In-Reply-To:
References: <20141203154859.GA26681@strider.cdg.redhat.com> <20141203160054.GB26681@strider.cdg.redhat.com> <20141203161006.GC26681@strider.cdg.redhat.com>
Message-ID: <54806CD3.90200@redhat.com>

On 12/04/2014 09:12 AM, Alan Pevec wrote:
>> So it would be better to install the rdo icehouse rpm release you can find
>> here [1]:
>>
>> [1] - https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
>
> There's also version independent redirect pointing to the latest
> version at rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm

I've called this out a little more obviously in the QuickStart now.
--Rich

--
Rich Bowen - rbowen at redhat.com
OpenStack Community Liaison
http://openstack.redhat.com/

From kchamart at redhat.com  Thu Dec 4 14:31:02 2014
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Thu, 4 Dec 2014 15:31:02 +0100
Subject: [Rdo-list] RabbitMQ issue when starting Nova service on compute node
In-Reply-To: <547FA871.9090902@redhat.com>
References: <547FA871.9090902@redhat.com>
Message-ID: <20141204143102.GG11899@tesla.redhat.com>

On Wed, Dec 03, 2014 at 07:18:57PM -0500, Dave Neary wrote:
> Hi all,
>
> I hit an issue today when installing RDO on 3 nodes (VMs) - when I got
> to the point of starting the Nova service on the compute nodes, the
> install crapped out. Logs weren't helpful, but some webs earching
> uncovered this Ask question:
> https://ask.openstack.org/en/question/48329/openstack-juno-using-rdo-fails-installation-amqp-server-closed-the-connection/
>
> It turns out that the default RabbitMQ config file does not allow remote
> connection of "guest/guest" user.
> https://www.rabbitmq.com/access-control.html
>
> I created a bug for the issue here (didn't find one before):
> https://bugzilla.redhat.com/show_bug.cgi?id=1170385

I'm not a RabbitMQ expert, but you might also want to add information
like (from appropriate hosts)

    $ rabbitmqctl status

to the bug.

> The fixes would be straightforward: use a non-guest AMQP user and
> password, or enable remote connection for the RabbitMQ guest user. But I
> can't figure out how to do either of those - I don't think that
>
> CONFIG_AMQP_AUTH_USER=amqp_user
> CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER

If it's a test environment, you can try adding the user manually, maybe:

    $ sudo rabbitmqctl add_user {username} {password}

Ensure it's added correctly:

    $ sudo rabbitmqctl list_users

I remember once having to do that a while ago.
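One detail worth adding to the rabbitmqctl approach above: a newly added user has no permissions on any vhost until they are granted explicitly. A hedged sketch of the full sequence, where the user name and password are placeholders rather than values from this thread:

```shell
# Placeholder credentials -- substitute your own. After add_user, the
# account still needs permissions on the vhost before services can use it.
sudo rabbitmqctl add_user amqp_user SECRET
sudo rabbitmqctl set_permissions -p / amqp_user ".*" ".*" ".*"

# Verify: the user should be listed, with configure/write/read on "/".
sudo rabbitmqctl list_users
sudo rabbitmqctl list_permissions -p /
```

The three ".*" arguments are the configure, write, and read permission patterns for the "/" vhost; the services' config would then point at the new credentials instead of guest/guest.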
-- /kashyap From jeckersb at redhat.com Thu Dec 4 14:34:49 2014 From: jeckersb at redhat.com (John Eckersberg) Date: Thu, 04 Dec 2014 09:34:49 -0500 Subject: [Rdo-list] RabbitMQ issue when starting Nova service on compute node In-Reply-To: <547FA871.9090902@redhat.com> References: <547FA871.9090902@redhat.com> Message-ID: <8761drmqk6.fsf@redhat.com> Dave Neary writes: > Hi all, > > I hit an issue today when installing RDO on 3 nodes (VMs) - when I got > to the point of starting the Nova service on the compute nodes, the > install crapped out. Logs weren't helpful, but some webs earching > uncovered this Ask question: > https://ask.openstack.org/en/question/48329/openstack-juno-using-rdo-fails-installation-amqp-server-closed-the-connection/ > > It turns out that the default RabbitMQ config file does not allow remote > connection of "guest/guest" user. > https://www.rabbitmq.com/access-control.html > > I created a bug for the issue here (didn't find one before): > https://bugzilla.redhat.com/show_bug.cgi?id=1170385 > > The fixes would be straightforward: use a non-guest AMQP user and > password, or enable remote connection for the RabbitMQ guest user. But I > can't figure out how to do either of those - I don't think that > > CONFIG_AMQP_AUTH_USER=amqp_user > CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER > > in the answer file are what I'm looking for, I don't see any way to > update the RabbitMQ config file in amqp.pp > > There is another optiuon: use qpid as the AMQP provider - but I wanted > to do a default install if at all possible. > > Has anyone else hit this issue, and how did you get past it? > > Thanks, > Dave. If you want to just turn the guest account back on, you could update wherever the top-level rabbitmq puppet class gets called in packstack and set something like... 
config_variables => {'loopback_users' => '[]'}

eck

From zhenhua2000 at gmail.com  Thu Dec 4 15:08:09 2014
From: zhenhua2000 at gmail.com (Zhang Zhenhua)
Date: Thu, 4 Dec 2014 23:08:09 +0800
Subject: [Rdo-list] install multiple controller nodes by packstack
Message-ID:

Dear all,

I am using packstack to deploy a new private cloud now. Does packstack
support deploying two or more controller nodes? I mean, I just want to
deploy a minimal 'just works' HA testbed for our private cloud.

Regards,
Thanks!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pmyers at redhat.com  Thu Dec 4 15:12:10 2014
From: pmyers at redhat.com (Perry Myers)
Date: Thu, 04 Dec 2014 10:12:10 -0500
Subject: [Rdo-list] install multiple controller nodes by packstack
In-Reply-To:
References:
Message-ID: <548079CA.4090804@redhat.com>

On 12/04/2014 10:08 AM, Zhang Zhenhua wrote:
> Dear all,
>
> I am using packstack to deploy a new private cloud now. Does packstack
> support to deploy two or more controller node? I mean I just want to
> deploy a minimal 'just as work' HA testbed for our private cloud.

Packstack is meant for demos and simple installs, and as such it doesn't
have any support or understanding for deploying OpenStack in an HA
configuration.

Perry

From rbowen at redhat.com  Thu Dec 4 15:16:19 2014
From: rbowen at redhat.com (Rich Bowen)
Date: Thu, 04 Dec 2014 10:16:19 -0500
Subject: [Rdo-list] install multiple controller nodes by packstack
In-Reply-To:
References:
Message-ID: <54807AC3.9090606@redhat.com>

On 12/04/2014 10:08 AM, Zhang Zhenhua wrote:
> Dear all,
>
> I am using packstack to deploy a new private cloud now. Does packstack
> support to deploy two or more controller node? I mean I just want to
> deploy a minimal 'just as work' HA testbed for our private cloud.
> We've collected a variety of docs on setting up HA OpenStack at https://openstack.redhat.com/Setting-up-High-Availability -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From danofsatx at gmail.com Thu Dec 4 16:29:31 2014 From: danofsatx at gmail.com (Dan Mossor) Date: Thu, 04 Dec 2014 10:29:31 -0600 Subject: [Rdo-list] Packstack, Neutron, and Openvswitch Message-ID: <54808BEB.5000200@gmail.com> Howdy folks! I am still trying to get an Openstack deployment working using packstack. I've done a lot of reading, but apparently not quite enough since I can't seem to get my compute nodes to talk to the network. Any pointers anyone can give would be *greatly* appreciated. Here's the setup: Controller - 1 NIC, enp0s25 Compute Node node3: 3 NICs. enp0s25 mgmt, enp1s0 and enp3s0 slaved to bond0 Compute Node node4: 3 NICs. enp0s25 mgmt, enp1s0 and enp2s0 slaved to bond0 I wanted to deploy the neutron services to the compute nodes to take advantage of the bonded interfaces. The trouble is, I don't think I have my answer file [1] set up properly yet. After the packstack deployment, this is what I have on node3 (I'm going to concentrate solely on this system, as the only difference in node4 is one of the physical interface names). 
[root at node3 ~]# ip link show 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: enp0s25: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000 link/ether 00:22:19:30:67:04 brd ff:ff:ff:ff:ff:ff 3: enp1s0: mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT qlen 1000 link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff 4: enp3s0: mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT qlen 1000 link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff 5: bond0: mtu 1500 qdisc noqueue master ovs-system state UP mode DEFAULT link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff 7: ovs-system: mtu 1500 qdisc noop state DOWN mode DEFAULT link/ether 76:2d:a5:ea:77:58 brd ff:ff:ff:ff:ff:ff 8: br-int: mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT link/ether e6:ff:b9:c0:85:47 brd ff:ff:ff:ff:ff:ff 11: br-ex: mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT link/ether 7a:74:54:18:6d:45 brd ff:ff:ff:ff:ff:ff 12: br-bond0: mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff 13: br-tun: mtu 1500 qdisc noop state DOWN mode DEFAULT link/ether 72:58:fa:b0:8c:45 brd ff:ff:ff:ff:ff:ff [root at node3 ~]# ovs-vsctl show ca6d23ad-c88e-48db-9ace-6a3aff767460 Bridge br-ex Port br-ex Interface br-ex type: internal Bridge br-tun Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Port br-tun Interface br-tun type: internal Port "vxlan-0a010168" Interface "vxlan-0a010168" type: vxlan options: {df_default="true", in_key=flow, local_ip="10.1.1.103", out_key=flow, remote_ip="10.1.1.104"} Bridge "br-bond0" Port "phy-br-bond0" Interface "phy-br-bond0" type: patch options: {peer="int-br-bond0"} Port "bond0" Interface "bond0" Port "br-bond0" Interface "br-bond0" type: internal Bridge br-int fail_mode: secure Port "int-br-bond0" Interface "int-br-bond0" type: patch options: {peer="phy-br-bond0"} Port br-int Interface br-int type: internal Port patch-tun 
Interface patch-tun type: patch options: {peer=patch-int} ovs_version: "2.1.3" The trouble lies in the fact that I have NO IDEA how to use Open vSwitch. None. This ovs-vsctl output is foreign to me, and makes no sense. At the very least, I'm simply looking for a good reference - so far, I've not been able to find decent documentation. Does it exist? Thanks, Dan [1] http://fpaste.org/156624/ -- Dan Mossor, RHCSA Systems Engineer at Large Fedora Plasma Product WG | Fedora QA Team | Fedora Server SIG Fedora Infrastructure Apprentice FAS: dmossor IRC: danofsatx San Antonio, Texas, USA From vmindru at redhat.com Thu Dec 4 16:35:10 2014 From: vmindru at redhat.com (Veaceslav (Slava) Mindru) Date: Thu, 4 Dec 2014 17:35:10 +0100 Subject: [Rdo-list] Packstack, Neutron, and Openvswitch In-Reply-To: <54808BEB.5000200@gmail.com> References: <54808BEB.5000200@gmail.com> Message-ID: <20141204163510.GC27197@redhat.com> Hi, did you try this one? https://openstack.redhat.com/Neutron_with_existing_external_network I think you need your external NIC to be part of br-ex. VM On 04/12/14 10:29 -0600, Dan Mossor wrote: >Howdy folks! > >I am still trying to get an Openstack deployment working using >packstack. I've done a lot of reading, but apparently not quite enough >since I can't seem to get my compute nodes to talk to the network. Any >pointers anyone can give would be *greatly* appreciated. > >Here's the setup: >Controller - 1 NIC, enp0s25 >Compute Node node3: 3 NICs. enp0s25 mgmt, enp1s0 and enp3s0 slaved to bond0 >Compute Node node4: 3 NICs. enp0s25 mgmt, enp1s0 and enp2s0 slaved to bond0 > >I wanted to deploy the neutron services to the compute nodes to take >advantage of the bonded interfaces. The trouble is, I don't think I >have my answer file [1] set up properly yet. > >After the packstack deployment, this is what I have on node3 (I'm >going to concentrate solely on this system, as the only difference in >node4 is one of the physical interface names).
> >[root at node3 ~]# ip link show >1: lo: mtu 65536 qdisc noqueue state UNKNOWN >mode DEFAULT > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >2: enp0s25: mtu 1500 qdisc >pfifo_fast state UP mode DEFAULT qlen 1000 > link/ether 00:22:19:30:67:04 brd ff:ff:ff:ff:ff:ff >3: enp1s0: mtu 1500 qdisc >pfifo_fast master bond0 state UP mode DEFAULT qlen 1000 > link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff >4: enp3s0: mtu 1500 qdisc >pfifo_fast master bond0 state UP mode DEFAULT qlen 1000 > link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff >5: bond0: mtu 1500 qdisc >noqueue master ovs-system state UP mode DEFAULT > link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff >7: ovs-system: mtu 1500 qdisc noop state DOWN >mode DEFAULT > link/ether 76:2d:a5:ea:77:58 brd ff:ff:ff:ff:ff:ff >8: br-int: mtu 1500 qdisc noqueue >state UNKNOWN mode DEFAULT > link/ether e6:ff:b9:c0:85:47 brd ff:ff:ff:ff:ff:ff >11: br-ex: mtu 1500 qdisc noqueue >state UNKNOWN mode DEFAULT > link/ether 7a:74:54:18:6d:45 brd ff:ff:ff:ff:ff:ff >12: br-bond0: mtu 1500 qdisc noqueue >state UNKNOWN mode DEFAULT > link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff >13: br-tun: mtu 1500 qdisc noop state DOWN mode >DEFAULT > link/ether 72:58:fa:b0:8c:45 brd ff:ff:ff:ff:ff:ff >[root at node3 ~]# ovs-vsctl show >ca6d23ad-c88e-48db-9ace-6a3aff767460 > Bridge br-ex > Port br-ex > Interface br-ex > type: internal > Bridge br-tun > Port patch-int > Interface patch-int > type: patch > options: {peer=patch-tun} > Port br-tun > Interface br-tun > type: internal > Port "vxlan-0a010168" > Interface "vxlan-0a010168" > type: vxlan > options: {df_default="true", in_key=flow, >local_ip="10.1.1.103", out_key=flow, remote_ip="10.1.1.104"} > Bridge "br-bond0" > Port "phy-br-bond0" > Interface "phy-br-bond0" > type: patch > options: {peer="int-br-bond0"} > Port "bond0" > Interface "bond0" > Port "br-bond0" > Interface "br-bond0" > type: internal > Bridge br-int > fail_mode: secure > Port "int-br-bond0" > Interface 
"int-br-bond0" > type: patch > options: {peer="phy-br-bond0"} > Port br-int > Interface br-int > type: internal > Port patch-tun > Interface patch-tun > type: patch > options: {peer=patch-int} > ovs_version: "2.1.3" > > >The trouble lies in the fact that I have NO IDEA how to use >openvirtualswitch. None. This ovs-vsctl output is foreign to me, and >makes no sense. > >At the very least, I'm simply looking for a good reference - so far, >I've not been able to find decent documentation. Does it exist? > >Thanks, >Dan > >[1] http://fpaste.org/156624/ > >-- >Dan Mossor, RHCSA >Systems Engineer at Large >Fedora Plasma Product WG | Fedora QA Team | Fedora Server SIG >Fedora Infrastructure Apprentice >FAS: dmossor IRC: danofsatx >San Antonio, Texas, USA > >_______________________________________________ >Rdo-list mailing list >Rdo-list at redhat.com >https://www.redhat.com/mailman/listinfo/rdo-list From danofsatx at gmail.com Thu Dec 4 16:50:48 2014 From: danofsatx at gmail.com (Dan Mossor) Date: Thu, 04 Dec 2014 10:50:48 -0600 Subject: [Rdo-list] Packstack, Neutron, and Openvswitch In-Reply-To: <20141204163510.GC27197@redhat.com> References: <54808BEB.5000200@gmail.com> <20141204163510.GC27197@redhat.com> Message-ID: <548090E8.5090103@gmail.com> On 12/04/2014 10:35 AM, Veaceslav (Slava) Mindru wrote: > Hi > did you try this one ? > > https://openstack.redhat.com/Neutron_with_existing_external_network > > I think you are missing your external NIC to be part of br-ex. > > VM > Hmmm....it appears packstack did something here. Both of my physical interfaces still point to bond0 as their master - should they point to br-bond0? 
[root at node3 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0 DEVICE=bond0 DEVICETYPE=ovs TYPE=OVSPort OVS_BRIDGE=br-bond0 ONBOOT=yes BOOTPROTO=none [root at node3 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-bond0 UUID=9dfa09a2-309a-4f4f-967c-37d931748f70 NAME=bond0 ONBOOT=yes BONDING_MASTER=yes BONDING_OPTS="miimon=100 mode=802.3ad" DEFROUTE=no PEERDNS=no PEERROUTES=no IPV4_FAILURE_FATAL=no IPV6INIT=yes IPV6_AUTOCONF=no IPV6_DEFROUTE=no IPV6_PEERDNS=no IPV6_PEERROUTES=no IPV6_FAILURE_FATAL=no DEVICE=br-bond0 DEVICETYPE=ovs OVSBOOTPROTO=none TYPE=OVSBridge [root at node3 ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp1s0 TYPE=Ethernet HWADDR=00:1B:21:AB:D5:1A UUID=59a1a7a0-4023-494f-91bb-6865c9a966bd NAME=eth1 ONBOOT=yes MASTER=bond0 SLAVE=yes [root at node3 ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp3s0 TYPE=Ethernet HWADDR=00:1B:21:AB:D6:A1 UUID=e5dfe837-ae88-482e-a6f6-2aafcea36cdc NAME=eth2 ONBOOT=yes MASTER=bond0 SLAVE=yes -- Dan Mossor, RHCSA Systems Engineer at Large Fedora Plasma Product WG | Fedora QA Team | Fedora Server SIG Fedora Infrastructure Apprentice FAS: dmossor IRC: danofsatx San Antonio, Texas, USA From Brian.Afshar at emc.com Thu Dec 4 16:55:25 2014 From: Brian.Afshar at emc.com (Afshar, Brian) Date: Thu, 4 Dec 2014 11:55:25 -0500 Subject: [Rdo-list] Packstack, Neutron, and Openvswitch In-Reply-To: <54808BEB.5000200@gmail.com> References: <54808BEB.5000200@gmail.com> Message-ID: <9E8EE5E176B2BD49913B2F69B369AD8302126D0C36@MX02A.corp.emc.com> As for your answers.txt file, if you haven't followed these steps, make sure that you can ping your compute node(s) from your controller node first, then follow these commands: # yum install openstack-packstack -y # packstack --gen-answer-file=openstack-answers.txt Once your answers.txt file is generated, you will need to edit it (vi) and provide information about your node(s). Hope that gives you a running start...at least! 
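For what it's worth, the vi step can also be scripted. A sketch follows: the CONFIG_* key names are real packstack options, but the IP values and the stand-in answer file below are invented for illustration (a real file comes from the --gen-answer-file run above):

```shell
# Stand-in for a generated answer file, so the edit can be shown
# end-to-end (a real one comes from: packstack --gen-answer-file=...).
cat > openstack-answers.txt <<'EOF'
CONFIG_CONTROLLER_HOST=192.168.1.1
CONFIG_COMPUTE_HOSTS=192.168.1.1
CONFIG_NETWORK_HOSTS=192.168.1.1
EOF

# Point the controller and compute roles at the intended nodes instead
# of hand-editing in vi. The IPs here are placeholders for this sketch.
sed -i \
    -e 's/^CONFIG_CONTROLLER_HOST=.*/CONFIG_CONTROLLER_HOST=10.1.1.101/' \
    -e 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=10.1.1.103,10.1.1.104/' \
    openstack-answers.txt

# Show the edited keys for a quick eyeball check.
grep -E '^CONFIG_(CONTROLLER_HOST|COMPUTE_HOSTS)=' openstack-answers.txt
```

Running packstack --answer-file=openstack-answers.txt would then pick up the edited values.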
Regards, Brian -----Original Message----- From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Dan Mossor Sent: Thursday, December 04, 2014 8:30 AM To: rdo-list at redhat.com Subject: [Rdo-list] Packstack, Neutron, and Openvswitch Howdy folks! I am still trying to get an Openstack deployment working using packstack. I've done a lot of reading, but apparently not quite enough since I can't seem to get my compute nodes to talk to the network. Any pointers anyone can give would be *greatly* appreciated. Here's the setup: Controller - 1 NIC, enp0s25 Compute Node node3: 3 NICs. enp0s25 mgmt, enp1s0 and enp3s0 slaved to bond0 Compute Node node4: 3 NICs. enp0s25 mgmt, enp1s0 and enp2s0 slaved to bond0 I wanted to deploy the neutron services to the compute nodes to take advantage of the bonded interfaces. The trouble is, I don't think I have my answer file [1] set up properly yet. After the packstack deployment, this is what I have on node3 (I'm going to concentrate solely on this system, as the only difference in node4 is one of the physical interface names). 
[root at node3 ~]# ip link show 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: enp0s25: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000 link/ether 00:22:19:30:67:04 brd ff:ff:ff:ff:ff:ff 3: enp1s0: mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT qlen 1000 link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff 4: enp3s0: mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT qlen 1000 link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff 5: bond0: mtu 1500 qdisc noqueue master ovs-system state UP mode DEFAULT link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff 7: ovs-system: mtu 1500 qdisc noop state DOWN mode DEFAULT link/ether 76:2d:a5:ea:77:58 brd ff:ff:ff:ff:ff:ff 8: br-int: mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT link/ether e6:ff:b9:c0:85:47 brd ff:ff:ff:ff:ff:ff 11: br-ex: mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT link/ether 7a:74:54:18:6d:45 brd ff:ff:ff:ff:ff:ff 12: br-bond0: mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff 13: br-tun: mtu 1500 qdisc noop state DOWN mode DEFAULT link/ether 72:58:fa:b0:8c:45 brd ff:ff:ff:ff:ff:ff [root at node3 ~]# ovs-vsctl show ca6d23ad-c88e-48db-9ace-6a3aff767460 Bridge br-ex Port br-ex Interface br-ex type: internal Bridge br-tun Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Port br-tun Interface br-tun type: internal Port "vxlan-0a010168" Interface "vxlan-0a010168" type: vxlan options: {df_default="true", in_key=flow, local_ip="10.1.1.103", out_key=flow, remote_ip="10.1.1.104"} Bridge "br-bond0" Port "phy-br-bond0" Interface "phy-br-bond0" type: patch options: {peer="int-br-bond0"} Port "bond0" Interface "bond0" Port "br-bond0" Interface "br-bond0" type: internal Bridge br-int fail_mode: secure Port "int-br-bond0" Interface "int-br-bond0" type: patch options: {peer="phy-br-bond0"} Port br-int Interface br-int type: internal Port patch-tun 
Interface patch-tun > type: patch > options: {peer=patch-int} > ovs_version: "2.1.3" > > > The trouble lies in the fact that I have NO IDEA how to use > openvirtualswitch. None. This ovs-vsctl output is foreign to me, and > makes no sense. > > At the very least, I'm simply looking for a good reference - so far, > I've not been able to find decent documentation. Does it exist? > > Thanks, > Dan > > [1] http://fpaste.org/156624/ > > -- > Dan Mossor, RHCSA > Systems Engineer at Large > Fedora Plasma Product WG | Fedora QA Team | Fedora Server SIG > Fedora Infrastructure Apprentice > FAS: dmossor IRC: danofsatx > San Antonio, Texas, USA > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From tom at buskey.name Thu Dec 4 18:28:15 2014 From: tom at buskey.name (Tom Buskey) Date: Thu, 4 Dec 2014 13:28:15 -0500 Subject: [Rdo-list] Does Icehouse work with Centos 6.6? In-Reply-To: <20141203161006.GC26681@strider.cdg.redhat.com> References: <20141203154859.GA26681@strider.cdg.redhat.com> <20141203160054.GB26681@strider.cdg.redhat.com> <20141203161006.GC26681@strider.cdg.redhat.com> Message-ID: I've been building with https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm on CentOS 6.5, and a yum update to 6.6 after running packstack worked just fine. On Wed, Dec 3, 2014 at 11:10 AM, Gaël Chamoulaud wrote: > On 03/Dec/2014 @ 17:00, Gaël Chamoulaud wrote: > > On 03/Dec/2014 @ 16:48, Gaël Chamoulaud wrote: > > > On 03/Dec/2014 @ 10:00, Saurabh Talwar wrote: > > > > Thanks for your response Miguel! Actually I reformatted the hard > drive so I > > > > lost my log file. Really sorry about that. > > > > > > > > > > > > > > > > Now I have a different problem. Dependency issues which are > preventing me to > > > > install openstack-packstack.
> > > > > > > > > > > > > > > > [root at tm04 mysql]# yum install openstack-packstack > > > > > > > > Loaded plugins: fastestmirror, refresh-packagekit, security > > > > > > > > Setting up Install Process > > > > > > > > Loading mirror speeds from cached hostfile > > > > > > > > * base: repos.lax.quadranet.com > > > > > > > > * extras: mirror.keystealth.org > > > > > > > > * updates: mirror.anl.gov > > > > > > > > Resolving Dependencies > > > > > > > > --> Running transaction check > > > > > > > > ---> Package openstack-packstack.noarch 0:2014.1.1-0.30.dev1258.el6 > will be > > > > installed > > > > > > > > --> Processing Dependency: openstack-packstack-puppet = > > > > 2014.1.1-0.30.dev1258.el6 for package: > > > > openstack-packstack-2014.1.1-0.30.dev1258.el6.noarch > > > > > > > > --> Processing Dependency: openstack-puppet-modules for package: > > > > openstack-packstack-2014.1.1-0.30.dev1258.el6.noarch > > > > > > > > --> Running transaction check > > > > > > > > ---> Package openstack-packstack-puppet.noarch > 0:2014.1.1-0.30.dev1258.el6 will > > > > be installed > > > > > > > > ---> Package openstack-puppet-modules.noarch 0:2014.1-25.el6 will be > installed > > > > > > > > --> Processing Dependency: rubygem-json for package: > > > > openstack-puppet-modules-2014.1-25.el6.noarch > > > > > > > > --> Finished Dependency Resolution > > > > > > > > Error: Package: openstack-puppet-modules-2014.1-25.el6.noarch > > > > (openstactack-icehouse) > > > > > > > > Requires: rubygem-json > > > > > > > > You could try using --skip-broken to work around the problem > > > > > > > > You could try running: rpm -Va --nofiles --nodigest > > > > > > > > [root at tm04 mysql]# cat /etc/redhat-release > > > > > > > > CentOS release 6.6 (Final) > > > > > > > > > > Hi Saurabh, > > > > > > I think you forgot to reinstall rdo-icehouse rpm on your new machine > ;-) > > > > > > $> yum -y install http://goo.gl/Y0VxSq > > > > > > After this, it should be better ! 
> > > > > > Or if you really reinstalled rdo-icehouse rpm, try to rebuild the yum cache ? > > >$ yum clean all && yum makecache > > BTW, this link https://rdo.fedorapeople.org/rdo-release.rpm is pointing > to the > last RDO release, Juno-1 and this version is not supported on > RHEL6.x/CentOS6.x. > > So it would be better to install the rdo icehouse rpm release you can find > here [1]: > > [1] - > https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm > > -- > Gaël Chamoulaud > Openstack Engineering > Mail: [gchamoul|gael] at redhat dot com > IRC: strider/gchamoul (Red Hat), gchamoul (Freenode) > GnuPG Key ID: 7F4B301 > C75F 15C2 A7FD EBC3 7B2D CE41 0077 6A4B A7F4 B301 > > Freedom...Courage...Commitment...Accountability > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dneary at redhat.com Thu Dec 4 19:25:50 2014 From: dneary at redhat.com (Dave Neary) Date: Thu, 04 Dec 2014 14:25:50 -0500 Subject: [Rdo-list] RabbitMQ issue when starting Nova service on compute node In-Reply-To: <8761drmqk6.fsf@redhat.com> References: <547FA871.9090902@redhat.com> <8761drmqk6.fsf@redhat.com> Message-ID: <5480B53E.3070202@redhat.com> Hi John, On 12/04/2014 09:34 AM, John Eckersberg wrote: > Dave Neary writes: >> The fixes would be straightforward: use a non-guest AMQP user and >> password, or enable remote connection for the RabbitMQ guest user.
But I >> can't figure out how to do either of those - I don't think that >> >> CONFIG_AMQP_AUTH_USER=amqp_user >> CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER >> >> in the answer file are what I'm looking for, I don't see any way to >> update the RabbitMQ config file in amqp.pp > If you want to just turn the guest account back on, you could update > wherever the top-level rabbitmq puppet class gets called in packstack > and set something like... > > config_variables => {'loopback_users' => '[]'} Thanks John! Turned out the main issue was that I was installing this in OpenStack instances, and forgot about the security group rules. Since those get enforced in the host, not in the guest, it was invisible to me - iptables looked fine. I did add that line to rabbitmq.conf, with Dan Radez's help, in /usr/lib/python2.7/site-packages/packstack/puppet/templates/amqp.pp in the rabbitmq class. I still have not had a successful run, but I've been hitting a different issue each time. My latest issue was due to using floating IP addresses for the hosts - mongodb would not bind to that address - so I had to switch to the internal IP addresses (unfortunately, as I understand those will not stay the same over time). After resolving that, I have now hit an issue with Swift ring failing to rebalance. I have no idea what that means or how to fix it; the information I have suggests that nuking from orbit and restarting is the best approach.
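For anyone following along, the amqp.pp tweak described in this exchange might look roughly like this where the rabbitmq class is declared. This is a hedged sketch, not the actual packstack template (the other parameters of the real class declaration are elided); config_variables is the parameter from the puppetlabs-rabbitmq module that John suggested:

```puppet
class { 'rabbitmq':
  # ... the real amqp.pp sets many more parameters here; elided ...
  config_variables => {
    # Rendered into rabbitmq.config as {loopback_users, []}, which lets
    # the guest account connect from remote hosts.
    'loopback_users' => '[]',
  },
}
```

After a packstack re-run applies this, restarting rabbitmq-server picks up the regenerated config.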
-- Dave Neary - NFV/SDN Community Strategy Open Source and Standards, Red Hat - http://community.redhat.com Ph: +1-978-399-2182 / Cell: +1-978-799-3338 From danofsatx at gmail.com Thu Dec 4 20:04:38 2014 From: danofsatx at gmail.com (Dan Mossor) Date: Thu, 04 Dec 2014 14:04:38 -0600 Subject: [Rdo-list] Packstack, Neutron, and Openvswitch In-Reply-To: <9E8EE5E176B2BD49913B2F69B369AD8302126D0C36@MX02A.corp.emc.com> References: <54808BEB.5000200@gmail.com> <9E8EE5E176B2BD49913B2F69B369AD8302126D0C36@MX02A.corp.emc.com> Message-ID: <5480BE56.4030209@gmail.com> I've already created the answer file - http://fpaste.org/156624/ Packstack has already run, and deployed to my systems. My problem is that I still have no network connectivity, other than the management network - packstack is not configuring ovs to talk to the bond0 interface, or I'm doing something wrong. This is what I'm trying to figure out. Dan On 12/04/2014 10:55 AM, Afshar, Brian wrote: > As for your answers.txt file, if you haven't followed these steps, make sure that you can ping your compute node(s) from your controller node first, then follow these commands: > > # yum install openstack-packstack -y > # packstack --gen-answer-file=openstack-answers.txt > > Once your answers.txt file is generated, you will need to edit it (vi) and provide information about your node(s). > > Hope that gives you a running start...at least! > > > Regards, > > Brian > > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Dan Mossor > Sent: Thursday, December 04, 2014 8:30 AM > To: rdo-list at redhat.com > Subject: [Rdo-list] Packstack, Neutron, and Openvswitch > > Howdy folks! > > I am still trying to get an Openstack deployment working using packstack. I've done a lot of reading, but apparently not quite enough since I can't seem to get my compute nodes to talk to the network. Any pointers anyone can give would be *greatly* appreciated. 
> > Here's the setup: > Controller - 1 NIC, enp0s25 > Compute Node node3: 3 NICs. enp0s25 mgmt, enp1s0 and enp3s0 slaved to bond0 Compute Node node4: 3 NICs. enp0s25 mgmt, enp1s0 and enp2s0 slaved to bond0 > > I wanted to deploy the neutron services to the compute nodes to take advantage of the bonded interfaces. The trouble is, I don't think I have my answer file [1] set up properly yet. > > After the packstack deployment, this is what I have on node3 (I'm going to concentrate solely on this system, as the only difference in node4 is one of the physical interface names). > > [root at node3 ~]# ip link show > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > 2: enp0s25: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000 > link/ether 00:22:19:30:67:04 brd ff:ff:ff:ff:ff:ff > 3: enp1s0: mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT qlen 1000 > link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff > 4: enp3s0: mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT qlen 1000 > link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff > 5: bond0: mtu 1500 qdisc noqueue master ovs-system state UP mode DEFAULT > link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff > 7: ovs-system: mtu 1500 qdisc noop state DOWN mode DEFAULT > link/ether 76:2d:a5:ea:77:58 brd ff:ff:ff:ff:ff:ff > 8: br-int: mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT > link/ether e6:ff:b9:c0:85:47 brd ff:ff:ff:ff:ff:ff > 11: br-ex: mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT > link/ether 7a:74:54:18:6d:45 brd ff:ff:ff:ff:ff:ff > 12: br-bond0: mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT > link/ether 00:1b:21:ab:d5:1a brd ff:ff:ff:ff:ff:ff > 13: br-tun: mtu 1500 qdisc noop state DOWN mode DEFAULT > link/ether 72:58:fa:b0:8c:45 brd ff:ff:ff:ff:ff:ff > [root at node3 ~]# ovs-vsctl show > ca6d23ad-c88e-48db-9ace-6a3aff767460 > Bridge br-ex > Port br-ex > Interface br-ex > type: internal > Bridge br-tun > Port 
patch-int > Interface patch-int > type: patch > options: {peer=patch-tun} > Port br-tun > Interface br-tun > type: internal > Port "vxlan-0a010168" > Interface "vxlan-0a010168" > type: vxlan > options: {df_default="true", in_key=flow, local_ip="10.1.1.103", out_key=flow, remote_ip="10.1.1.104"} > Bridge "br-bond0" > Port "phy-br-bond0" > Interface "phy-br-bond0" > type: patch > options: {peer="int-br-bond0"} > Port "bond0" > Interface "bond0" > Port "br-bond0" > Interface "br-bond0" > type: internal > Bridge br-int > fail_mode: secure > Port "int-br-bond0" > Interface "int-br-bond0" > type: patch > options: {peer="phy-br-bond0"} > Port br-int > Interface br-int > type: internal > Port patch-tun > Interface patch-tun > type: patch > options: {peer=patch-int} > ovs_version: "2.1.3" > > > The trouble lies in the fact that I have NO IDEA how to use Open vSwitch. None. This ovs-vsctl output is foreign to me, and makes no sense. > > At the very least, I'm simply looking for a good reference - so far, I've not been able to find decent documentation. Does it exist?
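For anyone else stuck at the same point: in Juno-era packstack the wiring between a physical interface and Neutron/OVS is driven by a pair of answer-file options. A sketch - the option names are real packstack parameters, but the physnet label and values are illustrative guesses matched to Dan's naming, not taken from his pasted answer file:

```ini
# packstack answer-file fragment (illustrative values)
# Map a Neutron "physical network" label to an OVS bridge...
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-bond0
# ...and have packstack plug the physical interface into that bridge.
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-bond0:bond0
```

Provider or external networks then typically need to be created with the matching label (e.g. --provider:physical_network physnet1) for traffic to actually leave via bond0.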
> > Thanks, > Dan > > [1] http://fpaste.org/156624/ > > -- > Dan Mossor, RHCSA > Systems Engineer at Large > Fedora Plasma Product WG | Fedora QA Team | Fedora Server SIG Fedora Infrastructure Apprentice > FAS: dmossor IRC: danofsatx > San Antonio, Texas, USA > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -- Dan Mossor, RHCSA Systems Engineer at Large Fedora Plasma Product WG | Fedora QA Team | Fedora Server SIG Fedora Infrastructure Apprentice FAS: dmossor IRC: danofsatx San Antonio, Texas, USA From Brian.Afshar at emc.com Thu Dec 4 22:07:44 2014 From: Brian.Afshar at emc.com (Afshar, Brian) Date: Thu, 4 Dec 2014 17:07:44 -0500 Subject: [Rdo-list] Packstack, Neutron, and Openvswitch In-Reply-To: <5480BE56.4030209@gmail.com> References: <54808BEB.5000200@gmail.com> <9E8EE5E176B2BD49913B2F69B369AD8302126D0C36@MX02A.corp.emc.com> <5480BE56.4030209@gmail.com> Message-ID: <9E8EE5E176B2BD49913B2F69B369AD8302126D0CC1@MX02A.corp.emc.com> Hi Dan, Take a look at your nova.conf file and make sure that your Controller name or IP address is listed correctly. I need more information in order to figure out where the network connection dropped on your systems. From your information it is hard to figure out what went wrong. Are you using CentOS or RHEL and which version? Regards, Brian -----Original Message----- From: Dan Mossor [mailto:danofsatx at gmail.com] Sent: Thursday, December 04, 2014 12:05 PM To: Afshar, Brian; rdo-list at redhat.com Subject: Re: [Rdo-list] Packstack, Neutron, and Openvswitch I've already created the answer file - http://fpaste.org/156624/ Packstack has already run, and deployed to my systems. My problem is that I still have no network connectivity, other than the management network - packstack is not configuring ovs to talk to the bond0 interface, or I'm doing something wrong. This is what I'm trying to figure out. 
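To make Brian's nova.conf suggestion concrete: on a Juno compute node the controller's address usually appears in a handful of settings like the ones below. A sketch with placeholder addresses, not taken from Dan's actual configuration:

```ini
# /etc/nova/nova.conf fragment on a compute node (placeholder addresses)
[DEFAULT]
my_ip=10.1.1.103            # this compute node's management IP
rabbit_host=10.1.1.101      # controller running RabbitMQ
glance_host=10.1.1.101      # controller running the Glance API
```

If any of these point at a wrong or unreachable address, the compute service may start but never check in with the controller.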
Dan From swapnil at linux.com Fri Dec 5 05:39:17 2014 From: swapnil at linux.com (Swapnil Jain) Date: Fri, 5 Dec 2014 11:09:17 +0530 Subject: [Rdo-list] install multiple controller nodes by packstack In-Reply-To: <54807AC3.9090606@redhat.com> References: <54807AC3.9090606@redhat.com> Message-ID: <89A5C579-64BD-4FBD-8ABF-3005DE4E3A15@Linux.com> This simple diagram explains a lot. ? Swapnil Jain | Swapnil at Linux.com RHC{A,DS,E,VA}, CC{DA,NA}, MCSE, CNE > On 04-Dec-2014, at 8:46 pm, Rich Bowen wrote: > > > > On 12/04/2014 10:08 AM, Zhang Zhenhua wrote: >> Dear all, >> >> I am using packstack to deploy a new private cloud now. Does packstack >> support to deploy two or more controller node? I mean I just want to >> deploy a minimal 'just as work' HA testbed for our private cloud.
>> > > > We've collected a variety of docs on setting up HA OpenStack at https://openstack.redhat.com/Setting-up-High-Availability > > > -- > Rich Bowen - rbowen at redhat.com > OpenStack Community Liaison > http://openstack.redhat.com/ > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: HA_Architecture-collapsed.png Type: image/png Size: 70603 bytes Desc: not available URL: From rbowen at redhat.com Fri Dec 5 14:27:25 2014 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 05 Dec 2014 09:27:25 -0500 Subject: [Rdo-list] CentOS Community test infrastructure Message-ID: <5481C0CD.6040708@redhat.com> In Paris, we talked about what we might do to get more of RDO testing/ci out into the open, and a great opportunity has arisen out of that conversation. You can read the full details on the centos-devel list, at http://lists.centos.org/pipermail/centos-devel/2014-December/012454.html In short, this is an opportunity to move some or all of our testing/CI onto community infrastructure, and get more engagement from the CentOS community, as well as make it easier for you, the RDO community, to participate in things that have, up until now, been handled mostly by people inside Red Hat. This is what you told us, in Paris, you wanted to have more access to and visibility into. If you're interested in participating in this effort, please have a look at http://wiki.centos.org/QaWiki/PubHardware , get subscribed to centos-devel - http://lists.centos.org/mailman/listinfo/centos-devel - and jump in. --Rich -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From danofsatx at gmail.com Fri Dec 5 17:10:24 2014 From: danofsatx at gmail.com (Dan Mossor) Date: Fri, 05 Dec 2014 11:10:24 -0600 Subject: [Rdo-list] Packstack, Neutron, and Openvswitch In-Reply-To: <9E8EE5E176B2BD49913B2F69B369AD8302126D0CC1@MX02A.corp.emc.com> References: <54808BEB.5000200@gmail.com> <9E8EE5E176B2BD49913B2F69B369AD8302126D0C36@MX02A.corp.emc.com> <5480BE56.4030209@gmail.com> <9E8EE5E176B2BD49913B2F69B369AD8302126D0CC1@MX02A.corp.emc.com> Message-ID: <5481E700.6000807@gmail.com> Brian, Thanks so far - I seem to have forgotten to say that this is all on CentOS 7.
My nova.conf and neutron.conf files both appear to be configured correctly. I'm fairly certain that the problem lies in the ovs configuration, but I don't know where. What other information do you need? Regards, Dan On 12/04/2014 04:07 PM, Afshar, Brian wrote: > Hi Dan, > > Take a look at your nova.conf file and make sure that your Controller name or IP address is listed correctly. I need more information in order to figure out where the network connection dropped on your systems. From your information it is hard to figure out what went wrong. Are you using CentOS or RHEL and which version? > > > Regards, > > Brian > > > -----Original Message----- > From: Dan Mossor [mailto:danofsatx at gmail.com] > Sent: Thursday, December 04, 2014 12:05 PM > To: Afshar, Brian; rdo-list at redhat.com > Subject: Re: [Rdo-list] Packstack, Neutron, and Openvswitch > > I've already created the answer file - http://fpaste.org/156624/ > > Packstack has already run, and deployed to my systems. My problem is that I still have no network connectivity, other than the management network - packstack is not configuring ovs to talk to the bond0 interface, or I'm doing something wrong. This is what I'm trying to figure out. > > Dan > > > On 12/04/2014 10:55 AM, Afshar, Brian wrote: >> As for your answers.txt file, if you haven't followed these steps, make sure that you can ping your compute node(s) from your controller node first, then follow these commands: >> >> # yum install openstack-packstack -y >> # packstack --gen-answer-file=openstack-answers.txt >> >> Once your answers.txt file is generated, you will need to edit it (vi) and provide information about your node(s). >> >> Hope that gives you a running start...at least! 
From dneary at redhat.com Sat Dec 6 01:44:36 2014 From: dneary at redhat.com (Dave Neary) Date: Fri, 05 Dec 2014 20:44:36 -0500 Subject: [Rdo-list] RabbitMQ issue when starting Nova service on compute node In-Reply-To: <5480B53E.3070202@redhat.com> References: <547FA871.9090902@redhat.com> <8761drmqk6.fsf@redhat.com> <5480B53E.3070202@redhat.com> Message-ID: <54825F84.6040704@redhat.com> Hi, For complete closure: the Swift issue was because there was an unfinished install already, when I wiped everything clean and started again, I got to a different error. That error was mongod not starting; that was a known issue, solved in a newer version of Packstack than what was in the RDO repo.
The workaround I used was to disable Ceilometer for the installation, as that was the only thing pulling in MongoDB. Thanks, Dave. On 12/04/2014 02:25 PM, Dave Neary wrote: > Hi John, > > On 12/04/2014 09:34 AM, John Eckersberg wrote: >> Dave Neary writes: >>> The fixes would be straightforward: use a non-guest AMQP user and >>> password, or enable remote connection for the RabbitMQ guest user. But I >>> can't figure out how to do either of those - I don't think that >>> >>> CONFIG_AMQP_AUTH_USER=amqp_user >>> CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER >>> >>> in the answer file are what I'm looking for, I don't see any way to >>> update the RabbitMQ config file in amqp.pp > > > > >> If you want to just turn the guest account back on, you could update >> wherever the top-level rabbitmq puppet class gets called in packstack >> and set something like... >> >> config_variables => {'loopback_users' => '[]'} > > Thanks John! Turned out the main issue was that I was installing this in > OpenStack instances, and forgot about the security group rules. Since > those get enforced in the host, not in the guest, it was invisible to me > - iptables looked fine. > > I did add that line to rabbitmq.conf, with Dan Radez's help, in > /usr/lib/python2.7/site-packages/packstack/puppet/templates/amqp.pp in > the rabbitmq class. I still have not had a successful run, but I've been > hitting a different issue each time. > > My latest issue was due to using floating IP addresses for the hosts - > mongodb would not bind to that address - so I had to switch to the > internal IP addresses (unfortunately, as I understand those will not > stay the same over time). > > After resolving that, I have now hit an issue with Swift ring failing to > rebalance. I have no idea what that means or how to fix it, the > information I have suggests that nuking from orbit and restarting is the > best approach. > > Thanks, > Dave.
> > -- Dave Neary - NFV/SDN Community Strategy Open Source and Standards, Red Hat - http://community.redhat.com Ph: +1-978-399-2182 / Cell: +1-978-799-3338 From ak at cloudssky.com Sat Dec 6 14:49:46 2014 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Sat, 6 Dec 2014 15:49:46 +0100 Subject: [Rdo-list] install multiple controller nodes by packstack In-Reply-To: <89A5C579-64BD-4FBD-8ABF-3005DE4E3A15@Linux.com> References: <54807AC3.9090606@redhat.com> <89A5C579-64BD-4FBD-8ABF-3005DE4E3A15@Linux.com> Message-ID: Hi Swapnil, Is there any related article to the diagram which you kindly provided, which you could share? Thanks, Arash On Fri, Dec 5, 2014 at 6:39 AM, Swapnil Jain wrote: > This simple diagram explains a lot. > > > > ? > *Swapnil Jain | Swapnil at Linux.com * > RHC{A,DS,E,VA}, CC{DA,NA}, MCSE, CNE > > > > On 04-Dec-2014, at 8:46 pm, Rich Bowen wrote: > > > > On 12/04/2014 10:08 AM, Zhang Zhenhua wrote: > > Dear all, > > I am using packstack to deploy a new private cloud now. Does packstack > support to deploy two or more controller node? I mean I just want to > deploy a minimal 'just as work' HA testbed for our private cloud. > > > > We've collected a variety of docs on setting up HA OpenStack at > https://openstack.redhat.com/Setting-up-High-Availability > > > -- > Rich Bowen - rbowen at redhat.com > OpenStack Community Liaison > http://openstack.redhat.com/ > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: HA_Architecture-collapsed.png Type: image/png Size: 70603 bytes Desc: not available URL: From swapnil at linux.com Mon Dec 8 06:17:37 2014 From: swapnil at linux.com (Swapnil Jain) Date: Mon, 8 Dec 2014 11:47:37 +0530 Subject: [Rdo-list] install multiple controller nodes by packstack In-Reply-To: References: <54807AC3.9090606@redhat.com> <89A5C579-64BD-4FBD-8ABF-3005DE4E3A15@Linux.com> Message-ID: <167FA4DD-6F88-4D9C-84BA-178E11C9DB63@Linux.com> Dear Arash, I found this on https://openstack.redhat.com/HA_Architecture there are tons of other information available, http://docs.openstack.org/high-availability-guide/content/ch-intro.html https://openstack.redhat.com/RDO_HighlyAvailable_and_LoadBalanced_Control_Services ? Swapnil Jain | Swapnil at Linux.com RHC{A,DS,E,VA}, CC{DA,NA}, MCSE, CNE > On 06-Dec-2014, at 8:19 pm, Arash Kaffamanesh wrote: > > Hi Swapnil, > > Is there any related article to the diagram which you kindly provided, which you could share? > > Thanks, > Arash > > On Fri, Dec 5, 2014 at 6:39 AM, Swapnil Jain > wrote: > This simple diagram explains a lot. > > > > > ?
> Swapnil Jain | Swapnil at Linux.com > RHC{A,DS,E,VA}, CC{DA,NA}, MCSE, CNE > > > > >> On 04-Dec-2014, at 8:46 pm, Rich Bowen > wrote: >> >> >> >> On 12/04/2014 10:08 AM, Zhang Zhenhua wrote: >>> Dear all, >>> >>> I am using packstack to deploy a new private cloud now. Does packstack >>> support to deploy two or more controller node? I mean I just want to >>> deploy a minimal 'just as work' HA testbed for our private cloud. >>> >> >> >> We've collected a variety of docs on setting up HA OpenStack at https://openstack.redhat.com/Setting-up-High-Availability >> >> >> -- >> Rich Bowen - rbowen at redhat.com >> OpenStack Community Liaison >> http://openstack.redhat.com/ >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at gmail.com Mon Dec 8 16:13:43 2014 From: apevec at gmail.com (Alan Pevec) Date: Mon, 8 Dec 2014 17:13:43 +0100 Subject: [Rdo-list] [RFC] RDO packaging guide DRAFT Message-ID: Hi all, When Red Hat launched the RDO community in April 2013, we chose to focus our energies on making an OpenStack distribution available and encouraging a self-supporting community of users around the distribution, related tools, and supporting documentation. With this community now established, it is clear we need to prioritize opening up the RDO development process. Or, to put it another way, it is time to begin opening up the technical governance of RDO. We want this process to be discussed and fleshed out in more detail publicly on rdo-list at redhat.com and as a first step we have published a draft of RDO Packaging guide at http://redhat-openstack.github.io/openstack-packaging-doc/rdo-packaging.html Please review and provide feedback, or even send pull requests. Cheers, Alan From ak at cloudssky.com Mon Dec 8 16:38:49 2014 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Mon, 8 Dec 2014 17:38:49 +0100 Subject: [Rdo-list] RabbitMQ issue when starting Nova service on compute node In-Reply-To: <5485C61F.3040202@redhat.com> References: <547FA871.9090902@redhat.com> <8761drmqk6.fsf@redhat.com> <5480B53E.3070202@redhat.com> <54825F84.6040704@redhat.com> <5485C61F.3040202@redhat.com> Message-ID: Hi Dave, Thanks for your kind feedback, and sorry that I emailed my question directly only to you without having rdo-list in Cc. By the way for our meetup this week: http://www.meetup.com/OpenStack-X/events/210803792/ I wrote a short blog post for RDO single line installer: http://cloudssky.com/en/blog/OpenStack-RDO-AIO-Single-Line-Installer/ and how to get Nova-Docker working on RDO Juno: http://cloudssky.com/en/blog/Nova-Docker-on-OpenStack-RDO-Juno/ Thanks!
Arash On Mon, Dec 8, 2014 at 4:39 PM, Dave Neary wrote: > Hi, > > On 12/06/2014 04:04 AM, Arash Kaffamanesh wrote: > > How did you wiped your Juno deployment, using the hammer method? > > https://openstack.redhat.com/Uninstalling_RDO > > I did. > > > And for a multi node deployment, someone may run it on all nodes, right? > > Correct. > > At this point, the issue is known - I am running the latest version of > Packstack available for RHEL 7, but there is a newer version that fixes > this Mongo bug. When that gets updated I will update it, and re-run with > Ceilometer installed. > > For now, however, I don't need it, so I'm set. > > Thanks! > Dave. > > > On Sat, Dec 6, 2014 at 2:44 AM, Dave Neary > > wrote: > > > > Hi, > > > > For complete closure: the Swift issue was because there was an > > unfinished install already, when I wiped everything clean and started > > again, I got to a different error. > > > > That error was mongod not starting; that was a known issue, solved > in a > > newer version of Packstack than what was in the RDO repo. The > workaround > > I used was to disable Ceilometer for the installation, as that was > the > > only think pulling in MongoDB. > > > > Thanks, > > Dave. > > > > On 12/04/2014 02:25 PM, Dave Neary wrote: > > > Hi John, > > > > > > On 12/04/2014 09:34 AM, John Eckersberg wrote: > > >> Dave Neary > writes: > > >>> The fixes would be straightforward: use a non-guest AMQP user and > > >>> password, or enable remote connection for the RabbitMQ guest > > user. 
But I > > >>> can't figure out how to do either of those - I don't think that > > >>> > > >>> CONFIG_AMQP_AUTH_USER=amqp_user > > >>> CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER > > >>> > > >>> in the answer file are what I'm looking for, I don't see any way > to > > >>> update the RabbitMQ config file in amqp.pp > > > > > > > > > > > >> If you want to just turn the guest account back on, you could > update > > >> wherever the top-level rabbitmq puppet class gets called in > packstack > > >> and set something like... > > >> > > >> config_variables => {'loopback_users' => '[]'} > > > > > > Thanks John! Turned out the main issue was that I was installing > > this in > > > OpenStack instances, and forgot about the security group rules. > Since > > > those get enforced in the host, not in the guest, it was invisible > > to me > > > - iptables looked fine. > > > > > > I did add that line to rabbitmq.conf, with Dan Radez's help, in > > > > /usr/lib/python2.7/site-packages/packstack/puppet/templates/amqp.pp in > > > the rabbitmp class. I still have not had a successful run, but > > I've been > > > hitting a different issue each time. > > > > > > My latest issue was due to using floating IP addresses for the > hosts - > > > mongodb would not bind to that address - so I had to switch to the > > > internal IP addresses (unfortunately, as I understand those will > not > > > stay the same over time). > > > > > > After resolving that, I have now hit an issue with Swift ring > > failing to > > > rebalance. I have no idea what that means or how to fix it, the > > > information I have suggests that nuking from orbit and restarting > > is the > > > best approach. > > > > > > Thanks, > > > Dave. 
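To make John's suggestion concrete: the `loopback_users` puppet parameter ends up as a stanza in rabbitmq.config, which is what lifts the loopback-only restriction on RabbitMQ's built-in guest account. A minimal sketch of that rendering (the helper below is illustrative; the real puppet module formats the Erlang terms itself):

```python
# Sketch: render the rabbitmq.config stanza that a setting like
# config_variables => {'loopback_users' => '[]'} ultimately produces.
# Illustrative only -- not the actual puppet template output.
def render_rabbitmq_config(variables):
    """Render a minimal Erlang-term rabbitmq.config from a dict of settings."""
    body = ",\n    ".join("{%s, %s}" % (k, v) for k, v in sorted(variables.items()))
    return "[\n  {rabbit, [\n    %s\n  ]}\n].\n" % body

print(render_rabbitmq_config({"loopback_users": "[]"}))
```

With loopback_users set to the empty list, RabbitMQ allows the guest user to connect from non-loopback addresses; a dedicated non-guest AMQP user, as first suggested above, is the safer option.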
> > > > > > > > > > -- > > Dave Neary - NFV/SDN Community Strategy > > Open Source and Standards, Red Hat - http://community.redhat.com > > Ph: +1-978-399-2182 / Cell: +1-978-799-3338 > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > -- > Dave Neary - NFV/SDN Community Strategy > Open Source and Standards, Red Hat - http://community.redhat.com > Ph: +1-978-399-2182 / Cell: +1-978-799-3338 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Mon Dec 8 17:29:49 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 08 Dec 2014 12:29:49 -0500 Subject: [Rdo-list] RDO/OpenStack Meetups coming up (8 Dec 2014) Message-ID: <5485E00D.2000607@redhat.com> The following are the meetups I'm aware of in the coming week where RDO enthusiasts will be gathering. If you know of others, please let me know, and/or add them to http://openstack.redhat.com/Events If you attend any of these meetups, please take pictures, and send me some. If you blog about the events (and you should), please send me that, too. 
* Tue Dec 9 2014 in Vancouver, BC, CA: Workshop: Avoiding Cloud Computing Planning & Implementation Failure - http://www.meetup.com/Vancouver-Enterprise-Cloud-Computing-Users-Group/events/218815702/ * Tue Dec 9 2014 in Stockholm, SE: 7th OpenStack User Group Nordics meetup - http://www.meetup.com/OpenStack-User-Group-Nordics/events/218754241/ * Tue Dec 9 2014 in Cambridge, MA, US: OpenStack (plus opening talk on Automation Runbooks) - http://www.meetup.com/bostonazure/events/197954432/ * Tue Dec 9 2014 in Santa Clara, CA, US: OpenStack Meetup with SwiftStack, Arista, and SolidFire - http://www.meetup.com/Arista-Networks-Silicon-Valley-User-Group/events/218893404/ * Tue Dec 9 2014 in Paris, FR: Meetup#12 PaaS avec OpenStack, Solum Docker OpenShift - http://www.meetup.com/OpenStack-France/events/218918493/ * Wed Dec 10 2014 in Mountain View, CA, US: Online Meetup: Automating OpenStack clouds and beyond w/ StackStorm - http://www.meetup.com/Cloud-Online-Meetup/events/218805038/ * Wed Dec 10 2014 in Mountain View, CA, US: Scalable Multi-tenant Logging, Metrics and Monitoring as a Service for OpenStack - http://www.meetup.com/Cloud-Platform-at-Symantec/events/218914623/ * Wed Dec 10 2014 in Mountain View, CA, US: What's New in GlusterFS 3.6 - http://www.meetup.com/GlusterFS-Silicon-Valley/events/180465522/ * Wed Dec 10 2014 in Washington, DC, US: OpenStackDC Meetup #16 - http://www.meetup.com/OpenStackDC/events/197814362/ * Wed Dec 10 2014 in Sebastopol, CA, US: Talk Night: IndieWeb, Trends in PHP, OpenStack - http://www.meetup.com/Hack-Sonoma-County/events/218663630/ * Thu Dec 11 2014 in Köln, DE: OpenStack Distros And Deployment Workshop And More ...
- http://www.meetup.com/OpenStack-X/events/210803792/ * Thu Dec 11 2014 in Pittsburgh, PA, US: Guest Speaker: Red Hat Director of OpenStack Engineering - http://www.meetup.com/openstack-pittsburgh/events/218784742/ * Thu Dec 11 2014 in Colombo, LK: How to deploy your own Private Cloud with OpenStack - http://www.meetup.com/Kolamba-Cloud-Meetup/events/219015539/ * Thu Dec 11 2014 in Herriman, UT, US: Deploying OpenStack Tenants, Networks and Instances with Ansible - http://www.meetup.com/openstack-utah/events/218786690/ * Fri Dec 12 2014 in México City, MX: Último meetup de 2014 - http://www.meetup.com/Mexico-City-Cloud-Computing/events/218993444/ * Tue Dec 16 2014 in Austin, TX, US: CloudAustin December: The Twelve Clouds of Christmas - http://www.meetup.com/CloudAustin/events/212248062/ * Wed Dec 17 2014 in Montevideo, UY: nova boot --flavor m1.tiny meetup0 - http://www.meetup.com/OpenStack-Uruguay/events/219071916/ * Wed Dec 17 2014 in Portland, OR, US: OSNW Birthday: Beat the Holidays with an extra dose of knowledge - http://www.meetup.com/OpenStack-Northwest/events/218941697/ * Wed Dec 17 2014 in Berlin, DE: OpenStack DACH Day 2015: Vereinsgründung - http://www.meetup.com/openstack-de/events/219117732/ * Thu Dec 18 2014 in New York, NY, US: "Is OpenStack ready for Enterprises?"
- http://www.meetup.com/OpenStack-for-Enterprises-NYC/events/218900712/ * Thu Dec 18 2014 in Whittier, CA, US: Introduction to Red Hat and OpenShift (cohost with South Bay LAJUG) - http://www.meetup.com/Greater-Los-Angeles-Area-Red-Hat-User-Group-RHUG/events/217273042/ * Thu Dec 18 2014 in San Francisco, CA, US: South Bay OpenStack Meetup, Beginner track - http://www.meetup.com/openstack/events/218900735/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rnishtal at cisco.com Mon Dec 8 22:29:36 2014 From: rnishtal at cisco.com (Ramakrishna Nishtala (rnishtal)) Date: Mon, 8 Dec 2014 22:29:36 +0000 Subject: [Rdo-list] Juno kvm/qemu fail to start after hypervisor reboot Message-ID: <828C71EED0ECDB4E9DBFAB28FDE743A01656A164@xmb-aln-x14.cisco.com> Hi Has anyone encountered this problem? The VM comes up fine for the first time. But when the hypervisor is rebooted, it never comes back online. Issuing a virsh start goes through a series of checks, but the guest shuts off again.
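When a guest dies immediately after `virsh start`, the instance's qemu log usually records which signal ended it and which PID sent it; mapping that PID back to a process name is the quickest way to see who killed the guest. A small helper for pulling those fields out of a log line (the sample line matches the format qemu emits, but treat the exact wording as an assumption):

```python
import re

# Sketch: extract (signal, sender pid) from a qemu shutdown log line so the
# pid can then be compared against the process table (e.g. ps -p <pid> -o comm=).
TERMINATION_RE = re.compile(r"terminating on signal (\d+) from pid (\d+)")

def parse_termination(line):
    """Return (signal, pid) from a qemu 'terminating on signal' line, or None."""
    m = TERMINATION_RE.search(line)
    return (int(m.group(1)), int(m.group(2))) if m else None

print(parse_termination("qemu: terminating on signal 15 from pid 4067"))
```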
Only thing found from instance libvirtd/qemu log file is 2014-12-08 22:12:39.172+0000: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name instance-0000001a -S -machine pc-i440fx-rhel7.0.0,accel=kvm,usb=off -cpu SandyBridge,+pdpe1gb,+osxsave,+dca,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 8192 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid 5433c5b8-c66c-4035-9208-9f5d24764cce -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=2014.2-2.el7.centos,serial=fdd0e554-0105-4e8b-97e8-ec1fb11da4a9,uuid=5433c5b8-c66c-4035-9208-9f5d24764cce -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0000001a.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime,driftfix=slew -no-kvm-pit-reinjection -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/dev/disk/by-path/ip-172.22.164.141:3260-iscsi-iqn.2010-10.org.openstack:volume-5613e627-c15a-4da9-8ffb-aac5e8660321-lun-0,if=none,id=drive-virtio-disk0,format=raw,serial=5613e627-c15a-4da9-8ffb-aac5e8660321,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:16:5e:44,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/home/ceph/nova/instances/5433c5b8-c66c-4035-9208-9f5d24764cce/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 172.22.164.139:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 Warning: option deprecated, use lost_tick_policy property of kvm-pit instead. 
char device redirected to /dev/pts/2 (label charserial1) qemu: terminating on signal 15 from pid 4067 2014-12-08 22:13:01.506+0000: shutting down The process 4067 is libvirtd itself. Tried with both qemu/kvm, local or volume, rhel guest and server images, the problem remains. Any clues appreciated. Regards, Rama -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhenhua2000 at gmail.com Tue Dec 9 02:44:49 2014 From: zhenhua2000 at gmail.com (Zhang Zhenhua) Date: Tue, 9 Dec 2014 10:44:49 +0800 Subject: [Rdo-list] packstack complete too fast and does not install OpenStack actually Message-ID: Hi all, I am using the latest packstack to install the OpenStack Icehouse on a freshly installed CentOS 6.5. Packstack completes the whole installation too fast. The setup log shows that it doesn't actually complete the installation. Any ideas? It's an all-in-one installation. CONFIG_COMPUTE_HOSTS=192.168.5.51 CONFIG_NETWORK_HOSTS=192.168.5.51 cat /var/tmp/packstack/20141209-103957-LgQwiz/openstack-setup.log ...... 2014-12-09 10:35:22::INFO::shell::81::root:: [192.168.5.51] Executing script: ip addr show dev em1 || ( echo Device em1 does not exist && exit 1 ) 2014-12-09 10:35:22::INFO::shell::81::root:: [192.168.5.51] Executing script: ip link show up | grep "em1" 2014-12-09 10:35:22::INFO::shell::81::root:: [192.168.5.51] Executing script: echo $HOME 2014-12-09 10:35:22::INFO::shell::81::root:: [localhost] Executing script: rpm -q --requires openstack-puppet-modules | egrep -v "^(rpmlib|\/|perl)" 2014-12-09 10:35:22::INFO::shell::81::root:: [localhost] Executing script: -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zhenhua2000 at gmail.com Tue Dec 9 03:19:03 2014 From: zhenhua2000 at gmail.com (Zhang Zhenhua) Date: Tue, 9 Dec 2014 11:19:03 +0800 Subject: [Rdo-list] packstack complete too fast and does not install OpenStack actually In-Reply-To: References: Message-ID: Please ignore my previous mail. I found the root cause: I wrongly added my local server to the EXCLUDE_SERVERS list. Apologies for my mistake. 2014-12-09 10:44 GMT+08:00 Zhang Zhenhua : > Hi all, > > I am using the latest packstack to install the OpenStack Icehouse on a > freshly installed CentOS 6.5. > > Packstack completes the whole installation too fast. The setup log > shows that it doesn't actually complete the installation. Any > ideas? > > It's an all-in-one installation. > > CONFIG_COMPUTE_HOSTS=192.168.5.51 > > CONFIG_NETWORK_HOSTS=192.168.5.51 > > > cat /var/tmp/packstack/20141209-103957-LgQwiz/openstack-setup.log > ...... > > 2014-12-09 10:35:22::INFO::shell::81::root:: [192.168.5.51] Executing > script: > > ip addr show dev em1 || ( echo Device em1 does not exist && exit 1 ) > > 2014-12-09 10:35:22::INFO::shell::81::root:: [192.168.5.51] Executing > script: > > ip link show up | grep "em1" > > 2014-12-09 10:35:22::INFO::shell::81::root:: [192.168.5.51] Executing > script: > > echo $HOME > > 2014-12-09 10:35:22::INFO::shell::81::root:: [localhost] Executing script: > > rpm -q --requires openstack-puppet-modules | egrep -v "^(rpmlib|\/|perl)" > > 2014-12-09 10:35:22::INFO::shell::81::root:: [localhost] Executing script: > -------------- next part -------------- An HTML attachment was scrubbed...
Fedora 21 Cloud Images Available Today --- Follow the link below to check it out: https://openstack.redhat.com/forum/discussion/995/fedora-21-cloud-images-available-today Have a great day! From meil at rc.inesa.com Thu Dec 11 03:32:46 2014 From: meil at rc.inesa.com (Mei Lei) Date: Thu, 11 Dec 2014 11:32:46 +0800 Subject: [Rdo-list] change ethernet name in answer file Message-ID: <001c01d014f3$21ce48c0$656ada40$@inesa.com> Hi experts, After using RDO to deploy an all-in-one OpenStack (Juno), I noticed I used the wrong Ethernet devices for the public and private interfaces. Can I change the Ethernet names in the answer file and reinstall OpenStack? Will it take effect? -Best Regards Andy -------------- next part -------------- An HTML attachment was scrubbed... URL: From meil at rc.inesa.com Thu Dec 11 07:30:10 2014 From: meil at rc.inesa.com (Mei Lei) Date: Thu, 11 Dec 2014 15:30:10 +0800 Subject: [Rdo-list] Re: change ethernet name in answer file In-Reply-To: <001c01d014f3$21ce48c0$656ada40$@inesa.com> References: <001c01d014f3$21ce48c0$656ada40$@inesa.com> Message-ID: <006c01d01514$4b883010$e2989030$@inesa.com> After reading the puppet code, I noticed it rewrites the Ethernet configuration, so I tested it myself; it works after re-running packstack. Thanks! -Best Regards, Andy From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Mei Lei Sent: 11 December 2014 11:33 To: rdo-list at redhat.com Subject: [Rdo-list] change ethernet name in answer file Hi experts, After using RDO to deploy an all-in-one OpenStack (Juno), I noticed I used the wrong Ethernet devices for the public and private interfaces. Can I change the Ethernet names in the answer file and reinstall OpenStack? Will it take effect? -Best Regards Andy -------------- next part -------------- An HTML attachment was scrubbed...
URL: From brian at brianlee.org Thu Dec 11 13:29:38 2014 From: brian at brianlee.org (brian lee) Date: Thu, 11 Dec 2014 07:29:38 -0600 Subject: [Rdo-list] Neutron Problems Message-ID: Hi Everyone, I am having problems with my neutron setup and hopefully with your help I can get it figured out. I have a 4 node blade setup with two nics each, all of them running CentOS 6.6. One host is foreman, the other three are for openstack. Since foreman is managing the blades, they have their IP addresses assigned via DHCP to eth0. After the install I noticed that the eth0 device was not attaching to the br-ex device. After lots of work, I was able to get that connected using these configs: ifcfg-br-ex: DEVICE="eth0" #BOOTPROTO="dhcp" BOOTPROTO="none" DEVICETYPE=ovs TYPE=OVSPort OVS_BRIDGE=br-ex #DHCP_HOSTNAME="openstack-1.quicksand.bitc.morphotrust.com" #HOSTNAME="openstack-1.quicksand.bitc.morphotrust.com" HWADDR="E4:1F:13:78:D8:90" #IPV6INIT="yes" MTU="1500" #NM_CONTROLLED="yes" NM_CONTROLLED="no" ONBOOT="yes" #TYPE="Ethernet" UUID="ebd620ad-7e48-4a08-9875-c596b4c4648c" VLAN=yes ifcfg-eth0: DEVICE="eth0" #BOOTPROTO="dhcp" BOOTPROTO="none" DEVICETYPE=ovs TYPE=OVSPort OVS_BRIDGE=br-ex #DHCP_HOSTNAME="openstack-1.quicksand.bitc.morphotrust.com" #HOSTNAME="openstack-1.quicksand.bitc.morphotrust.com" HWADDR="E4:1F:13:78:D8:90" #IPV6INIT="yes" MTU="1500" #NM_CONTROLLED="yes" NM_CONTROLLED="no" ONBOOT="yes" #TYPE="Ethernet" UUID="ebd620ad-7e48-4a08-9875-c596b4c4648c" VLAN=yes I can see eth0 attached to the br-ex, along with the external router port in ovs-vsctl show: Bridge br-ex Port br-ex Interface br-ex type: internal Port "qg-161de698-16" Interface "qg-161de698-16" type: internal Port "eth0" Interface "eth0" Now my problem, I can not get the guest VM to talk out. It can ping to the router port IP (10.30.1.10) but nothing past it. And from my network I can ping to the gateway of that network (10.30.1.1). What else should I check? 
I feel this is a problem with openvswitch, but I just don't know what to look at. Thanks for any help you can offer. --Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at brianlee.org Thu Dec 11 15:15:24 2014 From: brian at brianlee.org (brian lee) Date: Thu, 11 Dec 2014 09:15:24 -0600 Subject: [Rdo-list] Neutron Problems In-Reply-To: References: Message-ID: It looks like my cut and paste did not work right. My br-ex device looks like this: DEVICE=br-ex OVSBOOTPROTO="dhcp" OVSDHCPINTERFACES="eth0" ONBOOT=yes NM_CONTROLLED=no TYPE=OVSBridge DEVICETYPE=ovs DEVICE=br-ex OVSBOOTPROTO="dhcp" OVSDHCPINTERFACES="eth0" ONBOOT=yes NM_CONTROLLED=no TYPE=OVSBridge DEVICETYPE=ovs Sorry about the confusion. --Brian On Thu, Dec 11, 2014 at 7:29 AM, brian lee wrote: > > Hi Everyone, > > I am having problems with my neutron setup and hopefully with your help I > can get it figured out. > I have a 4 node blade setup with two nics each, all of them running CentOS > 6.6. One host is foreman, the other three are for openstack. Since foreman > is managing the blades, they have their IP addresses assigned via DHCP to > eth0. > After the install I noticed that the eth0 device was not attaching to the > br-ex device.
After lots of work, I was able to get that connected using > these configs: > > ifcfg-br-ex: > DEVICE="eth0" > #BOOTPROTO="dhcp" > BOOTPROTO="none" > DEVICETYPE=ovs > TYPE=OVSPort > OVS_BRIDGE=br-ex > #DHCP_HOSTNAME="openstack-1.quicksand.bitc.morphotrust.com" > #HOSTNAME="openstack-1.quicksand.bitc.morphotrust.com" > HWADDR="E4:1F:13:78:D8:90" > #IPV6INIT="yes" > MTU="1500" > #NM_CONTROLLED="yes" > NM_CONTROLLED="no" > ONBOOT="yes" > #TYPE="Ethernet" > UUID="ebd620ad-7e48-4a08-9875-c596b4c4648c" > VLAN=yes > > ifcfg-eth0: > DEVICE="eth0" > #BOOTPROTO="dhcp" > BOOTPROTO="none" > DEVICETYPE=ovs > TYPE=OVSPort > OVS_BRIDGE=br-ex > #DHCP_HOSTNAME="openstack-1.quicksand.bitc.morphotrust.com" > #HOSTNAME="openstack-1.quicksand.bitc.morphotrust.com" > HWADDR="E4:1F:13:78:D8:90" > #IPV6INIT="yes" > MTU="1500" > #NM_CONTROLLED="yes" > NM_CONTROLLED="no" > ONBOOT="yes" > #TYPE="Ethernet" > UUID="ebd620ad-7e48-4a08-9875-c596b4c4648c" > VLAN=yes > > I can see eth0 attached to the br-ex, along with the external router port > in ovs-vsctl show: > Bridge br-ex > Port br-ex > Interface br-ex > type: internal > Port "qg-161de698-16" > Interface "qg-161de698-16" > type: internal > Port "eth0" > Interface "eth0" > > Now my problem, I can not get the guest VM to talk out. It can ping to the > router port IP (10.30.1.10) but nothing past it. And from my network I can > ping to the gateway of that network (10.30.1.1). > > What else should I check? I feel this is a problem with openvswitch, but I > just dont know what to look at. > > Thanks for any help you can offer. > > --Brian > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From patrick at laimbock.com Thu Dec 11 16:01:48 2014 From: patrick at laimbock.com (Patrick Laimbock) Date: Thu, 11 Dec 2014 17:01:48 +0100 Subject: [Rdo-list] Neutron Problems In-Reply-To: References: Message-ID: <5489BFEC.2000801@laimbock.com> Hi Brian, On 11-12-14 16:15, brian lee wrote: > It looks like my cute and paste did not work right. My br-ex device > looks like this: > > DEVICE=br-ex > OVSBOOTPROTO="dhcp" > OVSDHCPINTERFACES="eth0" > ONBOOT=yes > NM_CONTROLLED=no > TYPE=OVSBridge > DEVICETYPE=ovs > DEVICE=br-ex > OVSBOOTPROTO="dhcp" > OVSDHCPINTERFACES="eth0" > ONBOOT=yes > NM_CONTROLLED=no > TYPE=OVSBridge > DEVICETYPE=ovs > > Sorry about the confusion. I use RDO Juno and here are my interfaces: [root at neutron1-1 network-scripts]# cat ifcfg-br-ex DEVICE=br-ex TYPE=OVSBridge DEVICETYPE=ovs OVSBOOTPROTO=dhcp OVSDHCPINTERFACES=eth1 MACADDR="00:01:02:03:04:05" OVS_EXTRA="set bridge $DEVICE other-config:hwaddr=$MACADDR" ONBOOT=yes NM_CONTROLLED=no [root at neutron1-1 network-scripts]# cat ifcfg-eth1 DEVICE=eth1 TYPE=OVSPort DEVICETYPE=ovs OVS_BRIDGE=br-ex ONBOOT=yes BOOTPROTO=none NM_CONTROLLED=no HTH, Patrick From brian at brianlee.org Thu Dec 11 16:28:04 2014 From: brian at brianlee.org (brian lee) Date: Thu, 11 Dec 2014 10:28:04 -0600 Subject: [Rdo-list] Neutron Problems In-Reply-To: <5489BFEC.2000801@laimbock.com> References: <5489BFEC.2000801@laimbock.com> Message-ID: Man my copy and paste just is not liking me. Anyways, I saw posting about forcing the mac address every time, but I have not had a problem. My problem is the port does not become active. I included the device settings as a reference. 
This is the status of the port: +-----------------------+-------------------------------------------------------------------------------------+ | Field | Value | +-----------------------+-------------------------------------------------------------------------------------+ | admin_state_up | True | | allowed_address_pairs | | | binding:host_id | openstack-1.quicksand.bitc.morphotrust.com | | binding:profile | {} | | binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": true} | | binding:vif_type | ovs | | binding:vnic_type | normal | | device_id | 7319781c-6186-4684-ba60-260b5ecee97c | | device_owner | network:router_gateway | | extra_dhcp_opts | | | fixed_ips | {"subnet_id": "7761c2ee-e392-48ff-b69a-f0f10bbcb6db", "ip_address": "10.30.1.10"} | | id | 161de698-1666-4c0d-9248-8de900797301 | | mac_address | fa:16:3e:c9:ff:64 | | name | | | network_id | b10fc224-2332-49f5-b555-9090c3dc7f44 | | security_groups | | | status | DOWN | | tenant_id | | +-----------------------+-------------------------------------------------------------------------------------+ I am just not able to get that port up. And since its not up I cant ping/ssh to the VMs. What do I need to do for vlans on my physical switch? --Brian On Thu, Dec 11, 2014 at 10:01 AM, Patrick Laimbock wrote: > > Hi Brian, > > On 11-12-14 16:15, brian lee wrote: > >> It looks like my cute and paste did not work right. My br-ex device >> looks like this: >> >> DEVICE=br-ex >> OVSBOOTPROTO="dhcp" >> OVSDHCPINTERFACES="eth0" >> ONBOOT=yes >> NM_CONTROLLED=no >> TYPE=OVSBridge >> DEVICETYPE=ovs >> DEVICE=br-ex >> OVSBOOTPROTO="dhcp" >> OVSDHCPINTERFACES="eth0" >> ONBOOT=yes >> NM_CONTROLLED=no >> TYPE=OVSBridge >> DEVICETYPE=ovs >> >> Sorry about the confusion. 
>> > > I use RDO Juno and here are my interfaces: > > [root at neutron1-1 network-scripts]# cat ifcfg-br-ex > DEVICE=br-ex > TYPE=OVSBridge > DEVICETYPE=ovs > OVSBOOTPROTO=dhcp > OVSDHCPINTERFACES=eth1 > MACADDR="00:01:02:03:04:05" > OVS_EXTRA="set bridge $DEVICE other-config:hwaddr=$MACADDR" > ONBOOT=yes > NM_CONTROLLED=no > > > [root at neutron1-1 network-scripts]# cat ifcfg-eth1 > DEVICE=eth1 > TYPE=OVSPort > DEVICETYPE=ovs > OVS_BRIDGE=br-ex > ONBOOT=yes > BOOTPROTO=none > NM_CONTROLLED=no > > HTH, > Patrick > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Thu Dec 11 21:36:36 2014 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 11 Dec 2014 16:36:36 -0500 Subject: [Rdo-list] [RFC] RDO packaging guide DRAFT In-Reply-To: References: Message-ID: <548A0E64.4070906@redhat.com> On 12/08/2014 11:13 AM, Alan Pevec wrote: > Hi all, > > When Red Hat launched the RDO community in April 2013, we chose to > focus our energies on making an OpenStack distribution available and > encouraging a self-supporting community of users around the > distribution, related tools, and supporting documentation. With this > community now established, it is clear we need to prioritize opening > up the RDO development process. Or, to put it another way, it is time > to begin opening up the technical governance of RDO. > > We want this process to be discussed and fleshed out in more detail > publicly on rdo-list at redhat.com and as a first step we have published > a draft of RDO Packaging guide at > http://redhat-openstack.github.io/openstack-packaging-doc/rdo-packaging.html > > Please review and provide feedback, or even send pull requests. 
The updated RDO packaging documentation is now available on the RDO website, at https://openstack.redhat.com/packaging/rdo-packaging.html -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From dmitry at athabascau.ca Thu Dec 11 22:34:36 2014 From: dmitry at athabascau.ca (Dmitry Makovey) Date: Thu, 11 Dec 2014 15:34:36 -0700 Subject: [Rdo-list] dnsmasq: failed to set SO_REUSE{ADDR|PORT} on DHCP socket: Protocol not available Message-ID: <548A1BFC.4020102@athabascau.ca> Hi, I've got my RDO OpenStack IceHouse assembled and working, but only to a point of assigning IPs, I'm getting errors in logs: dnsmasq: failed to set SO_REUSE{ADDR|PORT} on DHCP socket: Protocol not available (see attached logs for more traceback) I have located the post: https://ask.openstack.org/en/question/52570/update-to-dnsmasq-doesnt-solve-bug-977555/ (along with RHBZ case #977555), however both sources suggest that "most recent" dnsmasq should fix the issue. My current dnsmasq set is: # rpm -qa | grep dnsmasq dnsmasq-utils-2.48-14.el6.x86_64 dnsmasq-2.48-14.el6.x86_64 both coming from RH repos (and not RDO repo). Also started up a cirros image and by the looks of it it did not receive a DHCP-managed IP (in fact it didn't receive anything): ### ifconfig -a eth0 Link encap:Ethernet HWaddr FA:16:3E:BA:EC:D1 inet6 addr: fe80::f816:3eff:feba:ecd1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:15 errors:0 dropped:0 overruns:0 frame:0 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1106 (1.0 KiB) TX bytes:1114 (1.0 KiB) Where should I look for clues for what's *really* wrong here - I can't believe it wouldn't be a major issue for everyone else out there.
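Since bug reports like the one above hinge on exactly which dnsmasq build is installed, it can help to script the version check. The comparison below is a deliberately naive sketch (numeric fields only; real rpm version comparison also handles epochs and mixed alphanumeric segments):

```python
# Sketch: compare RPM-style "version-release" strings numerically, e.g. to
# check whether the installed dnsmasq matches the build a bug was reported
# against. Naive on purpose -- not a full rpmvercmp implementation.
def vr_key(vr):
    """Turn '2.48-14' into a sortable key: ((2, 48), 14)."""
    version, _, release = vr.partition("-")
    ver = tuple(int(p) for p in version.split("."))
    rel = int(release.split(".")[0]) if release else 0
    return ver, rel

print(vr_key("2.48-14") > vr_key("2.48-13"))
```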
-- Dmitry Makovey Web Systems Administrator Athabasca University (780) 675-6245 --- Confidence is what you have before you understand the problem Woody Allen When in trouble when in doubt run in circles scream and shout http://www.wordwizard.com/phpbb3/viewtopic.php?f=16&t=19330 -------------- next part -------------- 2014-12-11 14:59:16.491 20564 ERROR neutron.agent.dhcp_agent [-] Unable to enable dhcp for df9dc8c6-447f-42a9-aece-d7743e204ea3. 2014-12-11 14:59:16.491 20564 TRACE neutron.agent.dhcp_agent Traceback (most recent call last): 2014-12-11 14:59:16.491 20564 TRACE neutron.agent.dhcp_agent File "/usr/lib/python2.6/site-packages/neutron/agent/dhcp_agent.py", line 129, in call_driver 2014-12-11 14:59:16.491 20564 TRACE neutron.agent.dhcp_agent getattr(driver, action)(**action_kwargs) 2014-12-11 14:59:16.491 20564 TRACE neutron.agent.dhcp_agent File "/usr/lib/python2.6/site-packages/neutron/agent/linux/dhcp.py", line 180, in enable 2014-12-11 14:59:16.491 20564 TRACE neutron.agent.dhcp_agent self.spawn_process() 2014-12-11 14:59:16.491 20564 TRACE neutron.agent.dhcp_agent File "/usr/lib/python2.6/site-packages/neutron/agent/linux/dhcp.py", line 382, in spawn_process 2014-12-11 14:59:16.491 20564 TRACE neutron.agent.dhcp_agent ip_wrapper.netns.execute(cmd, addl_env=env) 2014-12-11 14:59:16.491 20564 TRACE neutron.agent.dhcp_agent File "/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py", line 468, in execute 2014-12-11 14:59:16.491 20564 TRACE neutron.agent.dhcp_agent check_exit_code=check_exit_code, extra_ok_codes=extra_ok_codes) 2014-12-11 14:59:16.491 20564 TRACE neutron.agent.dhcp_agent File "/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py", line 82, in execute 2014-12-11 14:59:16.491 20564 TRACE neutron.agent.dhcp_agent raise RuntimeError(m) 2014-12-11 14:59:16.491 20564 TRACE neutron.agent.dhcp_agent RuntimeError: 2014-12-11 14:59:16.491 20564 TRACE neutron.agent.dhcp_agent Command: ['sudo', 'neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qdhcp-df9dc8c6-447f-42a9-aece-d7743e204ea3', 'env', 'NEUTRON_NETWORK_ID=df9dc8c6-447f-42a9-aece-d7743e204ea3', 'dnsmasq', '--no-hosts', '--no-resolv', '--strict-order', '--bind-interfaces', '--interface=tapdf7a6130-3c', '--except-interface=lo', '--pid-file=/var/lib/neutron/dhcp/df9dc8c6-447f-42a9-aece-d7743e204ea3/pid', '--dhcp-hostsfile=/var/lib/neutron/dhcp/df9dc8c6-447f-42a9-aece-d7743e204ea3/host', '--addn-hosts=/var/lib/neutron/dhcp/df9dc8c6-447f-42a9-aece-d7743e204ea3/addn_hosts', '--dhcp-optsfile=/var/lib/neutron/dhcp/df9dc8c6-447f-42a9-aece-d7743e204ea3/opts', '--leasefile-ro', '--dhcp-range=tag0,10.2.0.0,static,86400s', '--dhcp-lease-max=256', '--conf-file=/etc/neutron/dnsmasq-neutron.conf', '--domain=openstacklocal'] 2014-12-11 14:59:16.491 20564 TRACE neutron.agent.dhcp_agent Exit code: 2 2014-12-11 14:59:16.491 20564 TRACE neutron.agent.dhcp_agent Stdout: '' 2014-12-11 14:59:16.491 20564 TRACE neutron.agent.dhcp_agent Stderr: '\ndnsmasq: failed to set SO_REUSE{ADDR|PORT} on DHCP socket: Protocol not available\n' -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: OpenPGP digital signature URL: From dmitry at athabascau.ca Fri Dec 12 00:58:56 2014 From: dmitry at athabascau.ca (Dmitry Makovey) Date: Thu, 11 Dec 2014 17:58:56 -0700 Subject: [Rdo-list] dnsmasq: failed to set SO_REUSE{ADDR|PORT} on DHCP socket: Protocol not available In-Reply-To: <548A1BFC.4020102@athabascau.ca> References: <548A1BFC.4020102@athabascau.ca> Message-ID: <548A3DD0.7040509@athabascau.ca> On 12/11/2014 03:34 PM, Dmitry Makovey wrote: > > Hi, I've got my RDO OpenStack IceHouse assembled and working, but only > to a point of assigning IPs, I'm getting errors in logs: > > dnsmasq: failed to set SO_REUSE{ADDR|PORT} on DHCP socket: Protocol not > available > > (see attached logs for more traceback) > > I have located the post: > https://ask.openstack.org/en/question/52570/update-to-dnsmasq-doesnt-solve-bug-977555/ > (along with RHBZ case #977555), however both sources suggest that "most > recent" dnsmasq should fix the issue. > > My current dnsmasq set is: > > # rpm -qa | grep dnsmasq > dnsmasq-utils-2.48-14.el6.x86_64 > dnsmasq-2.48-14.el6.x86_64 After downgrading the packages to 2.48-13 and restarting services, it looks like things are back under control... interestingly enough, nobody mentioned that "conntrack-tools" needs to be installed... ;) -- Dmitry Makovey Web Systems Administrator Athabasca University (780) 675-6245 --- Confidence is what you have before you understand the problem Woody Allen When in trouble when in doubt run in circles scream and shout http://www.wordwizard.com/phpbb3/viewtopic.php?f=16&t=19330 -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: OpenPGP digital signature URL: From brian at brianlee.org Fri Dec 12 01:00:17 2014 From: brian at brianlee.org (brian lee) Date: Thu, 11 Dec 2014 19:00:17 -0600 Subject: [Rdo-list] Neutron Problems In-Reply-To: References: <5489BFEC.2000801@laimbock.com> Message-ID: I have been working on this for days now and I just can not figure it out. Attached is a bit from horizon where it is showing both interfaces on the router as down. How can I find out what is preventing them from starting? ? --Brian On Thu, Dec 11, 2014 at 10:28 AM, brian lee wrote: > > Man my copy and paste just is not liking me. Anyways, I saw posting about > forcing the mac address every time, but I have not had a problem. > My problem is the port does not become active. I included the device > settings as a reference. This is the status of the port: > > > +-----------------------+-------------------------------------------------------------------------------------+ > | Field | Value > | > > +-----------------------+-------------------------------------------------------------------------------------+ > | admin_state_up | True > | > | allowed_address_pairs | > | > | binding:host_id | openstack-1.quicksand.bitc.morphotrust.com > | > | binding:profile | {} > | > | binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": true} > | > | binding:vif_type | ovs > | > | binding:vnic_type | normal > | > | device_id | 7319781c-6186-4684-ba60-260b5ecee97c > | > | device_owner | network:router_gateway > | > | extra_dhcp_opts | > | > | fixed_ips | {"subnet_id": > "7761c2ee-e392-48ff-b69a-f0f10bbcb6db", "ip_address": "10.30.1.10"} | > | id | 161de698-1666-4c0d-9248-8de900797301 > | > | mac_address | fa:16:3e:c9:ff:64 > | > | name | > | > | network_id | b10fc224-2332-49f5-b555-9090c3dc7f44 > | > | security_groups | > | > | status | DOWN > | > | tenant_id | > | > > 
+-----------------------+-------------------------------------------------------------------------------------+ > > I am just not able to get that port up. And since its not up I cant > ping/ssh to the VMs. What do I need to do for vlans on my physical switch? > > --Brian > > On Thu, Dec 11, 2014 at 10:01 AM, Patrick Laimbock > wrote: >> >> Hi Brian, >> >> On 11-12-14 16:15, brian lee wrote: >> >>> It looks like my cute and paste did not work right. My br-ex device >>> looks like this: >>> >>> DEVICE=br-ex >>> OVSBOOTPROTO="dhcp" >>> OVSDHCPINTERFACES="eth0" >>> ONBOOT=yes >>> NM_CONTROLLED=no >>> TYPE=OVSBridge >>> DEVICETYPE=ovs >>> DEVICE=br-ex >>> OVSBOOTPROTO="dhcp" >>> OVSDHCPINTERFACES="eth0" >>> ONBOOT=yes >>> NM_CONTROLLED=no >>> TYPE=OVSBridge >>> DEVICETYPE=ovs >>> >>> Sorry about the confusion. >>> >> >> I use RDO Juno and here are my interfaces: >> >> [root at neutron1-1 network-scripts]# cat ifcfg-br-ex >> DEVICE=br-ex >> TYPE=OVSBridge >> DEVICETYPE=ovs >> OVSBOOTPROTO=dhcp >> OVSDHCPINTERFACES=eth1 >> MACADDR="00:01:02:03:04:05" >> OVS_EXTRA="set bridge $DEVICE other-config:hwaddr=$MACADDR" >> ONBOOT=yes >> NM_CONTROLLED=no >> >> >> [root at neutron1-1 network-scripts]# cat ifcfg-eth1 >> DEVICE=eth1 >> TYPE=OVSPort >> DEVICETYPE=ovs >> OVS_BRIDGE=br-ex >> ONBOOT=yes >> BOOTPROTO=none >> NM_CONTROLLED=no >> >> HTH, >> Patrick >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Neutron_issue.PNG Type: image/png Size: 19093 bytes Desc: not available URL: From patrick at laimbock.com Fri Dec 12 02:20:08 2014 From: patrick at laimbock.com (Patrick Laimbock) Date: Fri, 12 Dec 2014 03:20:08 +0100 Subject: [Rdo-list] Neutron Problems In-Reply-To: References: <5489BFEC.2000801@laimbock.com> Message-ID: <548A50D8.8000405@laimbock.com> Hi Brian, Maybe there's a really simple solution but I don't have enough info to tell. So here's a "slightly" longer suggestion. For VLAN support on the *physical* network your switch will need to support 802.1Q. When you say VLANs what do you mean? If you want to use VLANs for tenant separation (so in the overlay network, not the physical network) then Open vSwitch will take care of that and AFAIK (I don't use VLANs) you don't need to enable VLANs on your ifcfg devices. Unless your physical network requires VLANs of course. The interfaces you pasted had VLAN=yes but not a VLAN designation (like DEVICE=eth0.10 where .10 indicates VLAN 10) and although configured for a static setting (DHCP commented out) there was no IP address defined. So maybe take a step back. Delete all the networks and routers (might need to do that from the CLI if things are stuck), on your Neutron node backup & delete ifcfg-br-ex and restore a working ifcfg-eth0, then restart the network and restart the Open vSwitch service on your neutron node so it detects previous stuff is gone (check with ovs-vsctl show), then start with defining the ifcfg-br-ex device and make sure your network is OK first (check with ip address show and restart the network and check again). Then add ethX to br-ex: # ovs-vsctl add-port br-ex ethX ; service network restart Make sure you have access to a local console so you don't get locked out if your network fails to restart. Then restart the Open vSwitch service. Then move on to create the tenant stuff you'll need. I don't know how you installed RDO.
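Collected into one command sketch, the reset-and-reattach steps above look roughly like this (illustrative only — it assumes eth0 is the NIC being enslaved to br-ex and RHEL/CentOS-era service names; adapt both to your environment):

```
# 1. With the stale networks/routers deleted and a working ifcfg-eth0 restored:
ovs-vsctl show                  # confirm the old bridge/ports are gone
service network restart
ip address show                 # sanity-check basic connectivity first

# 2. After writing the new ifcfg-br-ex:
ovs-vsctl add-port br-ex eth0 ; service network restart
service openvswitch restart     # pick up the new topology

# 3. Re-check from a local console in case networking drops:
ovs-vsctl show
```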
If you used Packstack and want VLAN tenant separation then you have already provided VLAN info and you should use that when setting things up with something like: As regular user: the router the private network the private subnet add private subnet to router As admin: the public network (to be used for example to access the Internet) the public subnet add public gateway on the router As regular user: Create some floating IPs Start an instance of for example the Cirros image Assign a floating IP address Once booted log into it via the console, ping local & remote addresses. Hopefully shout "YES!" :) FWIW: If you want VLANs for tenant separation then VXLAN and GRE are much easier: Read Rhyz's explanation (5th comment) why: https://openstack.redhat.com/forum/discussion/626/help-with-neutron-networking/p1 HTH, Patrick On 12-12-14 02:00, brian lee wrote: > I have been working on this for days now and I just can not figure it > out. Attached is a bit from horizon where it is showing both interfaces > on the router as down. How can I find out what is preventing them from > starting? > > ? > > --Brian > > On Thu, Dec 11, 2014 at 10:28 AM, brian lee > wrote: > > Man my copy and paste just is not liking me. Anyways, I saw posting > about forcing the mac address every time, but I have not had a problem. > My problem is the port does not become active. I included the device > settings as a reference. 
This is the status of the port: > > +-----------------------+-------------------------------------------------------------------------------------+ > | Field | Value > | > +-----------------------+-------------------------------------------------------------------------------------+ > | admin_state_up | True > | > | allowed_address_pairs | > | > | binding:host_id | openstack-1.quicksand.bitc.morphotrust.com > > | > | binding:profile | {} > | > | binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": > true} | > | binding:vif_type | ovs > | > | binding:vnic_type | normal > | > | device_id | 7319781c-6186-4684-ba60-260b5ecee97c > | > | device_owner | network:router_gateway > | > | extra_dhcp_opts | > | > | fixed_ips | {"subnet_id": > "7761c2ee-e392-48ff-b69a-f0f10bbcb6db", "ip_address": "10.30.1.10"} | > | id | 161de698-1666-4c0d-9248-8de900797301 > | > | mac_address | fa:16:3e:c9:ff:64 > | > | name | > | > | network_id | b10fc224-2332-49f5-b555-9090c3dc7f44 > | > | security_groups | > | > | status | DOWN > | > | tenant_id | > | > +-----------------------+-------------------------------------------------------------------------------------+ > > I am just not able to get that port up. And since its not up I cant > ping/ssh to the VMs. What do I need to do for vlans on my physical > switch? > > --Brian > > On Thu, Dec 11, 2014 at 10:01 AM, Patrick Laimbock > > wrote: > > Hi Brian, > > On 11-12-14 16:15, brian lee wrote: > > It looks like my cute and paste did not work right. My br-ex > device > looks like this: > > DEVICE=br-ex > OVSBOOTPROTO="dhcp" > OVSDHCPINTERFACES="eth0" > ONBOOT=yes > NM_CONTROLLED=no > TYPE=OVSBridge > DEVICETYPE=ovs > DEVICE=br-ex > OVSBOOTPROTO="dhcp" > OVSDHCPINTERFACES="eth0" > ONBOOT=yes > NM_CONTROLLED=no > TYPE=OVSBridge > DEVICETYPE=ovs > > Sorry about the confusion. 
> > > I use RDO Juno and here are my interfaces: > > [root at neutron1-1 network-scripts]# cat ifcfg-br-ex > DEVICE=br-ex > TYPE=OVSBridge > DEVICETYPE=ovs > OVSBOOTPROTO=dhcp > OVSDHCPINTERFACES=eth1 > MACADDR="00:01:02:03:04:05" > OVS_EXTRA="set bridge $DEVICE other-config:hwaddr=$MACADDR" > ONBOOT=yes > NM_CONTROLLED=no > > > [root at neutron1-1 network-scripts]# cat ifcfg-eth1 > DEVICE=eth1 > TYPE=OVSPort > DEVICETYPE=ovs > OVS_BRIDGE=br-ex > ONBOOT=yes > BOOTPROTO=none > NM_CONTROLLED=no > > HTH, > Patrick > > > _________________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/__mailman/listinfo/rdo-list > > From brian at brianlee.org Fri Dec 12 03:07:45 2014 From: brian at brianlee.org (brian lee) Date: Thu, 11 Dec 2014 21:07:45 -0600 Subject: [Rdo-list] Neutron Problems In-Reply-To: <548A50D8.8000405@laimbock.com> References: <5489BFEC.2000801@laimbock.com> <548A50D8.8000405@laimbock.com> Message-ID: Hi Patrick, Thanks for the info, it is slowly coming together for me, I hope. I do have a few more questions and I hope it will clear up more. First let me describe my environment more. I am using foreman to manage the physical hosts, and once openstack is running it will manage the VMs as well. So that is why I have a DHCP address for the host; it's a static lease from foreman. My physical environment is in a blade center that has two switches in it. One switch is for eth0 and the other is for eth1. For the controller host (Everything but nova compute) the switch is configured for trunked vlan 111 (Management) and 110 (tenants) for both eth0 and eth1. For the compute nodes, the switches are configured for vlan 111 only. I am thinking on my controller host I need to configure the eth0.110 device, give it a static IP and connect it to the br-ex, does that sound right? I do also have some confusion about vxlan and how it is used. Is that only in the "overlay" network?
From what I understand it can have tens of thousands of vlans, which the physical switches can not support. How does the OS/physical network handle that? Do you have to use a non-admin project to create the private network? Thanks again for the feedback, I feel I am getting close to resolving this. --Brian On Thu, Dec 11, 2014 at 8:20 PM, Patrick Laimbock wrote: > > Hi Brian, > > Maybe there's a really simple solution but I don't have enough info to > tell. So here's a "slightly" longer suggestion. > > For VLAN support on the *physical* network your switch will need to > support 802.1Q. When you say VLANs what do you mean? If you want to use > VLANs for tenant separation (so in the overlay network, not the physical > network) then Open vSwitch will take of that and AFAIK (I don't use VLANs) > you don't need to enable VLANs on your ifcfg devices. Unless your physical > network requires VLANs off course. > > The interfaces you pasted had VLAN=yes but not a VLAN designation (like > DEVICE=eth0.10 where .10 indicates VLAN 10) and although configured for a > static setting (DHCP commented out) there was no IP address defined. > > So maybe take a step back. Delete all the networks and routers (might need > to do that from the CLI if things are stuck), on your Neutron node backup & > delete ifcfg-br-ex and restore a working ifcfg-eth0, then restart the > network and restart the Open vSwitch service on your neutron node so it > detects previous stuff is gone (check with ovs-vsctl show), then start with > defining the ifcfg-br-ex device and make sure your network is OK first > (check with ip address show and restart the network and check again). Then > add ethX to br-ex: > # ovs-vsctl add-port br-ex ethX ; service network restart > Make sure you have access to a local console so you don't get locked out > if your network fails to restart. Then restart the Open vSwitch service. > > Then move on to create the tenant stuff you'll need. I don't know how you > installed RDO. 
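On the ID-space question above: the numbers fall straight out of the header formats. An 802.1Q tag carries a 12-bit VLAN ID, while the VXLAN header carries a 24-bit VNI — and VXLAN segments cross the physical fabric as ordinary UDP packets between the hypervisors, so the switches never see the VNIs as VLANs at all. A quick sanity check of the arithmetic:

```shell
# 802.1Q VID is 12 bits; VIDs 0 and 4095 are reserved.
usable_vlans=$(( (1 << 12) - 2 ))
# VXLAN VNI is 24 bits.
vxlan_segments=$(( 1 << 24 ))
echo "VLANs: $usable_vlans, VXLAN segments: $vxlan_segments"
# prints: VLANs: 4094, VXLAN segments: 16777216
```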
If you used Packstack and want VLAN tenant separation then > you have already provided VLAN info and you should use that when setting > things up with something like: > > As regular user: > the router > the private network > the private subnet > add private subnet to router > > As admin: > the public network (to be used for example to access the Internet) > the public subnet > add public gateway on the router > > As regular user: > Create some floating IPs > Start an instance of for example the Cirros image > Assign a floating IP address > Once booted log into it via the console, ping local & remote addresses. > Hopefully shout "YES!" :) > > FWIW: If you want VLANs for tenant separation then VXLAN and GRE are much > easier: Read Rhyz's explanation (5th comment) why: > https://openstack.redhat.com/forum/discussion/626/help- > with-neutron-networking/p1 > > HTH, > Patrick > > On 12-12-14 02:00, brian lee wrote: > >> I have been working on this for days now and I just can not figure it >> out. Attached is a bit from horizon where it is showing both interfaces >> on the router as down. How can I find out what is preventing them from >> starting? >> >> ? >> >> --Brian >> >> On Thu, Dec 11, 2014 at 10:28 AM, brian lee > > wrote: >> >> Man my copy and paste just is not liking me. Anyways, I saw posting >> about forcing the mac address every time, but I have not had a >> problem. >> My problem is the port does not become active. I included the device >> settings as a reference. 
This is the status of the port: >> >> +-----------------------+----------------------------------- >> --------------------------------------------------+ >> | Field | Value >> | >> +-----------------------+----------------------------------- >> --------------------------------------------------+ >> | admin_state_up | True >> | >> | allowed_address_pairs | >> | >> | binding:host_id | openstack-1.quicksand.bitc.morphotrust.com >> >> | >> | binding:profile | {} >> | >> | binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": >> true} | >> | binding:vif_type | ovs >> | >> | binding:vnic_type | normal >> | >> | device_id | 7319781c-6186-4684-ba60-260b5ecee97c >> | >> | device_owner | network:router_gateway >> | >> | extra_dhcp_opts | >> | >> | fixed_ips | {"subnet_id": >> "7761c2ee-e392-48ff-b69a-f0f10bbcb6db", "ip_address": "10.30.1.10"} | >> | id | 161de698-1666-4c0d-9248-8de900797301 >> | >> | mac_address | fa:16:3e:c9:ff:64 >> | >> | name | >> | >> | network_id | b10fc224-2332-49f5-b555-9090c3dc7f44 >> | >> | security_groups | >> | >> | status | DOWN >> | >> | tenant_id | >> | >> +-----------------------+----------------------------------- >> --------------------------------------------------+ >> >> I am just not able to get that port up. And since its not up I cant >> ping/ssh to the VMs. What do I need to do for vlans on my physical >> switch? >> >> --Brian >> >> On Thu, Dec 11, 2014 at 10:01 AM, Patrick Laimbock >> > wrote: >> >> Hi Brian, >> >> On 11-12-14 16:15, brian lee wrote: >> >> It looks like my cute and paste did not work right. My br-ex >> device >> looks like this: >> >> DEVICE=br-ex >> OVSBOOTPROTO="dhcp" >> OVSDHCPINTERFACES="eth0" >> ONBOOT=yes >> NM_CONTROLLED=no >> TYPE=OVSBridge >> DEVICETYPE=ovs >> DEVICE=br-ex >> OVSBOOTPROTO="dhcp" >> OVSDHCPINTERFACES="eth0" >> ONBOOT=yes >> NM_CONTROLLED=no >> TYPE=OVSBridge >> DEVICETYPE=ovs >> >> Sorry about the confusion. 
>> >> >> I use RDO Juno and here are my interfaces: >> >> [root at neutron1-1 network-scripts]# cat ifcfg-br-ex >> DEVICE=br-ex >> TYPE=OVSBridge >> DEVICETYPE=ovs >> OVSBOOTPROTO=dhcp >> OVSDHCPINTERFACES=eth1 >> MACADDR="00:01:02:03:04:05" >> OVS_EXTRA="set bridge $DEVICE other-config:hwaddr=$MACADDR" >> ONBOOT=yes >> NM_CONTROLLED=no >> >> >> [root at neutron1-1 network-scripts]# cat ifcfg-eth1 >> DEVICE=eth1 >> TYPE=OVSPort >> DEVICETYPE=ovs >> OVS_BRIDGE=br-ex >> ONBOOT=yes >> BOOTPROTO=none >> NM_CONTROLLED=no >> >> HTH, >> Patrick >> >> >> _________________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/__mailman/listinfo/rdo-list >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at brianlee.org Fri Dec 12 03:24:37 2014 From: brian at brianlee.org (brian lee) Date: Thu, 11 Dec 2014 21:24:37 -0600 Subject: [Rdo-list] Neutron Problems In-Reply-To: References: <5489BFEC.2000801@laimbock.com> <548A50D8.8000405@laimbock.com> Message-ID: Another follow up: What needs to be configured on the compute nodes? --Brian On Thu, Dec 11, 2014 at 9:07 PM, brian lee wrote: > > Hi Patrick, > > Thanks for the info, it is slowly coming together for me, I hope. I do > have a few more question and I hope it will clear up more. First let me > describe my environment more. I am using foreman to manage the physical > hosts, and once openstack is running it will manage the VMs as well. So > that is why I have a DHCP address for the host, its a static lease from > foreman. > > My physical environment is in a blade center that has two switches in it. > One switch is for eth0 and the other is for eth1. For the controller host > (Everything but nova compute) the switch is configured for trunked vlan 111 > (Management) and 110 (tenets) for both eth0 and eth1. For the compute > nodes, the switches are configured for vlan 111 only. 
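On the compute-node question: in the usual ML2/Open vSwitch layout, compute nodes run only the openvswitch agent — no br-ex, no L3 or DHCP agents — so the piece that matters is the agent's tunnel configuration. A hedged sketch for VXLAN tenant networks (the IP address is a placeholder, and the exact file path varies by release; Packstack normally fills these in for you):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini on a compute node (sketch)
[ovs]
enable_tunneling = True
# this node's address on the tunnel/management network:
local_ip = 192.168.111.21

[agent]
tunnel_types = vxlan
```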
> > I am thinking on my controller host I need to configure the eth0.110 > device, give it a static IP and connect it to the br-ex, does that sound > right? > > I do also have some confusion about vxlan and how it is used. Is that only > in the "overlay" network? From what I understand it can have tens of > thousands of vlans, which the physical switches can not support. How does > the OS/physical network handle that? > > Do you have to use a non-admin project to create the private network? > > Thanks again for the feedback, I feel I am getting close to resolving this. > > --Brian > > On Thu, Dec 11, 2014 at 8:20 PM, Patrick Laimbock > wrote: >> >> Hi Brian, >> >> Maybe there's a really simple solution but I don't have enough info to >> tell. So here's a "slightly" longer suggestion. >> >> For VLAN support on the *physical* network your switch will need to >> support 802.1Q. When you say VLANs what do you mean? If you want to use >> VLANs for tenant separation (so in the overlay network, not the physical >> network) then Open vSwitch will take of that and AFAIK (I don't use VLANs) >> you don't need to enable VLANs on your ifcfg devices. Unless your physical >> network requires VLANs off course. >> >> The interfaces you pasted had VLAN=yes but not a VLAN designation (like >> DEVICE=eth0.10 where .10 indicates VLAN 10) and although configured for a >> static setting (DHCP commented out) there was no IP address defined. >> >> So maybe take a step back. Delete all the networks and routers (might >> need to do that from the CLI if things are stuck), on your Neutron node >> backup & delete ifcfg-br-ex and restore a working ifcfg-eth0, then restart >> the network and restart the Open vSwitch service on your neutron node so it >> detects previous stuff is gone (check with ovs-vsctl show), then start with >> defining the ifcfg-br-ex device and make sure your network is OK first >> (check with ip address show and restart the network and check again). 
Then >> add ethX to br-ex: >> # ovs-vsctl add-port br-ex ethX ; service network restart >> Make sure you have access to a local console so you don't get locked out >> if your network fails to restart. Then restart the Open vSwitch service. >> >> Then move on to create the tenant stuff you'll need. I don't know how you >> installed RDO. If you used Packstack and want VLAN tenant separation then >> you have already provided VLAN info and you should use that when setting >> things up with something like: >> >> As regular user: >> the router >> the private network >> the private subnet >> add private subnet to router >> >> As admin: >> the public network (to be used for example to access the Internet) >> the public subnet >> add public gateway on the router >> >> As regular user: >> Create some floating IPs >> Start an instance of for example the Cirros image >> Assign a floating IP address >> Once booted log into it via the console, ping local & remote addresses. >> Hopefully shout "YES!" :) >> >> FWIW: If you want VLANs for tenant separation then VXLAN and GRE are much >> easier: Read Rhyz's explanation (5th comment) why: >> https://openstack.redhat.com/forum/discussion/626/help- >> with-neutron-networking/p1 >> >> HTH, >> Patrick >> >> On 12-12-14 02:00, brian lee wrote: >> >>> I have been working on this for days now and I just can not figure it >>> out. Attached is a bit from horizon where it is showing both interfaces >>> on the router as down. How can I find out what is preventing them from >>> starting? >>> >>> ? >>> >>> --Brian >>> >>> On Thu, Dec 11, 2014 at 10:28 AM, brian lee >> > wrote: >>> >>> Man my copy and paste just is not liking me. Anyways, I saw posting >>> about forcing the mac address every time, but I have not had a >>> problem. >>> My problem is the port does not become active. I included the device >>> settings as a reference. 
This is the status of the port: >>> >>> +-----------------------+----------------------------------- >>> --------------------------------------------------+ >>> | Field | Value >>> | >>> +-----------------------+----------------------------------- >>> --------------------------------------------------+ >>> | admin_state_up | True >>> | >>> | allowed_address_pairs | >>> | >>> | binding:host_id | openstack-1.quicksand.bitc.morphotrust.com >>> >>> | >>> | binding:profile | {} >>> | >>> | binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": >>> true} | >>> | binding:vif_type | ovs >>> | >>> | binding:vnic_type | normal >>> | >>> | device_id | 7319781c-6186-4684-ba60-260b5ecee97c >>> | >>> | device_owner | network:router_gateway >>> | >>> | extra_dhcp_opts | >>> | >>> | fixed_ips | {"subnet_id": >>> "7761c2ee-e392-48ff-b69a-f0f10bbcb6db", "ip_address": "10.30.1.10"} >>> | >>> | id | 161de698-1666-4c0d-9248-8de900797301 >>> | >>> | mac_address | fa:16:3e:c9:ff:64 >>> | >>> | name | >>> | >>> | network_id | b10fc224-2332-49f5-b555-9090c3dc7f44 >>> | >>> | security_groups | >>> | >>> | status | DOWN >>> | >>> | tenant_id | >>> | >>> +-----------------------+----------------------------------- >>> --------------------------------------------------+ >>> >>> I am just not able to get that port up. And since its not up I cant >>> ping/ssh to the VMs. What do I need to do for vlans on my physical >>> switch? >>> >>> --Brian >>> >>> On Thu, Dec 11, 2014 at 10:01 AM, Patrick Laimbock >>> > wrote: >>> >>> Hi Brian, >>> >>> On 11-12-14 16:15, brian lee wrote: >>> >>> It looks like my cute and paste did not work right. 
My br-ex >>> device >>> looks like this: >>> >>> DEVICE=br-ex >>> OVSBOOTPROTO="dhcp" >>> OVSDHCPINTERFACES="eth0" >>> ONBOOT=yes >>> NM_CONTROLLED=no >>> TYPE=OVSBridge >>> DEVICETYPE=ovs >>> DEVICE=br-ex >>> OVSBOOTPROTO="dhcp" >>> OVSDHCPINTERFACES="eth0" >>> ONBOOT=yes >>> NM_CONTROLLED=no >>> TYPE=OVSBridge >>> DEVICETYPE=ovs >>> >>> Sorry about the confusion. >>> >>> >>> I use RDO Juno and here are my interfaces: >>> >>> [root at neutron1-1 network-scripts]# cat ifcfg-br-ex >>> DEVICE=br-ex >>> TYPE=OVSBridge >>> DEVICETYPE=ovs >>> OVSBOOTPROTO=dhcp >>> OVSDHCPINTERFACES=eth1 >>> MACADDR="00:01:02:03:04:05" >>> OVS_EXTRA="set bridge $DEVICE other-config:hwaddr=$MACADDR" >>> ONBOOT=yes >>> NM_CONTROLLED=no >>> >>> >>> [root at neutron1-1 network-scripts]# cat ifcfg-eth1 >>> DEVICE=eth1 >>> TYPE=OVSPort >>> DEVICETYPE=ovs >>> OVS_BRIDGE=br-ex >>> ONBOOT=yes >>> BOOTPROTO=none >>> NM_CONTROLLED=no >>> >>> HTH, >>> Patrick >>> >>> >>> _________________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/__mailman/listinfo/rdo-list >>> >>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gchamoul at redhat.com Fri Dec 12 10:13:50 2014 From: gchamoul at redhat.com (=?iso-8859-1?Q?Ga=EBl?= Chamoulaud) Date: Fri, 12 Dec 2014 11:13:50 +0100 Subject: [Rdo-list] [RFC] RDO packaging guide DRAFT In-Reply-To: <548A0E64.4070906@redhat.com> References: <548A0E64.4070906@redhat.com> Message-ID: <20141212101350.GA26201@strider.cdg.redhat.com> On 11/Dec/2014 @ 16:36, Rich Bowen wrote: > > > On 12/08/2014 11:13 AM, Alan Pevec wrote: > >Hi all, > > > >When Red Hat launched the RDO community in April 2013, we chose to > >focus our energies on making an OpenStack distribution available and > >encouraging a self-supporting community of users around the > >distribution, related tools, and supporting documentation. 
With this > >community now established, it is clear we need to prioritize opening > >up the RDO development process. Or, to put it another way, it is time > >to begin opening up the technical governance of RDO. > > > >We want this process to be discussed and fleshed out in more detail > >publicly on rdo-list at redhat.com and as a first step we have published > >a draft of RDO Packaging guide at > >http://redhat-openstack.github.io/openstack-packaging-doc/rdo-packaging.html > > > >Please review and provide feedback, or even send pull requests. > > > The updated RDO packaging documentation is now available on the RDO website, > at https://openstack.redhat.com/packaging/rdo-packaging.html Note that the entry point is https://openstack.redhat.com/packaging/index.html -- Ga?l Chamoulaud Openstack Engineering Mail: [gchamoul|gael] at redhat dot com IRC: strider/gchamoul (Red Hat), gchamoul (Freenode) GnuPG Key ID: 7F4B301 C75F 15C2 A7FD EBC3 7B2D CE41 0077 6A4B A7F4 B301 Freedom...Courage...Commitment...Accountability -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From ihrachys at redhat.com Fri Dec 12 12:54:59 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Fri, 12 Dec 2014 13:54:59 +0100 Subject: [Rdo-list] dnsmasq: failed to set SO_REUSE{ADDR|PORT} on DHCP socket: Protocol not available In-Reply-To: <548A3DD0.7040509@athabascau.ca> References: <548A1BFC.4020102@athabascau.ca> <548A3DD0.7040509@athabascau.ca> Message-ID: <548AE5A3.50204@redhat.com> On 12/12/14 01:58, Dmitry Makovey wrote: > On 12/11/2014 03:34 PM, Dmitry Makovey wrote: >> >> Hi, I've got my RDO OpenStack IceHouse assembled and working, but >> only to a point of assigning IPs, I'm getting errors in logs: >> >> dnsmasq: failed to set SO_REUSE{ADDR|PORT} on DHCP socket: >> Protocol not available >> >> (see attached logs for more traceback) >> >> I have located the post: >> https://ask.openstack.org/en/question/52570/update-to-dnsmasq-doesnt-solve-bug-977555/ >> >> (along with RHBZ case #977555), however both sources suggest that "most >> recent" dnsmasq should fix the issue. >> >> My current dnsmasq set is: >> >> # rpm -qa | grep dnsmasq dnsmasq-utils-2.48-14.el6.x86_64 >> dnsmasq-2.48-14.el6.x86_64 > > after downgrading packages to 2.48-13 and restarting services looks > like things are back under control... This sounds like a bug. Can you report it? > > interestingly enough nobody mentioned that "conntrac-tools" needs > to be installed... ;) There was an issue in upstream backporting process when a patch that introduced that new dependency sneaked into upstream Icehouse stable branch. The issue was mentioned in release notes: https://wiki.openstack.org/wiki/ReleaseNotes/2014.1.3#Known_Issues_and_Limitations The patch that introduced that runtime dependency was reverted in upstream and will be released for the next (2014.1.4) Icehouse release.
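For reference, the failing call is easy to reproduce outside dnsmasq: it is a plain setsockopt() on the DHCP socket, and "Protocol not available" is ENOPROTOOPT from the SO_REUSEPORT option, which needs kernel support (Linux >= 3.9). A minimal Python sketch of the same option sequence — purely illustrative, not dnsmasq's actual code:

```python
import socket

def set_reuse_options(sock):
    """Apply the reuse options a DHCP daemon typically sets on its socket."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        # On kernels/libcs without SO_REUSEPORT, the constant may be
        # missing entirely or setsockopt() fails with ENOPROTOOPT
        # ("Protocol not available") -- the error seen in the logs above.
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
        return True
    except (AttributeError, OSError):
        # A robust daemon would fall back to SO_REUSEADDR alone here.
        return False

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
reuseport_ok = set_reuse_options(s)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) != 0)  # True
s.close()
```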
/Ihar From brian at brianlee.org Fri Dec 12 16:41:58 2014 From: brian at brianlee.org (brian lee) Date: Fri, 12 Dec 2014 10:41:58 -0600 Subject: [Rdo-list] Neutron Problems In-Reply-To: References: <5489BFEC.2000801@laimbock.com> <548A50D8.8000405@laimbock.com> Message-ID: PROGRESS! I was able to get my external network interface talking to the network. This is a first for me. So I can ping the router_gateway, even though it says it's down. But when I spin up a VM, it is not getting an address for the private network, and associating a float doesn't connect to the VM either. So it's getting closer. What does the network config need to look like on the compute nodes? --Brian On Thu, Dec 11, 2014 at 9:24 PM, brian lee wrote: > > Another follow up: What needs to be configured on the compute nodes? > > --Brian > > On Thu, Dec 11, 2014 at 9:07 PM, brian lee wrote: >> >> Hi Patrick, >> >> Thanks for the info, it is slowly coming together for me, I hope. I do >> have a few more question and I hope it will clear up more. First let me >> describe my environment more. I am using foreman to manage the physical >> hosts, and once openstack is running it will manage the VMs as well. So >> that is why I have a DHCP address for the host, its a static lease from >> foreman. >> >> My physical environment is in a blade center that has two switches in it. >> One switch is for eth0 and the other is for eth1.
For the controller host >> (Everything but nova compute) the switch is configured for trunked vlan 111 >> (Management) and 110 (tenets) for both eth0 and eth1. For the compute >> nodes, the switches are configured for vlan 111 only. >> >> I am thinking on my controller host I need to configure the eth0.110 >> device, give it a static IP and connect it to the br-ex, does that sound >> right? >> >> I do also have some confusion about vxlan and how it is used. Is that >> only in the "overlay" network? From what I understand it can have tens of >> thousands of vlans, which the physical switches can not support. How does >> the OS/physical network handle that? >> >> Do you have to use a non-admin project to create the private network? >> >> Thanks again for the feedback, I feel I am getting close to resolving >> this. >> >> --Brian >> >> On Thu, Dec 11, 2014 at 8:20 PM, Patrick Laimbock >> wrote: >>> >>> Hi Brian, >>> >>> Maybe there's a really simple solution but I don't have enough info to >>> tell. So here's a "slightly" longer suggestion. >>> >>> For VLAN support on the *physical* network your switch will need to >>> support 802.1Q. When you say VLANs what do you mean? If you want to use >>> VLANs for tenant separation (so in the overlay network, not the physical >>> network) then Open vSwitch will take of that and AFAIK (I don't use VLANs) >>> you don't need to enable VLANs on your ifcfg devices. Unless your physical >>> network requires VLANs off course. >>> >>> The interfaces you pasted had VLAN=yes but not a VLAN designation (like >>> DEVICE=eth0.10 where .10 indicates VLAN 10) and although configured for a >>> static setting (DHCP commented out) there was no IP address defined. >>> >>> So maybe take a step back. 
Delete all the networks and routers (might >>> need to do that from the CLI if things are stuck), on your Neutron node >>> backup & delete ifcfg-br-ex and restore a working ifcfg-eth0, then restart >>> the network and restart the Open vSwitch service on your neutron node so it >>> detects previous stuff is gone (check with ovs-vsctl show), then start with >>> defining the ifcfg-br-ex device and make sure your network is OK first >>> (check with ip address show and restart the network and check again). Then >>> add ethX to br-ex: >>> # ovs-vsctl add-port br-ex ethX ; service network restart >>> Make sure you have access to a local console so you don't get locked out >>> if your network fails to restart. Then restart the Open vSwitch service. >>> >>> Then move on to create the tenant stuff you'll need. I don't know how >>> you installed RDO. If you used Packstack and want VLAN tenant separation >>> then you have already provided VLAN info and you should use that when >>> setting things up with something like: >>> >>> As regular user: >>> the router >>> the private network >>> the private subnet >>> add private subnet to router >>> >>> As admin: >>> the public network (to be used for example to access the Internet) >>> the public subnet >>> add public gateway on the router >>> >>> As regular user: >>> Create some floating IPs >>> Start an instance of for example the Cirros image >>> Assign a floating IP address >>> Once booted log into it via the console, ping local & remote addresses. >>> Hopefully shout "YES!" :) >>> >>> FWIW: If you want VLANs for tenant separation then VXLAN and GRE are >>> much easier: Read Rhyz's explanation (5th comment) why: >>> https://openstack.redhat.com/forum/discussion/626/help- >>> with-neutron-networking/p1 >>> >>> HTH, >>> Patrick >>> >>> On 12-12-14 02:00, brian lee wrote: >>> >>>> I have been working on this for days now and I just can not figure it >>>> out. 
Attached is a bit from horizon where it is showing both interfaces >>>> on the router as down. How can I find out what is preventing them from >>>> starting? >>>> >>>> ? >>>> >>>> --Brian >>>> >>>> On Thu, Dec 11, 2014 at 10:28 AM, brian lee >>> > wrote: >>>> >>>> Man my copy and paste just is not liking me. Anyways, I saw posting >>>> about forcing the mac address every time, but I have not had a >>>> problem. >>>> My problem is the port does not become active. I included the device >>>> settings as a reference. This is the status of the port: >>>> >>>> +-----------------------+----------------------------------- >>>> --------------------------------------------------+ >>>> | Field | Value >>>> | >>>> +-----------------------+----------------------------------- >>>> --------------------------------------------------+ >>>> | admin_state_up | True >>>> | >>>> | allowed_address_pairs | >>>> | >>>> | binding:host_id | openstack-1.quicksand.bitc. >>>> morphotrust.com >>>> >>>> | >>>> | binding:profile | {} >>>> | >>>> | binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": >>>> true} | >>>> | binding:vif_type | ovs >>>> | >>>> | binding:vnic_type | normal >>>> | >>>> | device_id | 7319781c-6186-4684-ba60-260b5ecee97c >>>> | >>>> | device_owner | network:router_gateway >>>> | >>>> | extra_dhcp_opts | >>>> | >>>> | fixed_ips | {"subnet_id": >>>> "7761c2ee-e392-48ff-b69a-f0f10bbcb6db", "ip_address": >>>> "10.30.1.10"} | >>>> | id | 161de698-1666-4c0d-9248-8de900797301 >>>> | >>>> | mac_address | fa:16:3e:c9:ff:64 >>>> | >>>> | name | >>>> | >>>> | network_id | b10fc224-2332-49f5-b555-9090c3dc7f44 >>>> | >>>> | security_groups | >>>> | >>>> | status | DOWN >>>> | >>>> | tenant_id | >>>> | >>>> +-----------------------+----------------------------------- >>>> --------------------------------------------------+ >>>> >>>> I am just not able to get that port up. And since its not up I cant >>>> ping/ssh to the VMs. 
What do I need to do for vlans on my physical >>>> switch? >>>> >>>> --Brian >>>> >>>> On Thu, Dec 11, 2014 at 10:01 AM, Patrick Laimbock >>>> > wrote: >>>> >>>> Hi Brian, >>>> >>>> On 11-12-14 16:15, brian lee wrote: >>>> >>>> It looks like my cute and paste did not work right. My br-ex >>>> device >>>> looks like this: >>>> >>>> DEVICE=br-ex >>>> OVSBOOTPROTO="dhcp" >>>> OVSDHCPINTERFACES="eth0" >>>> ONBOOT=yes >>>> NM_CONTROLLED=no >>>> TYPE=OVSBridge >>>> DEVICETYPE=ovs >>>> DEVICE=br-ex >>>> OVSBOOTPROTO="dhcp" >>>> OVSDHCPINTERFACES="eth0" >>>> ONBOOT=yes >>>> NM_CONTROLLED=no >>>> TYPE=OVSBridge >>>> DEVICETYPE=ovs >>>> >>>> Sorry about the confusion. >>>> >>>> >>>> I use RDO Juno and here are my interfaces: >>>> >>>> [root at neutron1-1 network-scripts]# cat ifcfg-br-ex >>>> DEVICE=br-ex >>>> TYPE=OVSBridge >>>> DEVICETYPE=ovs >>>> OVSBOOTPROTO=dhcp >>>> OVSDHCPINTERFACES=eth1 >>>> MACADDR="00:01:02:03:04:05" >>>> OVS_EXTRA="set bridge $DEVICE other-config:hwaddr=$MACADDR" >>>> ONBOOT=yes >>>> NM_CONTROLLED=no >>>> >>>> >>>> [root at neutron1-1 network-scripts]# cat ifcfg-eth1 >>>> DEVICE=eth1 >>>> TYPE=OVSPort >>>> DEVICETYPE=ovs >>>> OVS_BRIDGE=br-ex >>>> ONBOOT=yes >>>> BOOTPROTO=none >>>> NM_CONTROLLED=no >>>> >>>> HTH, >>>> Patrick >>>> >>>> >>>> _________________________________________________ >>>> Rdo-list mailing list >>>> Rdo-list at redhat.com >>>> https://www.redhat.com/__mailman/listinfo/rdo-list >>>> >>>> >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Fri Dec 12 17:00:34 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 12 Dec 2014 12:00:34 -0500 Subject: [Rdo-list] Fedora 21 Cloud image In-Reply-To: References: , , <5489BFEC.2000801@laimbock.com>, , , <548A50D8.8000405@laimbock.com>, , , Message-ID: Current F21 cloud has size 3.2 GB [boris at juno1 Downloads]$ ls -l *.raw -rw-rw-r--. 
1 boris boris 3221225472 Dec 12 19:46 Fedora-Cloud-Base-20141203-21.x86_64.raw It's possible upload it via glance. Attempt to launch VM based on this image with flavour m1.medium fails with message "flavour insufficient to load VM" Thanks. Boris. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Fri Dec 12 21:21:48 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 12 Dec 2014 16:21:48 -0500 Subject: [Rdo-list] Fedora 21 Cloud image In-Reply-To: References: , , , , <5489BFEC.2000801@laimbock.com>, , , , , , <548A50D8.8000405@laimbock.com>, , , , , , , Message-ID: Sorry, $ qemu-img convert -f raw -O qcow2 Fedora-Cloud-Base-20141203-21.x86_64.raw Fedora-Cloud-Base-20141203-21.x86_64.qcow2 $ ls -l Fedora-Cloud-Base-*.qcow2 -rw-r--r--. 1 boris boris 441647104 Dec 13 00:14 Fedora-Cloud-Base-20141203-21.x86_64.qcow2 From: bderzhavets at hotmail.com To: rdo-list at redhat.com Date: Fri, 12 Dec 2014 12:00:34 -0500 Subject: [Rdo-list] Fedora 21 Cloud image Current F21 cloud has size 3.2 GB [boris at juno1 Downloads]$ ls -l *.raw -rw-rw-r--. 1 boris boris 3221225472 Dec 12 19:46 Fedora-Cloud-Base-20141203-21.x86_64.raw It's possible upload it via glance. Attempt to launch VM based on this image with flavour m1.medium fails with message "flavour insufficient to load VM" Thanks. Boris. _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at brianlee.org Sat Dec 13 03:14:28 2014 From: brian at brianlee.org (brian lee) Date: Fri, 12 Dec 2014 21:14:28 -0600 Subject: [Rdo-list] Neutron Problems In-Reply-To: References: <5489BFEC.2000801@laimbock.com> <548A50D8.8000405@laimbock.com> Message-ID: I am again stumped on this problem. The VMs are able to spin up but just do not get a IP address. 
All of the services are happy when I do neutron agent-list. Nothing is jumping out in the log files to me. Any idea's? --Brian On Fri, Dec 12, 2014 at 10:41 AM, brian lee wrote: > > PROGRESS! > > I was able to get my external network interface talking to the network. > This is a first for me. So I can ping the router_gateway, even though it > says its down. > > But when I spin up a VM, it is not getting a address for the private > network, and associating a float doesn't connect to the VM either. > > So its getting closer. > > What does the network config need to look like on the compute nodes? > > --Brian > > On Thu, Dec 11, 2014 at 9:24 PM, brian lee wrote: >> >> Another follow up: What needs to be configured on the compute nodes? >> >> --Brian >> >> On Thu, Dec 11, 2014 at 9:07 PM, brian lee wrote: >>> >>> Hi Patrick, >>> >>> Thanks for the info, it is slowly coming together for me, I hope. I do >>> have a few more question and I hope it will clear up more. First let me >>> describe my environment more. I am using foreman to manage the physical >>> hosts, and once openstack is running it will manage the VMs as well. So >>> that is why I have a DHCP address for the host, its a static lease from >>> foreman. >>> >>> My physical environment is in a blade center that has two switches in >>> it. One switch is for eth0 and the other is for eth1. For the controller >>> host (Everything but nova compute) the switch is configured for trunked >>> vlan 111 (Management) and 110 (tenets) for both eth0 and eth1. For the >>> compute nodes, the switches are configured for vlan 111 only. >>> >>> I am thinking on my controller host I need to configure the eth0.110 >>> device, give it a static IP and connect it to the br-ex, does that sound >>> right? >>> >>> I do also have some confusion about vxlan and how it is used. Is that >>> only in the "overlay" network? From what I understand it can have tens of >>> thousands of vlans, which the physical switches can not support. 
How does >>> the OS/physical network handle that? >>> >>> Do you have to use a non-admin project to create the private network? >>> >>> Thanks again for the feedback, I feel I am getting close to resolving >>> this. >>> >>> --Brian >>> >>> On Thu, Dec 11, 2014 at 8:20 PM, Patrick Laimbock >>> wrote: >>>> >>>> Hi Brian, >>>> >>>> Maybe there's a really simple solution but I don't have enough info to >>>> tell. So here's a "slightly" longer suggestion. >>>> >>>> For VLAN support on the *physical* network your switch will need to >>>> support 802.1Q. When you say VLANs what do you mean? If you want to use >>>> VLANs for tenant separation (so in the overlay network, not the physical >>>> network) then Open vSwitch will take of that and AFAIK (I don't use VLANs) >>>> you don't need to enable VLANs on your ifcfg devices. Unless your physical >>>> network requires VLANs off course. >>>> >>>> The interfaces you pasted had VLAN=yes but not a VLAN designation (like >>>> DEVICE=eth0.10 where .10 indicates VLAN 10) and although configured for a >>>> static setting (DHCP commented out) there was no IP address defined. >>>> >>>> So maybe take a step back. Delete all the networks and routers (might >>>> need to do that from the CLI if things are stuck), on your Neutron node >>>> backup & delete ifcfg-br-ex and restore a working ifcfg-eth0, then restart >>>> the network and restart the Open vSwitch service on your neutron node so it >>>> detects previous stuff is gone (check with ovs-vsctl show), then start with >>>> defining the ifcfg-br-ex device and make sure your network is OK first >>>> (check with ip address show and restart the network and check again). Then >>>> add ethX to br-ex: >>>> # ovs-vsctl add-port br-ex ethX ; service network restart >>>> Make sure you have access to a local console so you don't get locked >>>> out if your network fails to restart. Then restart the Open vSwitch service. >>>> >>>> Then move on to create the tenant stuff you'll need. 
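[The reset-and-reattach sequence Patrick outlines above can be sketched as a console session. The interface name ethX and the Red Hat-style network service are assumptions carried over from his description:

```
# Verify what Open vSwitch currently knows about
ovs-vsctl show

# Attach the physical port to br-ex and restart networking in one shot.
# Run this from a local console: the uplink drops while the port moves.
ovs-vsctl add-port br-ex ethX ; service network restart

# Confirm br-ex now carries the IP address and ethX is a port of the bridge
ip address show br-ex
ovs-vsctl list-ports br-ex
```
]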
I don't know how >>>> you installed RDO. If you used Packstack and want VLAN tenant separation >>>> then you have already provided VLAN info and you should use that when >>>> setting things up with something like: >>>> >>>> As regular user: >>>> the router >>>> the private network >>>> the private subnet >>>> add private subnet to router >>>> >>>> As admin: >>>> the public network (to be used for example to access the Internet) >>>> the public subnet >>>> add public gateway on the router >>>> >>>> As regular user: >>>> Create some floating IPs >>>> Start an instance of for example the Cirros image >>>> Assign a floating IP address >>>> Once booted log into it via the console, ping local & remote addresses. >>>> Hopefully shout "YES!" :) >>>> >>>> FWIW: If you want VLANs for tenant separation then VXLAN and GRE are >>>> much easier: Read Rhyz's explanation (5th comment) why: >>>> https://openstack.redhat.com/forum/discussion/626/help- >>>> with-neutron-networking/p1 >>>> >>>> HTH, >>>> Patrick >>>> >>>> On 12-12-14 02:00, brian lee wrote: >>>> >>>>> I have been working on this for days now and I just can not figure it >>>>> out. Attached is a bit from horizon where it is showing both interfaces >>>>> on the router as down. How can I find out what is preventing them from >>>>> starting? >>>>> >>>>> ? >>>>> >>>>> --Brian >>>>> >>>>> On Thu, Dec 11, 2014 at 10:28 AM, brian lee >>>> > wrote: >>>>> >>>>> Man my copy and paste just is not liking me. Anyways, I saw posting >>>>> about forcing the mac address every time, but I have not had a >>>>> problem. >>>>> My problem is the port does not become active. I included the >>>>> device >>>>> settings as a reference. 
This is the status of the port: >>>>> >>>>> +-----------------------+----------------------------------- >>>>> --------------------------------------------------+ >>>>> | Field | Value >>>>> | >>>>> +-----------------------+----------------------------------- >>>>> --------------------------------------------------+ >>>>> | admin_state_up | True >>>>> | >>>>> | allowed_address_pairs | >>>>> | >>>>> | binding:host_id | openstack-1.quicksand.bitc. >>>>> morphotrust.com >>>>> >>>>> | >>>>> | binding:profile | {} >>>>> | >>>>> | binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": >>>>> true} | >>>>> | binding:vif_type | ovs >>>>> | >>>>> | binding:vnic_type | normal >>>>> | >>>>> | device_id | 7319781c-6186-4684-ba60-260b5ecee97c >>>>> | >>>>> | device_owner | network:router_gateway >>>>> | >>>>> | extra_dhcp_opts | >>>>> | >>>>> | fixed_ips | {"subnet_id": >>>>> "7761c2ee-e392-48ff-b69a-f0f10bbcb6db", "ip_address": >>>>> "10.30.1.10"} | >>>>> | id | 161de698-1666-4c0d-9248-8de900797301 >>>>> | >>>>> | mac_address | fa:16:3e:c9:ff:64 >>>>> | >>>>> | name | >>>>> | >>>>> | network_id | b10fc224-2332-49f5-b555-9090c3dc7f44 >>>>> | >>>>> | security_groups | >>>>> | >>>>> | status | DOWN >>>>> | >>>>> | tenant_id | >>>>> | >>>>> +-----------------------+----------------------------------- >>>>> --------------------------------------------------+ >>>>> >>>>> I am just not able to get that port up. And since its not up I cant >>>>> ping/ssh to the VMs. What do I need to do for vlans on my physical >>>>> switch? >>>>> >>>>> --Brian >>>>> >>>>> On Thu, Dec 11, 2014 at 10:01 AM, Patrick Laimbock >>>>> > wrote: >>>>> >>>>> Hi Brian, >>>>> >>>>> On 11-12-14 16:15, brian lee wrote: >>>>> >>>>> It looks like my cute and paste did not work right. 
My >>>>> br-ex >>>>> device >>>>> looks like this: >>>>> >>>>> DEVICE=br-ex >>>>> OVSBOOTPROTO="dhcp" >>>>> OVSDHCPINTERFACES="eth0" >>>>> ONBOOT=yes >>>>> NM_CONTROLLED=no >>>>> TYPE=OVSBridge >>>>> DEVICETYPE=ovs >>>>> DEVICE=br-ex >>>>> OVSBOOTPROTO="dhcp" >>>>> OVSDHCPINTERFACES="eth0" >>>>> ONBOOT=yes >>>>> NM_CONTROLLED=no >>>>> TYPE=OVSBridge >>>>> DEVICETYPE=ovs >>>>> >>>>> Sorry about the confusion. >>>>> >>>>> >>>>> I use RDO Juno and here are my interfaces: >>>>> >>>>> [root at neutron1-1 network-scripts]# cat ifcfg-br-ex >>>>> DEVICE=br-ex >>>>> TYPE=OVSBridge >>>>> DEVICETYPE=ovs >>>>> OVSBOOTPROTO=dhcp >>>>> OVSDHCPINTERFACES=eth1 >>>>> MACADDR="00:01:02:03:04:05" >>>>> OVS_EXTRA="set bridge $DEVICE other-config:hwaddr=$MACADDR" >>>>> ONBOOT=yes >>>>> NM_CONTROLLED=no >>>>> >>>>> >>>>> [root at neutron1-1 network-scripts]# cat ifcfg-eth1 >>>>> DEVICE=eth1 >>>>> TYPE=OVSPort >>>>> DEVICETYPE=ovs >>>>> OVS_BRIDGE=br-ex >>>>> ONBOOT=yes >>>>> BOOTPROTO=none >>>>> NM_CONTROLLED=no >>>>> >>>>> HTH, >>>>> Patrick >>>>> >>>>> >>>>> _________________________________________________ >>>>> Rdo-list mailing list >>>>> Rdo-list at redhat.com >>>>> https://www.redhat.com/__mailman/listinfo/rdo-list >>>>> >>>>> >>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick at laimbock.com Sun Dec 14 13:05:19 2014 From: patrick at laimbock.com (Patrick Laimbock) Date: Sun, 14 Dec 2014 14:05:19 +0100 Subject: [Rdo-list] Neutron Problems In-Reply-To: References: <5489BFEC.2000801@laimbock.com> <548A50D8.8000405@laimbock.com> Message-ID: <548D8B0F.60007@laimbock.com> Hi Brian, On 12/12/2014 04:07 AM, brian lee wrote: > Hi Patrick, > > Thanks for the info, it is slowly coming together for me, I hope. I do > have a few more question and I hope it will clear up more. First let me > describe my environment more. I am using foreman to manage the physical > hosts, and once openstack is running it will manage the VMs as well. 
So > that is why I have a DHCP address for the host, its a static lease from > foreman. Got it. > My physical environment is in a blade center that has two switches in > it. One switch is for eth0 and the other is for eth1. For the controller > host (Everything but nova compute) the switch is configured for trunked > vlan 111 (Management) and 110 (tenets) for both eth0 and eth1. For the > compute nodes, the switches are configured for vlan 111 only. Have a look in this doc for the minimum required interfaces: http://docs.openstack.org/juno/install-guide/install/yum/content/ch_overview.html So a Controller node has at least 1 interface (combined mgmt & api) but in my experience usually 2 (mgmt, api) or 3 (mgmt, public api, private api). A Neutron node has 3 interfaces (mgmt/api, tunnel, external) or 4 if you want the api traffic separated, and a Compute node has at least 2 interfaces (mgmt, tunnel) or 3 (mgmt, tunnel, storage). With 2 physical interfaces you can bond/team them and just create a bunch of ethX.YYY VLAN interfaces to meet the requirements above. > I am thinking on my controller host I need to configure the eth0.110 > device, give it a static IP and connect it to the br-ex, does that sound > right? See above. > I do also have some confusion about vxlan and how it is used. Is that > only in the "overlay" network? You can use VXLAN both in the overlay and underlay aka physical network. In the overlay network it's all virtual and managed by Open vSwitch. In the underlay network it's configured on your physical nics and in your switches. > From what I understand it can have tens > of thousands of vlans, which the physical switches can not support. How > does the OS/physical network handle that? VXLAN (and GRE) can handle even way more than that. You will only see big numbers in really big Clouds and then only in the overlay part. 
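[The scale difference mentioned above comes straight from the header formats: an 802.1Q tag carries a 12-bit VLAN ID, while a VXLAN header carries a 24-bit VXLAN Network Identifier (VNI). A quick check of the arithmetic:

```shell
# 802.1Q VLAN ID field is 12 bits; VXLAN VNI field is 24 bits
echo $((2**12))   # 4096 possible VLAN ID values (a couple are reserved)
echo $((2**24))   # 16777216 possible VXLAN segment IDs
```

That is why the overlay can outgrow anything a physical switch tracks, while the underlay only ever sees ordinary encapsulated traffic between hosts.]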
The underlay part is still pretty standard: a physical interface on a Compute host dedicated to br-tun (so tunnelling VXLAN, GRE etc. traffic) can handle traffic for thousands of VXLANs simply because it's transparent. To the OS/switch it's just regular traffic going from A to B. > Do you have to use a non-admin project to create the private network? A tenant's private networks should be owned by that tenant. You can create it both as that tenant or as the admin. If you create it as the admin then you will need to specify the tenant. > Thanks again for the feedback, I feel I am getting close to resolving this. Hope you will get it working soon. HTH, Patrick > > On Thu, Dec 11, 2014 at 8:20 PM, Patrick Laimbock > wrote: > > Hi Brian, > > Maybe there's a really simple solution but I don't have enough info > to tell. So here's a "slightly" longer suggestion. > > For VLAN support on the *physical* network your switch will need to > support 802.1Q. When you say VLANs what do you mean? If you want to > use VLANs for tenant separation (so in the overlay network, not the > physical network) then Open vSwitch will take of that and AFAIK (I > don't use VLANs) you don't need to enable VLANs on your ifcfg > devices. Unless your physical network requires VLANs off course. > > The interfaces you pasted had VLAN=yes but not a VLAN designation > (like DEVICE=eth0.10 where .10 indicates VLAN 10) and although > configured for a static setting (DHCP commented out) there was no IP > address defined. > > So maybe take a step back. 
Delete all the networks and routers > (might need to do that from the CLI if things are stuck), on your > Neutron node backup & delete ifcfg-br-ex and restore a working > ifcfg-eth0, then restart the network and restart the Open vSwitch > service on your neutron node so it detects previous stuff is gone > (check with ovs-vsctl show), then start with defining the > ifcfg-br-ex device and make sure your network is OK first (check > with ip address show and restart the network and check again). Then > add ethX to br-ex: > # ovs-vsctl add-port br-ex ethX ; service network restart > Make sure you have access to a local console so you don't get locked > out if your network fails to restart. Then restart the Open vSwitch > service. > > Then move on to create the tenant stuff you'll need. I don't know > how you installed RDO. If you used Packstack and want VLAN tenant > separation then you have already provided VLAN info and you should > use that when setting things up with something like: > > As regular user: > the router > the private network > the private subnet > add private subnet to router > > As admin: > the public network (to be used for example to access the Internet) > the public subnet > add public gateway on the router > > As regular user: > Create some floating IPs > Start an instance of for example the Cirros image > Assign a floating IP address > Once booted log into it via the console, ping local & remote > addresses. Hopefully shout "YES!" :) > > FWIW: If you want VLANs for tenant separation then VXLAN and GRE are > much easier: Read Rhyz's explanation (5th comment) why: > https://openstack.redhat.com/__forum/discussion/626/help-__with-neutron-networking/p1 > > > HTH, > Patrick > > On 12-12-14 02:00, brian lee wrote: > > I have been working on this for days now and I just can not > figure it > out. Attached is a bit from horizon where it is showing both > interfaces > on the router as down. How can I find out what is preventing > them from > starting? 
> > ? > > --Brian > > On Thu, Dec 11, 2014 at 10:28 AM, brian lee > >> wrote: > > Man my copy and paste just is not liking me. Anyways, I saw > posting > about forcing the mac address every time, but I have not > had a problem. > My problem is the port does not become active. I included > the device > settings as a reference. This is the status of the port: > > > +-----------------------+-----__------------------------------__------------------------------__--------------------+ > | Field | Value > | > > +-----------------------+-----__------------------------------__------------------------------__--------------------+ > | admin_state_up | True > | > | allowed_address_pairs | > | > | binding:host_id | > openstack-1.quicksand.bitc.__morphotrust.com > > > > | > | binding:profile | {} > | > | binding:vif_details | {"port_filter": true, > "ovs_hybrid_plug": > true} | > | binding:vif_type | ovs > | > | binding:vnic_type | normal > | > | device_id | > 7319781c-6186-4684-ba60-__260b5ecee97c > | > | device_owner | network:router_gateway > | > | extra_dhcp_opts | > | > | fixed_ips | {"subnet_id": > "7761c2ee-e392-48ff-b69a-__f0f10bbcb6db", "ip_address": > "10.30.1.10"} | > | id | > 161de698-1666-4c0d-9248-__8de900797301 > | > | mac_address | fa:16:3e:c9:ff:64 > | > | name | > | > | network_id | > b10fc224-2332-49f5-b555-__9090c3dc7f44 > | > | security_groups | > | > | status | DOWN > | > | tenant_id | > | > > +-----------------------+-----__------------------------------__------------------------------__--------------------+ > > I am just not able to get that port up. And since its not > up I cant > ping/ssh to the VMs. What do I need to do for vlans on my > physical > switch? > > --Brian > > On Thu, Dec 11, 2014 at 10:01 AM, Patrick Laimbock > > >> wrote: > > Hi Brian, > > On 11-12-14 16:15, brian lee wrote: > > It looks like my cute and paste did not work right. 
> My br-ex > device > looks like this: > > DEVICE=br-ex > OVSBOOTPROTO="dhcp" > OVSDHCPINTERFACES="eth0" > ONBOOT=yes > NM_CONTROLLED=no > TYPE=OVSBridge > DEVICETYPE=ovs > DEVICE=br-ex > OVSBOOTPROTO="dhcp" > OVSDHCPINTERFACES="eth0" > ONBOOT=yes > NM_CONTROLLED=no > TYPE=OVSBridge > DEVICETYPE=ovs > > Sorry about the confusion. > > > I use RDO Juno and here are my interfaces: > > [root at neutron1-1 network-scripts]# cat ifcfg-br-ex > DEVICE=br-ex > TYPE=OVSBridge > DEVICETYPE=ovs > OVSBOOTPROTO=dhcp > OVSDHCPINTERFACES=eth1 > MACADDR="00:01:02:03:04:05" > OVS_EXTRA="set bridge $DEVICE other-config:hwaddr=$MACADDR" > ONBOOT=yes > NM_CONTROLLED=no > > > [root at neutron1-1 network-scripts]# cat ifcfg-eth1 > DEVICE=eth1 > TYPE=OVSPort > DEVICETYPE=ovs > OVS_BRIDGE=br-ex > ONBOOT=yes > BOOTPROTO=none > NM_CONTROLLED=no > > HTH, > Patrick > > > ___________________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > > > https://www.redhat.com/____mailman/listinfo/rdo-list > > > > > From patrick at laimbock.com Sun Dec 14 13:06:46 2014 From: patrick at laimbock.com (Patrick Laimbock) Date: Sun, 14 Dec 2014 14:06:46 +0100 Subject: [Rdo-list] Neutron Problems In-Reply-To: References: <5489BFEC.2000801@laimbock.com> <548A50D8.8000405@laimbock.com> Message-ID: <548D8B66.9020608@laimbock.com> Hi Brian, On 12/12/2014 04:24 AM, brian lee wrote: > Another follow up: What needs to be configured on the compute nodes? I think I answered that in my reply to your other post. If it's still unclear let me know. Cheers, Patrick From patrick at laimbock.com Sun Dec 14 13:26:50 2014 From: patrick at laimbock.com (Patrick Laimbock) Date: Sun, 14 Dec 2014 14:26:50 +0100 Subject: [Rdo-list] Neutron Problems In-Reply-To: References: <5489BFEC.2000801@laimbock.com> <548A50D8.8000405@laimbock.com> Message-ID: <548D901A.4000904@laimbock.com> Hi Brian, On 12/13/2014 04:14 AM, brian lee wrote: > I am again stumped on this problem. 
The VMs are able to spin up but just > do not get an IP address. All of the services are happy when I do neutron > agent-list. Nothing is jumping out in the log files to me. > > Any ideas? Seems like the DHCP part isn't working. Many similar reports can be found in Google. Perhaps the suggestions can help. If you can't figure it out have a look at the Neutron networking troubleshooting part here: http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html If your network setup was incorrect to start with then troubleshooting the DHCP issue is like entering an infinite rabbit hole :P I haven't heard much about OpenStack deployments with Foreman. I have no idea what the status is of that deployment method. You can always use the RDO CentOS install guide to walk through it, compare with your config files and verify that your settings are OK. Alternatively maybe try Packstack? Whatever you do, make sure you first have the network interfaces & IP addresses on those interfaces (VLANs) figured out. AFAIK both Foreman and Packstack use the same Puppet modules and at least Packstack assumes a consistent interface naming across nodes. HTH, Patrick From meilei007 at gmail.com Mon Dec 15 02:22:23 2014 From: meilei007 at gmail.com (lei mei) Date: Mon, 15 Dec 2014 10:22:23 +0800 Subject: [Rdo-list] default password of rdo pre-build image Message-ID: Hi Guys, I downloaded the pre-built image for openstack from the page below: https://openstack.redhat.com/Image_resources Specifically, I downloaded this image: http://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud.qcow2 But when I start a new instance using this image, I don't know the default root password of this cloud image, any idea? Best Regards, -Andy -------------- next part -------------- An HTML attachment was scrubbed...
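[Picking up the DHCP thread above: when instances boot but never receive an address, a common first check is the qdhcp namespace on the network node. A sketch of that check; the UUID reuses the network_id from Brian's earlier port listing and the interface filter is illustrative:

```
# On the network/neutron node: each tenant network gets its own qdhcp- namespace
ip netns list

# Check that the dnsmasq port exists and holds an address in the tenant subnet
ip netns exec qdhcp-b10fc224-2332-49f5-b555-9090c3dc7f44 ip address show

# Watch whether DHCP requests from the instance ever arrive
ip netns exec qdhcp-b10fc224-2332-49f5-b555-9090c3dc7f44 \
    tcpdump -n -i any port 67 or port 68
```

If the requests never show up in the namespace, the problem is usually in the path between the compute node and the network node (security groups, tunnel config, or the physical trunk), not in dnsmasq itself.]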
URL: From ak at cloudssky.com Mon Dec 15 09:02:21 2014 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Mon, 15 Dec 2014 10:02:21 +0100 Subject: [Rdo-list] default password of rdo pre-build image In-Reply-To: References: Message-ID: Lei, The image has no known root password and that's good so. You need to ssh into the instance with: ssh -i centos@ Best, -Arash On Mon, Dec 15, 2014 at 3:22 AM, lei mei wrote: > > Hi Guys, > I download the pre-build imalge for openstack on below page: > https://openstack.redhat.com/Image_resources > Specifically, I download this image: > http://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud.qcow2 > But when I start new instance by using this image, I dont know the > default root password of this cloud image, any idea? > > > Best Regards, > -Andy > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Mon Dec 15 10:06:40 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 15 Dec 2014 11:06:40 +0100 Subject: [Rdo-list] RDO Bug triage day tomorrow [16DEC2014] Message-ID: <20141215100640.GC19788@tesla.redhat.com> Heya, Tomorrow (3rd Tuesday of the month) is the 'official' RDO bug triage day. If you have some spare cycles, please join us in helping triage bugs/root-cause analysis in your area of expertise. Here's some details to get started[1] with bug triaging. Briefly, current state[*] of RDO bugs as of today: - NEW, ASSIGNED, ON_DEV: 205 - MODIFIED, POST, ON_QA: 149 - VERIFIED : 13 A few useful Bugzilla queries are here[1]. All the bugs with their descriptions in plain text here[2]. 
[1] https://openstack.redhat.com/RDO-BugTriage#Bugzilla_queries [2] https://kashyapc.fedorapeople.org/virt/openstack/rdo-bug-status/all-rdo-bugs-15-12-2014.txt -- /kashyap From pmyers at redhat.com Mon Dec 15 12:21:37 2014 From: pmyers at redhat.com (Perry Myers) Date: Mon, 15 Dec 2014 07:21:37 -0500 Subject: [Rdo-list] default password of rdo pre-build image In-Reply-To: References: Message-ID: <548ED251.6020800@redhat.com> On 12/15/2014 04:02 AM, Arash Kaffamanesh wrote: > Lei, > The image has no known root password and that's good so. > You need to ssh into the instance with: > ssh -i centos@ Or, before importing into Glance, you can use a tool like virt-sysprep to insert a root password into the image. $ sudo yum install libguestfs-tools-c $ virt-sysprep --root-password password:password -a imagefile.qcow2 See man virt-sysprep for info on how to use the --root-password cmdline option. Cheers, Perry From rbowen at redhat.com Mon Dec 15 18:38:43 2014 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 15 Dec 2014 13:38:43 -0500 Subject: [Rdo-list] RDO/OpenStack meetups coming up (December 15, 2014) Message-ID: <548F2AB3.10906@redhat.com> The following are the meetups I'm aware of in the coming week where RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://openstack.redhat.com/Events If there's a meetup in your area, please consider attending. It's the best way to find out what interesting things are going on in the larger community, and a great way to make contacts that will help you solve your own problems in the future. And don't forget to blog about it, tweet about it, G+ about it. 
--Rich * Wednesday, December 17 in Austin, TX, US: CloudAustin December: The Twelve Clouds of Christmas - http://www.meetup.com/CloudAustin/events/212248062/ * Wednesday, December 17 in Mountain View, CA, US: A production deployment of SaltStack 2014.7 - http://www.meetup.com/SaltStack-user-group-Silicon-Valley/events/219088938/ * Tuesday, December 16 in Belfast, GB: DevOps Belfast - OpenStack, building your own cloud. - http://www.meetup.com/DevOps-Belfast/events/215956092/ * Wednesday, December 17 in Montevideo, UY: Openstack - nova boot --flavor m1.tiny meetup0 - http://www.meetup.com/DevOps-MVD/events/213384542/ * Thursday, December 18 in Mountain View, CA, US: Group based Policy in OpenDaylight - http://www.meetup.com/OpenDaylight-Silicon-Valley/events/219221090/ * Thursday, December 18 in Portland, OR, US: OSNW Birthday: Beat the Holidays with an extra dose of knowledge - http://www.meetup.com/OpenStack-Northwest/events/218941697/ * Wednesday, December 17 in Berlin, DE: OpenStack DACH Day 2015: Vereinsgr?ndung - http://www.meetup.com/openstack-de/events/219117732/ * Thursday, December 18 in Mountain View, CA, US: Online Meetup: DefCore - making OpenStack standard and interoperable - http://www.meetup.com/Cloud-Online-Meetup/events/219190801/ * Thursday, December 18 in New York, NY, US: "Is OpenStack ready for Enterprises?" 
- http://www.meetup.com/OpenStack-for-Enterprises-NYC/events/218900712/ * Friday, December 19 in Whittier, CA, US: Introduction to Red Hat and OpenShift (cohost with South Bay LAJUG) - http://www.meetup.com/Greater-Los-Angeles-Area-Red-Hat-User-Group-RHUG/events/217273042/ * Friday, December 19 in San Francisco, CA, US: South Bay OpenStack Meetup, Beginner track - http://www.meetup.com/openstack/events/218900735/ * Friday, December 19 in Atlanta, GA, US: OpenStack Meetup (Topic TBD) - http://www.meetup.com/openstack-atlanta/events/218782182/ * Saturday, December 20 in Beijing, CN: SDN Tech Talk:OpenStack Networking(Neutron) &ONOS(Open Network Operating System) - http://www.meetup.com/sdneer/events/219249488/ * Sunday, December 21 in Beijing, CN: OpenStack???? - http://www.meetup.com/China-OpenStack-User-Group/events/219206776/ -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From rdo-info at redhat.com Mon Dec 15 21:10:26 2014 From: rdo-info at redhat.com (RDO Forum) Date: Mon, 15 Dec 2014 21:10:26 +0000 Subject: [Rdo-list] [RDO] Blog roundup, week of December 8th Message-ID: <0000014a4fc9b4d6-13fd35ab-4a9a-4e39-b306-0508240ba13f-000000@email.amazonses.com> rbowen started a discussion. Blog roundup, week of December 8th --- Follow the link below to check it out: https://openstack.redhat.com/forum/discussion/996/blog-roundup-week-of-december-8th Have a great day! From contact at progbau.de Tue Dec 16 04:46:35 2014 From: contact at progbau.de (Chris) Date: Tue, 16 Dec 2014 11:46:35 +0700 Subject: [Rdo-list] Upgrade from Icehouse to Juno on Centos 6.5 Message-ID: <000001d018eb$47e14070$d7a3c150$@progbau.de> Hello We have an relatively big OpenStack setup (> 150 Compute Nodes) based on CentOS 6.5 with the RDO Icehouse release. We now considering an upgrade to Juno, is there a best practice out there how to do it and how is it with the depending CentOS upgrade to 6.6 or even 7.0? Any help is appreciated! 
Thanks, Chris -------------- next part -------------- An HTML attachment was scrubbed... URL:

From cristi.falcas at gmail.com Tue Dec 16 07:47:10 2014 From: cristi.falcas at gmail.com (Cristian Falcas) Date: Tue, 16 Dec 2014 09:47:10 +0200 Subject: [Rdo-list] Upgrade from Icehouse to Juno on Centos 6.5 In-Reply-To: <000001d018eb$47e14070$d7a3c150$@progbau.de> References: <000001d018eb$47e14070$d7a3c150$@progbau.de> Message-ID:

I think it will be better to redeploy the compute nodes from scratch. Juno is supported on el7 only.

>From my experience with upgrading from 6 to 7, it's a lot of hassle, because the upgrade process only takes care of the base programs. After the upgrade you will have to manually upgrade/downgrade/remove the packages that were not updated.

Best regards, Cristian Falcas

On Tue, Dec 16, 2014 at 6:46 AM, Chris wrote: > Hello > > > > We have an relatively big OpenStack setup (> 150 Compute Nodes) based on > CentOS 6.5 with the RDO Icehouse release. > > We now considering an upgrade to Juno, is there a best practice out there > how to do it and how is it with the depending CentOS upgrade to 6.6 or even > 7.0? > > > > Any help is appreciated! > > > > Thanks, > > Chris > > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list >

From contact at progbau.de Tue Dec 16 09:59:35 2014 From: contact at progbau.de (Chris) Date: Tue, 16 Dec 2014 16:59:35 +0700 Subject: [Rdo-list] Upgrade from Icehouse to Juno on Centos 6.5 In-Reply-To: References: <000001d018eb$47e14070$d7a3c150$@progbau.de> Message-ID: <045f01d01917$01a70690$04f513b0$@progbau.de>

Hi, What about the Instances running on the Compute Nodes? It's totally not an option to "lose" the existing Instances.
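For reference, the usual way to empty a compute node before rebuilding it looks roughly like the sketch below. This assumes live migration is already configured (shared storage or block migration) and that the Juno-era python-novaclient's `host-evacuate-live` subcommand is available; the hostname is an example.

```shell
# Keep the scheduler from placing new instances on the node being retired.
nova service-disable compute-01 nova-compute

# See which instances are still running there.
nova list --host compute-01 --all-tenants

# Live-migrate everything off the node; instances keep running elsewhere.
nova host-evacuate-live compute-01
```

Once the node is empty it can be reinstalled on el7 and re-added as a fresh Juno compute node.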
-----Original Message----- From: Cristian Falcas [mailto:cristi.falcas at gmail.com] Sent: Tuesday, December 16, 2014 14:47 To: Chris Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] Upgrade from Icehouse to Juno on Centos 6.5

I think it will be better to redeploy the compute nodes from scratch. Juno is supported on el7 only. >From my experience with upgrading from 6 to 7, it's a lot of hassle, because the upgrade process only takes care of base the programs. After the upgrade you will have to upgrade/downgrade/remove manually the packages that where not updated. Best regards, Cristian Falcas

On Tue, Dec 16, 2014 at 6:46 AM, Chris wrote: > Hello > > > > We have an relatively big OpenStack setup (> 150 Compute Nodes) based > on CentOS 6.5 with the RDO Icehouse release. > > We now considering an upgrade to Juno, is there a best practice out > there how to do it and how is it with the depending CentOS upgrade to > 6.6 or even 7.0? > > > > Any help is appreciated! > > > > Thanks, > > Chris > > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list >

From bderzhavets at hotmail.com Tue Dec 16 10:06:59 2014 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Tue, 16 Dec 2014 05:06:59 -0500 Subject: [Rdo-list] How much unstable is https://github.com/stackforge/nova-docker.git ? In-Reply-To: <20141215100640.GC19788@tesla.redhat.com> References: <20141215100640.GC19788@tesla.redhat.com> Message-ID:

Check out the suggestion in https://ask.openstack.org/en/question/49874/nova-docker-issue-import-error-no-module-named-i18n/ by Lars Kellogg-Stedman:

pip install -e git+https://github.com/stackforge/nova-docker#egg=novadocker
cd src/novadocker/
git checkout -b pre-i18n 9045ca43b645e72751099491bf5f4f9e4bddbb91
python setup.py install

works fine.
However, python-oslo-i18n-1.0.0-1.el7.centos.noarch is installed on CentOS 7, so I would expect a more recent commit in master, or another branch, suitable for building the nova-docker driver on Juno. Attempting to clone the whole tree left the system unable to start containers on Juno. I also had no luck with commit 3fd99f8516f890b45b928b0bce4439bb003c0bb1. Thanks, Boris. -------------- next part -------------- An HTML attachment was scrubbed... URL:

From cristi.falcas at gmail.com Tue Dec 16 12:14:50 2014 From: cristi.falcas at gmail.com (Cristian Falcas) Date: Tue, 16 Dec 2014 14:14:50 +0200 Subject: [Rdo-list] Upgrade from Icehouse to Juno on Centos 6.5 In-Reply-To: <045f01d01917$01a70690$04f513b0$@progbau.de> References: <000001d018eb$47e14070$d7a3c150$@progbau.de> <045f01d01917$01a70690$04f513b0$@progbau.de> Message-ID:

You should move them to other compute nodes: http://docs.openstack.org/openstack-ops/content/maintenance.html

On Tue, Dec 16, 2014 at 11:59 AM, Chris wrote: > Hi, > > What about the Instances running on the Compute Nodes? It's totally not an option to "lose" the existing Instances. > > -----Original Message----- > From: Cristian Falcas [mailto:cristi.falcas at gmail.com] > Sent: Tuesday, December 16, 2014 14:47 > To: Chris > Cc: rdo-list at redhat.com > Subject: Re: [Rdo-list] Upgrade from Icehouse to Juno on Centos 6.5 > > I think it will be better to redeploy the compute nodes from scratch. > Juno is supported on el7 only. > > From my experience with upgrading from 6 to 7, it's a lot of hassle, because the upgrade process only takes care of base the programs. > After the upgrade you will have to upgrade/downgrade/remove manually the packages that where not updated. > > Best regards, > Cristian Falcas > > On Tue, Dec 16, 2014 at 6:46 AM, Chris wrote: >> Hello >> >> >> >> We have an relatively big OpenStack setup (> 150 Compute Nodes) based >> on CentOS 6.5 with the RDO Icehouse release.
>> >> We now considering an upgrade to Juno, is there a best practice out >> there how to do it and how is it with the depending CentOS upgrade to >> 6.6 or even 7.0? >> >> >> >> Any help is appreciated! >> >> >> >> Thanks, >> >> Chris >> >> >> >> >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> > >

From rbowen at redhat.com Tue Dec 16 14:27:19 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 16 Dec 2014 09:27:19 -0500 Subject: [Rdo-list] Fwd: [openstack-community] [Call for Speaker, Sponsor, Participants] OpenStack Day in Korea - Feb. 05, 2015 In-Reply-To: References: Message-ID: <54904147.5090600@redhat.com>

For those of you near Seoul, you'll want to put this on your calendar. --Rich

-------- Forwarded Message -------- Subject: [openstack-community] [Call for Speaker, Sponsor, Participants] OpenStack Day in Korea - Feb. 05, 2015 Date: Tue, 16 Dec 2014 13:40:06 +0900 From: Jaesuk Ahn To: Community User Groups, OpenStack

As it appeared in this week's OpenStack weekly newsletter, the OpenStack Korea User Group is holding its OpenStack Day in Korea event. This is our 2nd annual OpenStack event in Korea. We are looking for Speakers, Sponsors, and Grand Challenge Participants. Anyone can apply for anything. It will be fun and interesting to join us in Seoul, Korea.

- Date: Feb. 05, 2015
- Title: Beyond OpenStack: Services, Applications, and Platforms
- Location: Seoul, Korea
- Expected Participants: 800 ~ 1,000
- Call for Speakers: Please fill in the following form to apply as a speaker (https://docs.google.com/forms/d/1rXADrkePwXhbhY6Pb0i-utYuI5-GmFhhhhqy_MMMmA8/viewform?c=0&w=1)
- Call for Sponsors: Please send me an email if you are interested in sponsoring the event. (https://drive.google.com/file/d/0BxfgJpcSBi5aaTE5MERaN3F3ekE/view?usp=sharing)
- Call for Grand Challenge Participants: We are looking for participants for the OpenStack Automation Grand Challenge Program. It is simply a contest to automatically deploy OpenStack Juno. A more detailed program will be posted soon. Please contact me if you are interested in participating in this really fun challenge.

Thank you!

-- *Jaesuk Ahn*, Ph.D. ... active member of OpenStack Community... -------------- next part -------------- _______________________________________________ Community mailing list Community at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/community

From dmitry at athabascau.ca Tue Dec 16 14:33:50 2014 From: dmitry at athabascau.ca (Dmitry Makovey) Date: Tue, 16 Dec 2014 07:33:50 -0700 Subject: [Rdo-list] dnsmasq: failed to set SO_REUSE{ADDR|PORT} on DHCP socket: Protocol not available In-Reply-To: <548AE5A3.50204@redhat.com> References: <548A1BFC.4020102@athabascau.ca> <548A3DD0.7040509@athabascau.ca> <548AE5A3.50204@redhat.com> Message-ID: <549042CE.7090706@athabascau.ca>

On 12/12/2014 05:54 AM, Ihar Hrachyshka wrote: >>> My current dnsmasq set is: >>> >>> # rpm -qa | grep dnsmasq dnsmasq-utils-2.48-14.el6.x86_64 >>> dnsmasq-2.48-14.el6.x86_64 > >> after downgrading packages to 2.48-13 and restarting services looks >> like things are back under control... > > This sounds like a bug. Can you report it?

sure thing - I'll be re-installing our environment shortly which will give me a chance to reproduce the behaviour. Once confirmed I'll file the bug.

>> interestingly enough nobody mentioned that "conntrac-tools" needs >> to be installed... ;) > > There was an issue in upstream backporting process when a patch that > introduced that new dependency sneaked into upstream Icehouse stable > branch. > > The issue was mentioned in release notes: > https://wiki.openstack.org/wiki/ReleaseNotes/2014.1.3#Known_Issues_and_Limitations

thanks for the pointer!
> The patch that introduced that runtime dependency was reverted in > upstream and will be released for the next (2014.1.4) Icehouse release.

Does that mean that Juno has no such dependency?

-- Dmitry Makovey Web Systems Administrator Athabasca University (780) 675-6245 --- Confidence is what you have before you understand the problem Woody Allen When in trouble when in doubt run in circles scream and shout http://www.wordwizard.com/phpbb3/viewtopic.php?f=16&t=19330 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: OpenPGP digital signature URL:

From ihrachys at redhat.com Tue Dec 16 14:44:24 2014 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Tue, 16 Dec 2014 15:44:24 +0100 Subject: [Rdo-list] dnsmasq: failed to set SO_REUSE{ADDR|PORT} on DHCP socket: Protocol not available In-Reply-To: <549042CE.7090706@athabascau.ca> References: <548A1BFC.4020102@athabascau.ca> <548A3DD0.7040509@athabascau.ca> <548AE5A3.50204@redhat.com> <549042CE.7090706@athabascau.ca> Message-ID: <54904548.9040601@redhat.com>

-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512

On 16/12/14 15:33, Dmitry Makovey wrote: >>> The patch that introduced that runtime dependency was reverted >>> in upstream and will be released for the next (2014.1.4) >>> Icehouse release. > Does that mean that Juno has no such dependency?

No, it's relevant for Icehouse only. Juno release still requires conntrack-tools for floating IP connection termination.
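For context on what that dependency is used for: when a floating IP is disassociated, the L3 agent clears stale NAT connection-tracking state inside the router namespace so established flows don't keep using the old mapping. The effect is roughly the following sketch (the namespace name and address are made-up examples; the real invocation lives inside neutron-l3-agent):

```shell
# Delete conntrack entries whose destination was the (now removed)
# floating IP, inside the router's network namespace. Example IDs only.
ip netns exec qrouter-3c9f0a2b-0000-example \
    conntrack -D -d 203.0.113.10
```

Without conntrack-tools installed, that cleanup step fails and old connections can linger on the stale NAT entry.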
/Ihar -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) iQEcBAEBCgAGBQJUkEVIAAoJEC5aWaUY1u57I1wH/ixzEdBdGn07+EvcLCYIjz8P sUrL15Dyr51bXu9lvxFG00KUGtHva52t2u8kziZVCObflzyo4nzqoWxp7NaLM0Y8 HI/+R/orLv+EQ0k5FhplX2dIjjdVw1HwW69IyCD8dUbP9uOSBIQQlr17I72ZaFGb +vtRa0oKn5Wr8Zn4zsPq3xY+tuY1NiNaPqmuR5JA+LbRAU8xqcqwIFzu4vzJF9KV 9SqJ6lcw4OngYXK9mQ+aDJH+ex+bC6v3rMq8fqBjiewnQdsCf7otyI4uSbORtrQw QwPnsdg9ruzN2dKLvFJerGCjrNR9qz6d2exRLcQ+aVFRFmLsrr1VnmpwhdZFTKY= =Moe4 -----END PGP SIGNATURE-----

From rbowen at redhat.com Tue Dec 16 16:11:24 2014 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 16 Dec 2014 11:11:24 -0500 Subject: [Rdo-list] RDO meetup at FOSDEM? Message-ID: <549059AC.4080807@redhat.com>

In Paris, we had a great meeting of RDO enthusiasts, to talk about community involvement and related issues. However, we were in a very loud place, and most people found it very difficult to hear what was going on.

Will there be enough people at FOSDEM to try to do this again there? It should be a little easier to get a room than it was in Paris, so we shouldn't have to shout quite so loud to be heard.

Anyone interested enough to set aside time for this? (I know how crazy busy FOSDEM can be.)

-- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/

From kchamart at redhat.com Tue Dec 16 16:25:28 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 16 Dec 2014 17:25:28 +0100 Subject: [Rdo-list] RDO meetup at FOSDEM? In-Reply-To: <549059AC.4080807@redhat.com> References: <549059AC.4080807@redhat.com> Message-ID: <20141216162528.GC9012@tesla.redhat.com>

On Tue, Dec 16, 2014 at 11:11:24AM -0500, Rich Bowen wrote: > In Paris, we had a great meeting of RDO enthusiasts, to talk about > community involvement and related issues. However, we were in a very > loud place, and most people found it very difficult to hear what was > going on. > > Will there be enough people at FOSDEM to try to do this again there?
> It should be a little easier to get a room than it was in Paris, so we > shouldn't have to shout quite so loud to be heard.

Little easier? I have a feeling that the FOSDEM venue is getting more crowded than ever.

> Anyone interested enough to set aside time for this. (I know how crazy > busy FOSDEM can be.)

I'll be at FOSDEM.

-- /kashyap

From hguemar at fedoraproject.org Tue Dec 16 16:31:21 2014 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Tue, 16 Dec 2014 17:31:21 +0100 Subject: [Rdo-list] RDO meetup at FOSDEM? In-Reply-To: <20141216162528.GC9012@tesla.redhat.com> References: <549059AC.4080807@redhat.com> <20141216162528.GC9012@tesla.redhat.com> Message-ID:

I should be present too (as every year). Regards, H.

From teclus13 at gmail.com Tue Dec 16 18:13:51 2014 From: teclus13 at gmail.com (Teclus Dsouza) Date: Tue, 16 Dec 2014 23:43:51 +0530 Subject: [Rdo-list] [rdo-list] Issue during setup of OpenStack using Packstack with RHEL 7.0 (Maipo) Message-ID:

Hello Team, I was trying to set up *OpenStack* on a Virtual Machine using the instructions from the URL below

https://openstack.redhat.com/Quickstart

But after the # packstack --allinone it gives errors on some dependencies for which I am not able to find any solution on the workarounds URL. I have attached the log file.

[root at rhel-rdo ~]# uname -a Linux rhel-rdo.td.com 3.10.0-123.13.1.el7.x86_64 #1 SMP Tue Nov 4 10:16:51 EST 2014 x86_64 x86_64 x86_64 GNU/Linux

Also I am not able to connect to the URL http://192.168.45.131/dashboard, which gives the following error:

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [RocketTab] The socket connection to 192.168.45.131 failed. ErrorCode: 10060. A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 192.168.45.131:80 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

I would need some guidance to resolve these issues and help me deploy RHEL OpenStack.
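One practical way to make packstack runs like this easier to debug and to repeat is to drive the install from a generated answer file instead of plain `--allinone` — a sketch (the file path is arbitrary, and disabling Ceilometer is just an example tweak):

```shell
# Generate an editable answer file capturing every install option.
packstack --gen-answer-file=/root/answers.txt

# Example tweak: skip a component whose packages are causing dependency
# errors (here, Ceilometer), by editing the generated file in place.
sed -i 's/^CONFIG_CEILOMETER_INSTALL=.*/CONFIG_CEILOMETER_INSTALL=n/' /root/answers.txt

# Re-run the install from the same file so each attempt is reproducible.
packstack --answer-file=/root/answers.txt
```

The answer file also records where a failed run left off, which makes it easier to share the exact configuration when asking for help on the list.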
Kindly revert back if you need any further details. Regards Teclus Dsouza [Systems Engineer] teclus13 at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: openstack-setup.log Type: application/octet-stream Size: 8193 bytes Desc: not available URL: From ALLAN.L.ST.GEORGE at leidos.com Tue Dec 16 21:40:03 2014 From: ALLAN.L.ST.GEORGE at leidos.com (St. George, Allan L. II) Date: Tue, 16 Dec 2014 21:40:03 +0000 Subject: [Rdo-list] [rdo-list] Issue during setup of OpenStack using Packstack with RHEL 7.0 (Maipo) In-Reply-To: References: Message-ID: Did you setup all the required repositories prior to installation? https://openstack.redhat.com/Repositories V/R, Allan L. St. George Integration Engineer Leidos ________________________________________ From: rdo-list-bounces at redhat.com [rdo-list-bounces at redhat.com] on behalf of Teclus Dsouza [teclus13 at gmail.com] Sent: Tuesday, December 16, 2014 1:13 PM To: rdo-list at redhat.com Cc: Rajagopalan Varadan Subject: [Rdo-list] [rdo-list] Issue during setup of OpenStack using Packstack with RHEL 7.0 (Maipo) Hello Team, I was trying to setup Openstack on a Virtual Machine using the instructions from below url https://openstack.redhat.com/Quickstart But after the # packstack --allinone it gives Errors on some dependencies for which I am not able to find any solution on the workarounds URL. I have attached the log file [root at rhel-rdo ~]# uname -a Linux rhel-rdo.td.com 3.10.0-123.13.1.el7.x86_64 #1 SMP Tue Nov 4 10:16:51 EST 2 014 x86_64 x86_64 x86_64 GNU/Linux Also I am not able to connect to the URL http://192.168.45.131/dashboard and gives the following Error >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [RocketTab] The socket connection to 192.168.45.131 failed. ErrorCode: 10060. 
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 192.168.45.131:80 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I would need some guidance to resolve these issues and help me deploy RHEL Openstack. Kindly revert back if you need any further details. Regards Teclus Dsouza [Systems Engineer] teclus13 at gmail.com From ak at cloudssky.com Tue Dec 16 22:00:21 2014 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Tue, 16 Dec 2014 23:00:21 +0100 Subject: [Rdo-list] [rdo-list] Issue during setup of OpenStack using Packstack with RHEL 7.0 (Maipo) In-Reply-To: References: Message-ID: Hi Teclus, Perhaps this Single Line Installer might be of help: http://cloudssky.com/en/blog/OpenStack-RDO-AIO-Single-Line-Installer/ And I guess on RHEL also you've to set CONFIG_CEILOMETER_INSTALL=n in your answer file. Best, Arash On Tue, Dec 16, 2014 at 10:40 PM, St. George, Allan L. II < ALLAN.L.ST.GEORGE at leidos.com> wrote: > > Did you setup all the required repositories prior to installation? > > https://openstack.redhat.com/Repositories > > > V/R, > > Allan L. St. George > Integration Engineer > Leidos > ________________________________________ > From: rdo-list-bounces at redhat.com [rdo-list-bounces at redhat.com] on behalf > of Teclus Dsouza [teclus13 at gmail.com] > Sent: Tuesday, December 16, 2014 1:13 PM > To: rdo-list at redhat.com > Cc: Rajagopalan Varadan > Subject: [Rdo-list] [rdo-list] Issue during setup of OpenStack using > Packstack with RHEL 7.0 (Maipo) > > Hello Team, > > I was trying to setup Openstack on a Virtual Machine using the > instructions from below url > > https://openstack.redhat.com/Quickstart > > But after the # packstack --allinone it gives Errors on some dependencies > for which I am not able to find any solution on the workarounds URL. 
> > I have attached the log file > > [root at rhel-rdo ~]# uname -a > Linux rhel-rdo.td.com 3.10.0-123.13.1.el7.x86_64 > #1 SMP Tue Nov 4 10:16:51 EST 2 014 x86_64 > x86_64 x86_64 GNU/Linux > > Also I am not able to connect to the URL http://192.168.45.131/dashboard > and gives the following Error > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> > [RocketTab] The socket connection to 192.168.45.131 failed. > ErrorCode: 10060. > A connection attempt failed because the connected party did not properly > respond after a period of time, or established connection failed because > connected host has failed to respond 192.168.45.131:80< > http://192.168.45.131:80> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> > > I would need some guidance to resolve these issues and help me deploy RHEL > Openstack. > > Kindly revert back if you need any further details. > > Regards > Teclus Dsouza > [Systems Engineer] > teclus13 at gmail.com > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Wed Dec 17 09:50:05 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 17 Dec 2014 10:50:05 +0100 Subject: [Rdo-list] RDO Bug triage day tomorrow [16DEC2014] In-Reply-To: <20141215100640.GC19788@tesla.redhat.com> References: <20141215100640.GC19788@tesla.redhat.com> Message-ID: <20141217095005.GG9012@tesla.redhat.com> On Mon, Dec 15, 2014 at 11:06:40AM +0100, Kashyap Chamarthy wrote: > Heya, > > Tomorrow (3rd Tuesday of the month) is the 'official' RDO bug triage > day. If you have some spare cycles, please join us in helping triage > bugs/root-cause analysis in your area of expertise. Here's some details > to get started[1] with bug triaging. 
> > Briefly, current state[*] of RDO bugs as of today: > > - NEW, ASSIGNED, ON_DEV: 205 > - MODIFIED, POST, ON_QA: 149

After the triage day, the above numbers look like this:

- NEW, ASSIGNED, ON_DEV: 198
- MODIFIED, POST, ON_QA: 137

So roughly 17 bugs were closed. It's worth bearing in mind that the above doesn't capture everything: some bugs need follow-up/needinfo from reporters, some are waiting for a fix to be reviewed by other engineers, etc. For instance, there are about 20 bugs[1] that are in NEEDINFO as I write this, that also needed some triage analysis.

I volunteered to look at Nova bugs and tried to segregate actual Nova bugs -- that resulted in this[2] (14 NEW bugs, 5 ASSIGNED and in-progress). Here too, some bugs are in "NEW" but they have gone through some debugging/testing to narrow down root cause, so that isn't reflected in the numbers either. You get the drift. :-)

Thanks to Alan Pevec and others, who have helped triage.

[1] https://kashyapc.fedorapeople.org/virt/openstack/rdo-bug-status/rdo-bugs-in-needinfo-state-17DEC2014.txt [2] https://kashyapc.fedorapeople.org/virt/openstack/rdo-bug-status/nova-rdo-bugs-16DEC2014.txt [3] All RDO bugs as of 17DEC2014 -- https://kashyapc.fedorapeople.org/virt/openstack/rdo-bug-status/rdo-bugs-in-needinfo-state-17DEC2014.txt

-- /kashyap

From teclus13 at gmail.com Wed Dec 17 14:19:30 2014 From: teclus13 at gmail.com (Teclus Dsouza) Date: Wed, 17 Dec 2014 19:49:30 +0530 Subject: [Rdo-list] [rdo-list] Issue during setup of OpenStack using Packstack with RHEL 7.0 (Maipo) In-Reply-To: References: Message-ID:

Hello George, Thanks, this helped to get past the error, but I am getting stuck when it's installing horizon.pp

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 192.168.45.131_horizon.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.168.45.131_horizon.pp Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-dashboard' returned 1: Error downloading packages: You will find
full trace in log /var/tmp/packstack/20141217-032323-VmUBXr/manifests/192.168.45.131_horizon.pp.log Please check log file /var/tmp/packstack/20141217-032323-VmUBXr/openstack-setup.log for more information >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Any help on how to get this fixed. Regards Teclus Dsouza On Wed, Dec 17, 2014 at 3:10 AM, St. George, Allan L. II < ALLAN.L.ST.GEORGE at leidos.com> wrote: > > Did you setup all the required repositories prior to installation? > > https://openstack.redhat.com/Repositories > > > V/R, > > Allan L. St. George > Integration Engineer > Leidos > ________________________________________ > From: rdo-list-bounces at redhat.com [rdo-list-bounces at redhat.com] on behalf > of Teclus Dsouza [teclus13 at gmail.com] > Sent: Tuesday, December 16, 2014 1:13 PM > To: rdo-list at redhat.com > Cc: Rajagopalan Varadan > Subject: [Rdo-list] [rdo-list] Issue during setup of OpenStack using > Packstack with RHEL 7.0 (Maipo) > > Hello Team, > > I was trying to setup Openstack on a Virtual Machine using the > instructions from below url > > https://openstack.redhat.com/Quickstart > > But after the # packstack --allinone it gives Errors on some dependencies > for which I am not able to find any solution on the workarounds URL. > > I have attached the log file > > [root at rhel-rdo ~]# uname -a > Linux rhel-rdo.td.com 3.10.0-123.13.1.el7.x86_64 > #1 SMP Tue Nov 4 10:16:51 EST 2 014 x86_64 > x86_64 x86_64 GNU/Linux > > Also I am not able to connect to the URL http://192.168.45.131/dashboard > and gives the following Error > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> > [RocketTab] The socket connection to 192.168.45.131 failed. > ErrorCode: 10060. 
> A connection attempt failed because the connected party did not properly > respond after a period of time, or established connection failed because > connected host has failed to respond 192.168.45.131:80< > http://192.168.45.131:80> > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> > > I would need some guidance to resolve these issues and help me deploy RHEL > Openstack. > > Kindly revert back if you need any further details. > > Regards > Teclus Dsouza > [Systems Engineer] > teclus13 at gmail.com > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 192.168.45.131_horizon.pp.log Type: application/octet-stream Size: 27619 bytes Desc: not available URL:

From teclus13 at gmail.com Wed Dec 17 14:26:45 2014 From: teclus13 at gmail.com (Teclus Dsouza) Date: Wed, 17 Dec 2014 19:56:45 +0530 Subject: [Rdo-list] [rdo-list] Issue during setup of OpenStack using Packstack with RHEL 7.0 (Maipo) In-Reply-To: References: Message-ID:

Arash, Can this Single Line Installer be run only with CentOS, or also with Red Hat 7? And can you point me to where the answer file is located?

Regards Teclus Dsouza

On Wed, Dec 17, 2014 at 3:30 AM, Arash Kaffamanesh wrote: > > Hi Teclus, > > Perhaps this Single Line Installer might be of help: > http://cloudssky.com/en/blog/OpenStack-RDO-AIO-Single-Line-Installer/ > > And I guess on RHEL also you've to set CONFIG_CEILOMETER_INSTALL=n in your > answer file. > > Best, > Arash > > > On Tue, Dec 16, 2014 at 10:40 PM, St. George, Allan L. II < > ALLAN.L.ST.GEORGE at leidos.com> wrote: > >> Did you setup all the required repositories prior to installation? >> >> https://openstack.redhat.com/Repositories >> >> >> V/R, >> >> Allan L. St.
George >> Integration Engineer >> Leidos >> ________________________________________ >> From: rdo-list-bounces at redhat.com [rdo-list-bounces at redhat.com] on >> behalf of Teclus Dsouza [teclus13 at gmail.com] >> Sent: Tuesday, December 16, 2014 1:13 PM >> To: rdo-list at redhat.com >> Cc: Rajagopalan Varadan >> Subject: [Rdo-list] [rdo-list] Issue during setup of OpenStack using >> Packstack with RHEL 7.0 (Maipo) >> >> Hello Team, >> >> I was trying to setup Openstack on a Virtual Machine using the >> instructions from below url >> >> https://openstack.redhat.com/Quickstart >> >> But after the # packstack --allinone it gives Errors on some >> dependencies for which I am not able to find any solution on the >> workarounds URL. >> >> I have attached the log file >> >> [root at rhel-rdo ~]# uname -a >> Linux rhel-rdo.td.com 3.10.0-123.13.1.el7.x86_64 >> #1 SMP Tue Nov 4 10:16:51 EST 2 014 x86_64 >> x86_64 x86_64 GNU/Linux >> >> Also I am not able to connect to the URL http://192.168.45.131/dashboard >> and gives the following Error >> >> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >> [RocketTab] The socket connection to 192.168.45.131 failed. >> ErrorCode: 10060. >> A connection attempt failed because the connected party did not properly >> respond after a period of time, or established connection failed because >> connected host has failed to respond 192.168.45.131:80< >> http://192.168.45.131:80> >> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >> >> I would need some guidance to resolve these issues and help me deploy >> RHEL Openstack. >> >> Kindly revert back if you need any further details. >> >> Regards >> Teclus Dsouza >> [Systems Engineer] >> teclus13 at gmail.com >> >> >> >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mrunge at redhat.com Wed Dec 17 16:01:49 2014 From: mrunge at redhat.com (Matthias Runge) Date: Wed, 17 Dec 2014 17:01:49 +0100 Subject: [Rdo-list] [rdo-list] Issue during setup of OpenStack using Packstack with RHEL 7.0 (Maipo) In-Reply-To: References: Message-ID: <20141217160149.GA4155@turing.berg.ol> On Wed, Dec 17, 2014 at 07:49:30PM +0530, Teclus Dsouza wrote: > Hello George, > > Thanks this helped to get past the error but I am getting stuck when its > installing horizon.pp > > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> > 192.168.45.131_horizon.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 192.168.45.131_horizon.pp > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-dashboard' > returned 1: Error downloading packages: > You will find full trace in log > /var/tmp/packstack/20141217-032323-VmUBXr/manifests/192.168.45.131_horizon.pp.log > Please check log file > /var/tmp/packstack/20141217-032323-VmUBXr/openstack-setup.log for more > information > >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> > > Any help on how to get this fixed. To get this debugged, please provide us either a log or (better): try on the host: yum install openstack-dashboard and provide us the trace. -- Matthias Runge From mrunge at redhat.com Wed Dec 17 16:05:36 2014 From: mrunge at redhat.com (Matthias Runge) Date: Wed, 17 Dec 2014 17:05:36 +0100 Subject: [Rdo-list] RDO meetup at FOSDEM? In-Reply-To: <20141216162528.GC9012@tesla.redhat.com> References: <549059AC.4080807@redhat.com> <20141216162528.GC9012@tesla.redhat.com> Message-ID: <20141217160536.GB4155@turing.berg.ol> On Tue, Dec 16, 2014 at 05:25:28PM +0100, Kashyap Chamarthy wrote: > > Will there be enough people at FOSDEM to try to do this again there? > > It should be a little easier to get a room than it was in Paris, so we > > shouldn't have to shout quite so loud to be heard. > I'll be at FOSDEM. > I'll be there, too. 
There is an IaaS devroom announced for Saturday and a Virtualization devroom for Sunday; maybe Sunday is better to meet? Matthias -- Matthias Runge From mrunge at redhat.com Wed Dec 17 16:11:42 2014 From: mrunge at redhat.com (Matthias Runge) Date: Wed, 17 Dec 2014 17:11:42 +0100 Subject: [Rdo-list] [rdo-list] Issue during setup of OpenStack using Packstack with RHEL 7.0 (Maipo) In-Reply-To: References: Message-ID: <20141217161142.GC4155@turing.berg.ol> On Wed, Dec 17, 2014 at 07:49:30PM +0530, Teclus Dsouza wrote: > Any help on how to get this fixed? > Ah, you attached the log file. It seems there were issues with the package repository. Key information was: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-dashboard' returned 1: Error downloading packages: python-heatclient-0.2.12-2.el7.centos.noarch: [Errno 256] No more mirrors to try. openstack-dashboard-2014.2.1-1.el7.centos.noarch: [Errno 256] No more mirrors to try. Those packages are included in [1]. Please retry. Matthias [1] https://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/ -- Matthias Runge From mdorman at godaddy.com Wed Dec 17 16:29:05 2014 From: mdorman at godaddy.com (Michael Dorman) Date: Wed, 17 Dec 2014 16:29:05 +0000 Subject: [Rdo-list] [Openstack] Upgrade from Icehouse to Juno on Centos 6.5 In-Reply-To: <000001d018eb$47e14070$d7a3c150$@progbau.de> References: <000001d018eb$47e14070$d7a3c150$@progbau.de> Message-ID: <87060BC9-0047-4838-B6F0-E7A20E079B41@godaddy.com> Hi Chris, We haven't yet gone to Juno, but in preparation for that we've been upgrading to CentOS 7. I have been using the CentOS Upgrade Tool, the process described here: http://wiki.centos.org/TipsAndTricks/CentOSUpgradeTool It's time consuming and causes 45-60min of downtime per server. But once it's done its thing, it seems to be working well.
Mike From: Chris > Date: Monday, December 15, 2014 at 9:46 PM To: "rdo-list at redhat.com" > Subject: [Openstack] Upgrade from Icehouse to Juno on Centos 6.5 Hello We have a relatively big OpenStack setup (> 150 Compute Nodes) based on CentOS 6.5 with the RDO Icehouse release. We are now considering an upgrade to Juno, is there a best practice out there for how to do it and how is it with the corresponding CentOS upgrade to 6.6 or even 7.0? Any help is appreciated! Thanks, Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From cristi.falcas at gmail.com Wed Dec 17 19:16:02 2014 From: cristi.falcas at gmail.com (Cristian Falcas) Date: Wed, 17 Dec 2014 21:16:02 +0200 Subject: [Rdo-list] [Openstack] Upgrade from Icehouse to Juno on Centos 6.5 In-Reply-To: <87060BC9-0047-4838-B6F0-E7A20E079B41@godaddy.com> References: <000001d018eb$47e14070$d7a3c150$@progbau.de> <87060BC9-0047-4838-B6F0-E7A20E079B41@godaddy.com> Message-ID: Well, I used the same tool, but, like I said, after the upgrade I had to do a lot of cleanup because of mixed packages (el6 and el7). I'm glad to see that it's working for you. On Wed, Dec 17, 2014 at 6:29 PM, Michael Dorman wrote: > Hi Chris, > > We haven't yet gone to Juno, but in preparation for that we've been > upgrading to CentOS 7. I have been using the CentOS Upgrade Tool, the > process described here: > http://wiki.centos.org/TipsAndTricks/CentOSUpgradeTool > > It's time consuming and causes 45-60min of downtime per server. But once > it's done its thing, it seems to be working well. > > Mike > > > From: Chris > Date: Monday, December 15, 2014 at 9:46 PM > To: "rdo-list at redhat.com" > Subject: [Openstack] Upgrade from Icehouse to Juno on Centos 6.5 > > Hello > > > > We have a relatively big OpenStack setup (> 150 Compute Nodes) based on > CentOS 6.5 with the RDO Icehouse release.
> > We are now considering an upgrade to Juno, is there a best practice out there > how to do it and how is it with the corresponding CentOS upgrade to 6.6 or even > 7.0? > > > > Any help is appreciated! > > > > Thanks, > > Chris > > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > From rjones at redhat.com Wed Dec 17 19:39:34 2014 From: rjones at redhat.com (Richard W.M. Jones) Date: Wed, 17 Dec 2014 19:39:34 +0000 Subject: [Rdo-list] Why is a bug fixed in RHOS/RHEL but not in Rawhide? Message-ID: <20141217193934.GA20790@redhat.com> https://bugzilla.redhat.com/show_bug.cgi?id=1132129 It looks as if this was fixed in RHOS 5 and upstream (16a766d81) back in August. I've just cloned this bug for Rawhide where it is still not fixed: https://bugzilla.redhat.com/show_bug.cgi?id=1175460 Surely bugs should be fixed first upstream, then in Rawhide, and then in RHOS? Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com Fedora Windows cross-compiler. Compile Windows programs, test, and build Windows installers. Over 100 libraries supported. http://fedoraproject.org/wiki/MinGW From mdorman at godaddy.com Wed Dec 17 19:49:30 2014 From: mdorman at godaddy.com (Michael Dorman) Date: Wed, 17 Dec 2014 19:49:30 +0000 Subject: [Rdo-list] [Openstack] Upgrade from Icehouse to Juno on Centos 6.5 In-Reply-To: References: <000001d018eb$47e14070$d7a3c150$@progbau.de> <87060BC9-0047-4838-B6F0-E7A20E079B41@godaddy.com> Message-ID: <834B73A5-C914-4B72-B7CB-02C86FEAB030@godaddy.com> Yep, I had the same experience. I ended up building an "after upgrade" script that did all the cleanup. It's definitely not perfect.
On 12/17/14, 7:16 PM, "Cristian Falcas" wrote: >Well, I used the same tool, but, like I said, after the upgrade I had >to do a lot of cleanup because of mixed packages (el6 and el7). > >I'm glad to see that it's working for you. > > >On Wed, Dec 17, 2014 at 6:29 PM, Michael Dorman >wrote: >> Hi Chris, >> >> We haven't yet gone to Juno, but in preparation for that we've been >> upgrading to CentOS 7. I have been using the CentOS Upgrade Tool, the >> process described here: >> http://wiki.centos.org/TipsAndTricks/CentOSUpgradeTool >> >> It's time consuming and causes 45-60min of downtime per server. But >>once >> it's done its thing, it seems to be working well. >> >> Mike >> >> >> From: Chris >> Date: Monday, December 15, 2014 at 9:46 PM >> To: "rdo-list at redhat.com" >> Subject: [Openstack] Upgrade from Icehouse to Juno on Centos 6.5 >> >> Hello >> >> >> >> We have a relatively big OpenStack setup (> 150 Compute Nodes) based on >> CentOS 6.5 with the RDO Icehouse release. >> >> We are now considering an upgrade to Juno, is there a best practice out >>there >> how to do it and how is it with the corresponding CentOS upgrade to 6.6 or >>even >> 7.0? >> >> >> >> Any help is appreciated! >> >> >> >> Thanks, >> >> Chris >> >> >> >> >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> From kchamart at redhat.com Wed Dec 17 19:57:40 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 17 Dec 2014 20:57:40 +0100 Subject: [Rdo-list] RDO meetup at FOSDEM?
In-Reply-To: <20141217160536.GB4155@turing.berg.ol> References: <549059AC.4080807@redhat.com> <20141216162528.GC9012@tesla.redhat.com> <20141217160536.GB4155@turing.berg.ol> Message-ID: <20141217195740.GA3420@tesla.redhat.com> On Wed, Dec 17, 2014 at 05:05:36PM +0100, Matthias Runge wrote: > On Tue, Dec 16, 2014 at 05:25:28PM +0100, Kashyap Chamarthy wrote: > > > Will there be enough people at FOSDEM to try to do this again there? > > > It should be a little easier to get a room than it was in Paris, so we > > > shouldn't have to shout quite so loud to be heard. > > > I'll be at FOSDEM. > > > I'll be there, too. > > There is an IaaS devroom announced for Saturday and a Virtualization > devroom for Sunday; maybe Sunday is better to meet? Yeah. FWIW, I'd likely be spending most of my time in the IaaS and Virt dev rooms. -- /kashyap From rbowen at redhat.com Wed Dec 17 20:12:16 2014 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 17 Dec 2014 15:12:16 -0500 Subject: [Rdo-list] RDO meetup at FOSDEM? In-Reply-To: <20141217195740.GA3420@tesla.redhat.com> References: <549059AC.4080807@redhat.com> <20141216162528.GC9012@tesla.redhat.com> <20141217160536.GB4155@turing.berg.ol> <20141217195740.GA3420@tesla.redhat.com> Message-ID: <5491E3A0.3090304@redhat.com> On 12/17/2014 02:57 PM, Kashyap Chamarthy wrote: > On Wed, Dec 17, 2014 at 05:05:36PM +0100, Matthias Runge wrote: >> On Tue, Dec 16, 2014 at 05:25:28PM +0100, Kashyap Chamarthy wrote: >>>> Will there be enough people at FOSDEM to try to do this again there? >>>> It should be a little easier to get a room than it was in Paris, so we >>>> shouldn't have to shout quite so loud to be heard. >> >>> I'll be at FOSDEM. >>> >> I'll be there, too. >> >> There is an IaaS devroom announced for Saturday and a Virtualization >> devroom for Sunday; maybe Sunday is better to meet? > > Yeah. FWIW, I'd likely be spending most of my time in the IaaS and Virt > dev rooms.
> > I will gladly defer to people who have more FOSDEM experience than I as to when/where it's better to meet. For what it's worth, I asked on #fosdem, and they said that the hacker rooms are first come, first served, register at the info desk onsite. So sounds pretty iffy. -- Rich Bowen - rbowen at redhat.com OpenStack Community Liaison http://openstack.redhat.com/ From kchamart at redhat.com Wed Dec 17 20:41:30 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 17 Dec 2014 21:41:30 +0100 Subject: [Rdo-list] Why is a bug fixed in RHOS/RHEL but not in Rawhide? In-Reply-To: <20141217193934.GA20790@redhat.com> References: <20141217193934.GA20790@redhat.com> Message-ID: <20141217204130.GC3420@tesla.redhat.com> On Wed, Dec 17, 2014 at 07:39:34PM +0000, Richard W.M. Jones wrote: > > https://bugzilla.redhat.com/show_bug.cgi?id=1132129 > > It looks as if this was fixed in RHOS 5 and upstream (16a766d81) back > in August. > > I've just cloned this bug for Rawhide where it is still not fixed: > > https://bugzilla.redhat.com/show_bug.cgi?id=1175460 > > Surely bugs should be fixed first upstream, then in Rawhide, and > then in RHOS? If I'd have to guess, it must have been just an innocuous mistake. (Added Ivan, who fixed the Packstack bug, to the thread.) -- /kashyap From mail-lists at karan.org Wed Dec 17 21:35:49 2014 From: mail-lists at karan.org (Karanbir Singh) Date: Wed, 17 Dec 2014 21:35:49 +0000 Subject: [Rdo-list] [Openstack] Upgrade from Icehouse to Juno on Centos 6.5 In-Reply-To: <834B73A5-C914-4B72-B7CB-02C86FEAB030@godaddy.com> References: <000001d018eb$47e14070$d7a3c150$@progbau.de> <87060BC9-0047-4838-B6F0-E7A20E079B41@godaddy.com> <834B73A5-C914-4B72-B7CB-02C86FEAB030@godaddy.com> Message-ID: <5491F735.9090304@karan.org> On 17/12/14 19:49, Michael Dorman wrote: > Yep, I had the same experience. I ended up building an "after upgrade" > script that did all the cleanup. It's definitely not perfect. > share?
- KB -- Karanbir Singh +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh GnuPG Key : http://www.karan.org/publickey.asc From sgordon at redhat.com Wed Dec 17 21:43:40 2014 From: sgordon at redhat.com (Steve Gordon) Date: Wed, 17 Dec 2014 16:43:40 -0500 (EST) Subject: [Rdo-list] Upgrade from Icehouse to Juno on Centos 6.5 In-Reply-To: References: <000001d018eb$47e14070$d7a3c150$@progbau.de> <045f01d01917$01a70690$04f513b0$@progbau.de> Message-ID: <973845264.326567.1418852620138.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Cristian Falcas" > To: "Chris" > > You should move them to other compute nodes: > > http://docs.openstack.org/openstack-ops/content/maintenance.html In relation to this, something to watch out for at the moment: https://bugs.launchpad.net/nova/+bug/1402813 > On Tue, Dec 16, 2014 at 11:59 AM, Chris wrote: > > Hi, > > > > What about the Instances running on the Compute Nodes? It's totally not an > > option to "lose" the existing Instances. > > > > -----Original Message----- > > From: Cristian Falcas [mailto:cristi.falcas at gmail.com] > > Sent: Tuesday, December 16, 2014 14:47 > > To: Chris > > Cc: rdo-list at redhat.com > > Subject: Re: [Rdo-list] Upgrade from Icehouse to Juno on Centos 6.5 > > > > I think it will be better to redeploy the compute nodes from scratch. > > Juno is supported on el7 only. > > > > From my experience with upgrading from 6 to 7, it's a lot of hassle, > > because the upgrade process only takes care of the base programs. > > After the upgrade you will have to upgrade/downgrade/remove manually the > > packages that were not updated. > > > > Best regards, > > Cristian Falcas > > > > On Tue, Dec 16, 2014 at 6:46 AM, Chris wrote: > >> Hello > >> > >> > >> > >> We have a relatively big OpenStack setup (> 150 Compute Nodes) based > >> on CentOS 6.5 with the RDO Icehouse release.
> >> > >> We are now considering an upgrade to Juno, is there a best practice out > >> there how to do it and how is it with the corresponding CentOS upgrade to > >> 6.6 or even 7.0? > >> > >> > >> > >> Any help is appreciated! > >> > >> > >> > >> Thanks, > >> > >> Chris > >> > >> > >> > >> > >> > >> > >> _______________________________________________ > >> Rdo-list mailing list > >> Rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -- Steve Gordon, RHCE Sr. Technical Product Manager, Red Hat Enterprise Linux OpenStack Platform From contact at progbau.de Thu Dec 18 03:28:51 2014 From: contact at progbau.de (Chris) Date: Thu, 18 Dec 2014 10:28:51 +0700 Subject: [Rdo-list] [Openstack] Upgrade from Icehouse to Juno on Centos 6.5 In-Reply-To: <834B73A5-C914-4B72-B7CB-02C86FEAB030@godaddy.com> References: <000001d018eb$47e14070$d7a3c150$@progbau.de> <87060BC9-0047-4838-B6F0-E7A20E079B41@godaddy.com> <834B73A5-C914-4B72-B7CB-02C86FEAB030@godaddy.com> Message-ID: <005301d01a72$c0fb1a20$42f14e60$@progbau.de> Hi Thanks all for your input so far! So if I understand things correctly, Icehouse runs on CentOS 7? So I can upgrade the OS beforehand, without touching the OpenStack installation. Is it possible to have a part of the compute nodes and the management services (keystone, nova, neutron etc.) in different versions (Icehouse/Juno)? We would start the upgrade on the compute nodes but of course not all at the same time; are the updated ones still functional? Cheers Chris -----Original Message----- From: Michael Dorman [mailto:mdorman at godaddy.com] Sent: Thursday, December 18, 2014 02:50 To: Cristian Falcas Cc: Chris; rdo-list at redhat.com Subject: Re: [Rdo-list] [Openstack] Upgrade from Icehouse to Juno on Centos 6.5 Yep, I had the same experience.
I ended up building an "after upgrade" script that did all the cleanup. It's definitely not perfect. On 12/17/14, 7:16 PM, "Cristian Falcas" wrote: >Well, I used the same tool, but, like I said, after the upgrade I had >to do a lot of cleanup because of mixed packages (el6 and el7). > >I'm glad to see that it's working for you. > > >On Wed, Dec 17, 2014 at 6:29 PM, Michael Dorman >wrote: >> Hi Chris, >> >> We haven't yet gone to Juno, but in preparation for that we've been >> upgrading to CentOS 7. I have been using the CentOS Upgrade Tool, >> the process described here: >> http://wiki.centos.org/TipsAndTricks/CentOSUpgradeTool >> >> It's time consuming and causes 45-60min of downtime per server. But >>once >> it's done its thing, it seems to be working well. >> >> Mike >> >> >> From: Chris >> Date: Monday, December 15, 2014 at 9:46 PM >> To: "rdo-list at redhat.com" >> Subject: [Openstack] Upgrade from Icehouse to Juno on Centos 6.5 >> >> Hello >> >> >> >> We have a relatively big OpenStack setup (> 150 Compute Nodes) based >> on CentOS 6.5 with the RDO Icehouse release. >> >> We are now considering an upgrade to Juno, is there a best practice out >>there how to do it and how is it with the corresponding CentOS upgrade to >>6.6 or even 7.0? >> >> >> >> Any help is appreciated!
>> >> >> Thanks, >> >> Chris >> >> >> >> >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> From mdorman at godaddy.com Thu Dec 18 04:29:26 2014 From: mdorman at godaddy.com (Michael Dorman) Date: Thu, 18 Dec 2014 04:29:26 +0000 Subject: [Rdo-list] [Openstack] Upgrade from Icehouse to Juno on Centos 6.5 In-Reply-To: <005301d01a72$c0fb1a20$42f14e60$@progbau.de> References: <000001d018eb$47e14070$d7a3c150$@progbau.de> <87060BC9-0047-4838-B6F0-E7A20E079B41@godaddy.com> <834B73A5-C914-4B72-B7CB-02C86FEAB030@godaddy.com> <005301d01a72$c0fb1a20$42f14e60$@progbau.de> Message-ID: <15582C55-4E57-4A10-BFC9-5D50F11787FF@godaddy.com> We're running Icehouse on CentOS7, yes. But we had to rebuild all the OS packages for el7 (we do our own packaging because we maintain some local packages. Using Anvil and based loosely on the RDO builds.) All that stuff has to be uplifted, because CentOS7 is Python 2.7, whereas 6 is on 2.6. I can't speak directly about mixed versions across the different services, but I believe it's possible. I know of people running Juno keystone with everything else Icehouse, for example. Mike On 12/18/14, 3:28 AM, "Chris" wrote: >Hi > >Thanks all for your input so far! > >So if I understand things correctly, Icehouse runs on CentOS 7? >So I can upgrade the OS beforehand, without touching the OpenStack >installation. >Is it possible to have a part of the compute nodes and the management >services (keystone, nova, neutron etc.) in different versions >(Icehouse/Juno)? >We would start the upgrade on the compute nodes but of course not all at >the same time; are the updated ones still functional?
> >Cheers >Chris > >-----Original Message----- >From: Michael Dorman [mailto:mdorman at godaddy.com] >Sent: Thursday, December 18, 2014 02:50 >To: Cristian Falcas >Cc: Chris; rdo-list at redhat.com >Subject: Re: [Rdo-list] [Openstack] Upgrade from Icehouse to Juno on >Centos 6.5 > >Yep, I had the same experience. I ended up building an "after upgrade" >script that did all the cleanup. It's definitely not perfect. > > > > >On 12/17/14, 7:16 PM, "Cristian Falcas" wrote: > >>Well, I used the same tool, but, like I said, after the upgrade I had >>to do a lot of cleanup because of mixed packages (el6 and el7). >> >>I'm glad to see that it's working for you. >> >> >>On Wed, Dec 17, 2014 at 6:29 PM, Michael Dorman >>wrote: >>> Hi Chris, >>> >>> We haven't yet gone to Juno, but in preparation for that we've been >>> upgrading to CentOS 7. I have been using the CentOS Upgrade Tool, >>> the process described here: >>> http://wiki.centos.org/TipsAndTricks/CentOSUpgradeTool >>> >>> It's time consuming and causes 45-60min of downtime per server. But >>>once >>> it's done its thing, it seems to be working well. >>> >>> Mike >>> >>> >>> From: Chris >>> Date: Monday, December 15, 2014 at 9:46 PM >>> To: "rdo-list at redhat.com" >>> Subject: [Openstack] Upgrade from Icehouse to Juno on Centos 6.5 >>> >>> Hello >>> >>> >>> >>> We have a relatively big OpenStack setup (> 150 Compute Nodes) based >>> on CentOS 6.5 with the RDO Icehouse release. >>> >>> We are now considering an upgrade to Juno, is there a best practice out >>>there how to do it and how is it with the corresponding CentOS upgrade to >>>6.6 or even 7.0? >>> >>> >>> >>> Any help is appreciated! >>> >>> >>> >>> Thanks, >>> >>> Chris >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> Rdo-list mailing list >>> Rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> > From rjones at redhat.com Thu Dec 18 09:16:17 2014 From: rjones at redhat.com (Richard W.M.
Jones) Date: Thu, 18 Dec 2014 09:16:17 +0000 Subject: [Rdo-list] Why is a bug fixed in RHOS/RHEL but not in Rawhide? In-Reply-To: <20141217193934.GA20790@redhat.com> References: <20141217193934.GA20790@redhat.com> Message-ID: <20141218091617.GB20790@redhat.com> On Wed, Dec 17, 2014 at 07:39:34PM +0000, Richard W.M. Jones wrote: > > https://bugzilla.redhat.com/show_bug.cgi?id=1132129 > > It looks as if this was fixed in RHOS 5 and upstream (16a766d81) back > in August. > > I've just cloned this bug for Rawhide where it is still not fixed: > > https://bugzilla.redhat.com/show_bug.cgi?id=1175460 > > Surely bugs should be fixed first upstream, then in Rawhide, and > then in RHOS? It turns out the commit fixing the bug was reverted upstream. The bug still happens in a freshly created Rawhide VM that just runs 'packstack --allinone'. Any idea who/what it is that adds net.bridge.bridge-nf-call-* rules into /etc/sysctl.conf? I would guess it's something to do with libvirt. Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com virt-p2v converts physical machines to virtual machines. Boot with a live CD or over the network (PXE) and turn machines into KVM guests. http://libguestfs.org/virt-v2v From kchamart at redhat.com Thu Dec 18 11:19:10 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 18 Dec 2014 12:19:10 +0100 Subject: [Rdo-list] Why is a bug fixed in RHOS/RHEL but not in Rawhide? In-Reply-To: <20141218091617.GB20790@redhat.com> References: <20141217193934.GA20790@redhat.com> <20141218091617.GB20790@redhat.com> Message-ID: <20141218111910.GA13316@tesla.redhat.com> On Thu, Dec 18, 2014 at 09:16:17AM +0000, Richard W.M. Jones wrote: > On Wed, Dec 17, 2014 at 07:39:34PM +0000, Richard W.M. Jones wrote: > > > > https://bugzilla.redhat.com/show_bug.cgi?id=1132129 > > > > It looks as if this was fixed in RHOS 5 and upstream (16a766d81) back > > in August. 
> > > > I've just cloned this bug for Rawhide where it is still not fixed: > > > > https://bugzilla.redhat.com/show_bug.cgi?id=1175460 > > > > Surely bugs should be fixed first upstream, then in Rawhide, and > > then in RHOS? > > It turns out the commit fixing the bug was reverted upstream. > > The bug still happens in a freshly created Rawhide VM that just runs > 'packstack --allinone'. Any idea who/what it is that adds > net.bridge.bridge-nf-call-* rules into /etc/sysctl.conf? Looking up Bugzilla, seems like it's needed to get Neutron networking security groups working correctly, this is the bug https://bugzilla.redhat.com/show_bug.cgi?id=981144 -- need to set net.bridge.bridge-nf-call-iptables=1 for --allinone installation which says For the single node deployment with "packstack --allinone", following kernel parms should be set so that the security group works correctly. net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 net.bridge.bridge-nf-call-arptables = 1 -- /kashyap From rjones at redhat.com Thu Dec 18 11:32:21 2014 From: rjones at redhat.com (Richard W.M. Jones) Date: Thu, 18 Dec 2014 11:32:21 +0000 Subject: [Rdo-list] Why is a bug fixed in RHOS/RHEL but not in Rawhide? In-Reply-To: <20141218111910.GA13316@tesla.redhat.com> References: <20141217193934.GA20790@redhat.com> <20141218091617.GB20790@redhat.com> <20141218111910.GA13316@tesla.redhat.com> Message-ID: <20141218113221.GK11603@redhat.com> On Thu, Dec 18, 2014 at 12:19:10PM +0100, Kashyap Chamarthy wrote: > On Thu, Dec 18, 2014 at 09:16:17AM +0000, Richard W.M. Jones wrote: > > On Wed, Dec 17, 2014 at 07:39:34PM +0000, Richard W.M. Jones wrote: > > > > > > https://bugzilla.redhat.com/show_bug.cgi?id=1132129 > > > > > > It looks as if this was fixed in RHOS 5 and upstream (16a766d81) back > > > in August. 
> > > > > > I've just cloned this bug for Rawhide where it is still not fixed: > > > > > > https://bugzilla.redhat.com/show_bug.cgi?id=1175460 > > > > > > Surely bugs should be fixed first upstream, then in Rawhide, and > > > then in RHOS? > > > > It turns out the commit fixing the bug was reverted upstream. > > > > The bug still happens in a freshly created Rawhide VM that just runs > > 'packstack --allinone'. Any idea who/what it is that adds > > net.bridge.bridge-nf-call-* rules into /etc/sysctl.conf? > > Looking up Bugzilla, seems like it's needed to get Neutron networking > security groups working correctly, this is the bug > > https://bugzilla.redhat.com/show_bug.cgi?id=981144 -- need to set > net.bridge.bridge-nf-call-iptables=1 for --allinone installation > > which says > > For the single node deployment with "packstack --allinone", > following kernel parms should be set so that the security group > works correctly. > > net.bridge.bridge-nf-call-ip6tables = 1 > net.bridge.bridge-nf-call-iptables = 1 > net.bridge.bridge-nf-call-arptables = 1 I believe the underlying problem is that 'br_netfilter' (a kernel module) is not getting loaded. This module is what creates /proc/sys/net/bridge/bridge-nf-* files. If I load the module manually before running packstack then I can get around this problem. There are a few possibilities here: - Because I'm starting from @Core (ie. a minimal package set), it could be that some other program that would normally be installed and which would load this module is not installed. ie. A missing dependency. - Something in Rawhide previously loaded/required this module, but now doesn't. - Something specific to aarch64 (this one seems unlikely). On a similar topic, here is another bug which causes me some concern about the state of RDO in Rawhide: https://bugzilla.redhat.com/show_bug.cgi?id=1175472 Rich.
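A sketch of the manual workaround Rich describes, assuming systemd's standard modules-load.d and sysctl.d drop-in directories (file names here are illustrative, and the commands need root):

```shell
# Load the module now so the /proc/sys/net/bridge/bridge-nf-* knobs exist
modprobe br_netfilter

# Keep it loaded across reboots; systemd reads /etc/modules-load.d/ at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# Persist the settings from bug 981144 that packstack --allinone expects
cat > /etc/sysctl.d/99-bridge-nf.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
EOF
sysctl -p /etc/sysctl.d/99-bridge-nf.conf
```

Any drop-in name under those directories would work equally well; the point is only that the module must be loaded before the sysctl settings can take effect.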
-- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com Fedora Windows cross-compiler. Compile Windows programs, test, and build Windows installers. Over 100 libraries supported. http://fedoraproject.org/wiki/MinGW From kchamart at redhat.com Thu Dec 18 12:28:30 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Thu, 18 Dec 2014 13:28:30 +0100 Subject: [Rdo-list] Why is a bug fixed in RHOS/RHEL but not in Rawhide? In-Reply-To: <20141218113221.GK11603@redhat.com> References: <20141217193934.GA20790@redhat.com> <20141218091617.GB20790@redhat.com> <20141218111910.GA13316@tesla.redhat.com> <20141218113221.GK11603@redhat.com> Message-ID: <20141218122830.GB13316@tesla.redhat.com> On Thu, Dec 18, 2014 at 11:32:21AM +0000, Richard W.M. Jones wrote: > On Thu, Dec 18, 2014 at 12:19:10PM +0100, Kashyap Chamarthy wrote: [. . .] > > > The bug still happens in a freshly created Rawhide VM that just runs > > > 'packstack --allinone'. Any idea who/what it is that adds > > > net.bridge.bridge-nf-call-* rules into /etc/sysctl.conf? > > > > Looking up Bugzilla, seems like it's needed to get Neutron networking > > security groups working correctly, this is the bug > > > > https://bugzilla.redhat.com/show_bug.cgi?id=981144 -- need to set > > net.bridge.bridge-nf-call-iptables=1 for --allinone installation > > > > which says > > > > For the single node deployment with "packstack --allinone", > > following kernel parms should be set so that the security group > > works correctly. > > > > net.bridge.bridge-nf-call-ip6tables = 1 > > net.bridge.bridge-nf-call-iptables = 1 > > net.bridge.bridge-nf-call-arptables = 1 > > I believe the underlying problem is that 'br_netfilter' (a kernel > module) is not getting loaded. This module is what creates > /proc/sys/net/bridge/bridge-nf-* files. > > If I load the module manually before running packstack then I can get > around this problem. 
> > There are a few possibilities here: > > - Because I'm starting from @Core (ie. a minimal package set), it > could be that some other program that would normally be installed > and which would load this module is not installed. ie. A missing > dependency. > > - Something in Rawhide previously loaded/required this module, but > now doesn't. > > - Something specific to aarch64 (this one seems unlikely). > > On a similar topic, here is a another bug which causes me some concern > about the state of RDO in Rawhide: > > https://bugzilla.redhat.com/show_bug.cgi?id=1175472 (Just to update others reading the thread). This is being discussed on IRC, Flavio (Glance developer) says it's possibly a 'failed upgrade'. -- /kashyap From rjones at redhat.com Thu Dec 18 12:30:50 2014 From: rjones at redhat.com (Richard W.M. Jones) Date: Thu, 18 Dec 2014 12:30:50 +0000 Subject: [Rdo-list] Why is a bug fixed in RHOS/RHEL but not in Rawhide? In-Reply-To: <20141218122830.GB13316@tesla.redhat.com> References: <20141217193934.GA20790@redhat.com> <20141218091617.GB20790@redhat.com> <20141218111910.GA13316@tesla.redhat.com> <20141218113221.GK11603@redhat.com> <20141218122830.GB13316@tesla.redhat.com> Message-ID: <20141218123050.GM11603@redhat.com> On Thu, Dec 18, 2014 at 01:28:30PM +0100, Kashyap Chamarthy wrote: > This is being discussed on IRC, Flavio (Glance developer) says it's > possibly a 'failed upgrade'. It's a fresh install in a brand new VM, so it's nothing to do with upgrading. Rich. 
-- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com virt-builder quickly builds VMs from scratch http://libguestfs.org/virt-builder.1.html From dmitry at athabascau.ca Fri Dec 19 18:08:15 2014 From: dmitry at athabascau.ca (Dmitry Makovey) Date: Fri, 19 Dec 2014 11:08:15 -0700 Subject: [Rdo-list] dnsmasq: failed to set SO_REUSE{ADDR|PORT} on DHCP socket: Protocol not available In-Reply-To: <54904385.9080602@athabascau.ca> References: <548A1BFC.4020102@athabascau.ca> <548A3DD0.7040509@athabascau.ca> <548AE5A3.50204@redhat.com> <54904385.9080602@athabascau.ca> Message-ID: <5494698F.5060600@athabascau.ca> On 12/16/2014 07:36 AM, Dmitry Makovey wrote: > On 12/12/2014 05:54 AM, Ihar Hrachyshka wrote: >>>> My current dnsmasq set is: >>>> >>>> # rpm -qa | grep dnsmasq dnsmasq-utils-2.48-14.el6.x86_64 >>>> dnsmasq-2.48-14.el6.x86_64 >> >>> after downgrading packages to 2.48-13 and restarting services looks >>> like things are back under control... >> >> This sounds like a bug. Can you report it? > > sure thing - I'll be re-installing our environment shortly which will > give me a chance to reproduce the behaviour. Once confirmed I'll file > the bug. confirmed: on fresh install, after pulling in all the dependencies I had to downgrade to dnsmasq-2.48-13.el6.x86_64 to get rid of the error message in Subject line... https://bugzilla.redhat.com/show_bug.cgi?id=1176224 -- Dmitry Makovey Web Systems Administrator Athabasca University (780) 675-6245 --- Confidence is what you have before you understand the problem Woody Allen When in trouble when in doubt run in circles scream and shout http://www.wordwizard.com/phpbb3/viewtopic.php?f=16&t=19330 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: OpenPGP digital signature URL: From patrick at laimbock.com Sat Dec 20 16:09:23 2014 From: patrick at laimbock.com (Patrick Laimbock) Date: Sat, 20 Dec 2014 17:09:23 +0100 Subject: [Rdo-list] Kilo-1 RDO packages? Message-ID: <54959F33.1020500@laimbock.com> Hi, Are there any plans to publish kilo-1 RDO packages? http://lists.openstack.org/pipermail/openstack-announce/2014-December/000313.html Happy Holidays! Best, Patrick From kchamart at redhat.com Sat Dec 20 18:07:25 2014 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Sat, 20 Dec 2014 19:07:25 +0100 Subject: [Rdo-list] Kilo-1 RDO packages? In-Reply-To: <54959F33.1020500@laimbock.com> References: <54959F33.1020500@laimbock.com> Message-ID: <20141220180725.GA5960@tesla.redhat.com> On Sat, Dec 20, 2014 at 05:09:23PM +0100, Patrick Laimbock wrote: > Hi, Hi Patrick, > Are there any plans to publish kilo-1 RDO packages? I'm sure there are plans. :-) > http://lists.openstack.org/pipermail/openstack-announce/2014-December/000313.html As we can see, it was just released yesterday, so I doubt RPM packages will be available early next week, given the holiday season. But when the packages are available, it should be announced on this list. > Happy Holidays! Likewise. -- /kashyap
There seems to be no clear way on how to provide Kilo-1 builds for Fedora, since Rawhide is currently based on Juno, so we depend on Rawhide being branched into f22 before we're able to rebase Rawhide to Kilo builds. Does it mean that we'll wait with RDO Kilo builds till that moment (which is scheduled "no earlier than 2015-02-10", as per [1])?

[1]: https://fedoraproject.org/wiki/Releases/22/Schedule

/Ihar

-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUl9uIAAoJEC5aWaUY1u577LEIAIm5ADH4dyKyL17sMdw0lrPn
6n0h8/9tTDtUWTHyqk2Vvc4xdapvP1YyCLezO0iUPXgt859j42bMfoxj1f9BDL4s
H34x0oQdMCIlap/cXPfJ6o/R8ztixs2KBviWUKA37j6zwuYW0Za7jSztBBc1nRev
K4eJxwD67WdDdky4HMT/zhOlspDt7+SSaphadlq9hA82E08pRgurs+0oZ9s+9OYh
AzOwI6vmyPPov5GIx8od/xWH9QuwmAgfZUNAbg1oDfMZAmxxFQxdy4+Tk1AiCUgZ
fm+mvsGVosNMQaipmSMLx+Zm+/cCP5Bb3bNIf70IdNF47Bt7wePtvFJLD15qSXs=
=dZkX
-----END PGP SIGNATURE-----

From apevec at gmail.com Mon Dec 22 10:18:27 2014
From: apevec at gmail.com (Alan Pevec)
Date: Mon, 22 Dec 2014 11:18:27 +0100
Subject: [Rdo-list] Kilo-1 RDO?
In-Reply-To: <5497DB88.9030002@redhat.com>
References: <5497DB88.9030002@redhat.com>
Message-ID:

> whether we are going to provide some RDO builds based on top of it, or
> we're only providing Delorean (master tracking) nightlies at this point.

Delorean is where Kilo work for RPM packages is happening, and there is no plan to create an RDO Kilo repo before the Kilo RC phase. There's work in progress to make Packstack work against Delorean repos and, once we have that, CI jobs running Packstack against Delorean repos. Delorean snapshots which pass CI will then be published under rdo.fedorapeople.org/openstack-trunk

Cheers,
Alan

From dmitry at athabascau.ca Mon Dec 22 23:10:02 2014
From: dmitry at athabascau.ca (Dmitry Makovey)
Date: Mon, 22 Dec 2014 16:10:02 -0700
Subject: [Rdo-list] cinder speed (slow nova?)
Message-ID: <5498A4CA.7030100@athabascau.ca>

Hi everybody,

using RDO IceHouse packages I've set up an infrastructure atop of RHEL6.6 and am seeing very unpleasant performance for the storage.

I've done some testing and here's what I get from the same storage, but via different access points:

cinder-volume # dd if=/dev/zero of=baloon bs=1048576 count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.162997 s, 1.3 GB/s

nova-compute # dd if=/dev/zero of=baloon bs=1048576 count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.167905 s, 1.2 GB/s

instance # dd if=/dev/zero of=baloon bs=1048576 count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 10.064 s, 20.8 MB/s

A bit of explanation: in the above scenario I created an LV on the cinder node, mounted it locally, and ran the command for "cinder-volume". I then created an iSCSI target, mounted it on nova-compute, and ran the command there. Finally, via cinder, I created a storage volume, booted the OS off it, and ran the test from within it... The results are just miserable: going from 1.2 GB/s down to 20 MB/s is a big degradation. What should I look for? I have also tried running the same command within our RHEL KVM instance and got great performance.

I have checked under /var/lib/nova/instances/* and libvirt.xml seems to indicate that virtio is being employed (the XML snippet was stripped by the list archiver; only the volume serial 955b25eb-bb48-43c3-a14d-222c9e8c7019 survived).

The guest used is rhel-guest-image-6.6-20140926.0.x86_64.qcow2, downloaded off the RH site.

-- 
Dmitry Makovey
Web Systems Administrator
Athabasca University
(780) 675-6245
---
Confidence is what you have before you understand the problem
Woody Allen

When in trouble when in doubt run in circles scream and shout
http://www.wordwizard.com/phpbb3/viewtopic.php?f=16&t=19330
-------------- next part --------------
A non-text attachment was scrubbed...
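One way to double-check the virtio question from the message above is to grep the live libvirt domain XML for the disk driver and target attributes rather than the on-disk libvirt.xml copy. A sketch only: `instance-00000001` is a placeholder domain name, and the script simply skips if virsh is not installed.

```shell
#!/bin/sh
# Sketch: show how a guest's disks are attached (bus, cache mode, source).
# "instance-00000001" is a placeholder; find real names with `virsh list --all`.
if command -v virsh >/dev/null 2>&1; then
    virsh dumpxml instance-00000001 2>/dev/null \
        | grep -E 'driver name|target dev|source dev|serial' \
        || echo "domain not found; list domains with: virsh list --all"
    status=ok
else
    echo "virsh not installed on this host; skipping"
    status=skipped
fi
```

In the output, `bus='virtio'` on the `<target>` element and the `cache=` attribute on the `<driver>` element are what the later replies in this thread argue about.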
Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: OpenPGP digital signature URL: From dmitry at athabascau.ca Mon Dec 22 23:13:27 2014 From: dmitry at athabascau.ca (Dmitry Makovey) Date: Mon, 22 Dec 2014 16:13:27 -0700 Subject: [Rdo-list] cinder speed (slow nova?) In-Reply-To: <5498A4CA.7030100@athabascau.ca> References: <5498A4CA.7030100@athabascau.ca> Message-ID: <5498A597.2090806@athabascau.ca> note that all of below applies when cirros used as guest... On 12/22/2014 04:10 PM, Dmitry Makovey wrote: > Hi everybody, > > using RDO IceHouse packages I've set up an infrastructure atop of > RHEL6.6 and am seeing a very unpleasant performance for the storage. > > I've done some testing and here's what I get from the same storage: but > different access points: > > cinder-volume # dd if=/dev/zero of=baloon bs=1048576 count=200 > 200+0 records in > 200+0 records out > 209715200 bytes (210 MB) copied, 0.162997 s, 1.3 GB/s > > nova-compute # dd if=/dev/zero of=baloon bs=1048576 count=200 > 200+0 records in > 200+0 records out > 209715200 bytes (210 MB) copied, 0.167905 s, 1.2 GB/s > > instance # dd if=/dev/zero of=baloon bs=1048576 count=200 > 200+0 records in > 200+0 records out > 209715200 bytes (210 MB) copied, 10.064 s, 20.8 MB/s > > A bit of explanation: in above scenario I have created LV on > cinder-node, then mounted it locally and ran command for > "cinder-volume". Created an iSCSI target, mounted it on nova-compute, > and ran command there. Then, via cinder created storage volume, booted > the OS off it, and ran test from within it... Results are just > miserable. going from 1.2G/s down to 20M/s seems to be a big > degradation. What should I look for? I have also tried running the same > command within our RHEL KVM instance and got great performance. 
> > I have checked under /var/lib/nova/instances/* and libvirt.xml seems to > indicate that virtio is being employed: > > > > dev="/dev/disk/by-path/ip-192.168.46.18:3260-iscsi-iqn.2010-10.org.openstack:volume-955b25eb-bb48-43c3-a14d-222c9e8c7019-lun-1"/> > > 955b25eb-bb48-43c3-a14d-222c9e8c7019 > > > guest used - is rhel-guest-image-6.6-20140926.0.x86_64.qcow2 downloaded > off RH site. -- Dmitry Makovey Web Systems Administrator Athabasca University (780) 675-6245 --- Confidence is what you have before you understand the problem Woody Allen When in trouble when in doubt run in circles scream and shout http://www.wordwizard.com/phpbb3/viewtopic.php?f=16&t=19330 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: OpenPGP digital signature URL: From cristi.falcas at gmail.com Mon Dec 22 23:30:18 2014 From: cristi.falcas at gmail.com (Cristian Falcas) Date: Tue, 23 Dec 2014 01:30:18 +0200 Subject: [Rdo-list] cinder speed (slow nova?) In-Reply-To: <5498A597.2090806@athabascau.ca> References: <5498A4CA.7030100@athabascau.ca> <5498A597.2090806@athabascau.ca> Message-ID: Nova has as default cache for disks a very safe value (I think file=directsync or writethrough). Try to change it to writeback: disk_cachemodes="file=writeback" On Tue, Dec 23, 2014 at 1:13 AM, Dmitry Makovey wrote: > note that all of below applies when cirros used as guest... > > On 12/22/2014 04:10 PM, Dmitry Makovey wrote: >> Hi everybody, >> >> using RDO IceHouse packages I've set up an infrastructure atop of >> RHEL6.6 and am seeing a very unpleasant performance for the storage. 
>> >> I've done some testing and here's what I get from the same storage: but >> different access points: >> >> cinder-volume # dd if=/dev/zero of=baloon bs=1048576 count=200 >> 200+0 records in >> 200+0 records out >> 209715200 bytes (210 MB) copied, 0.162997 s, 1.3 GB/s >> >> nova-compute # dd if=/dev/zero of=baloon bs=1048576 count=200 >> 200+0 records in >> 200+0 records out >> 209715200 bytes (210 MB) copied, 0.167905 s, 1.2 GB/s >> >> instance # dd if=/dev/zero of=baloon bs=1048576 count=200 >> 200+0 records in >> 200+0 records out >> 209715200 bytes (210 MB) copied, 10.064 s, 20.8 MB/s >> >> A bit of explanation: in above scenario I have created LV on >> cinder-node, then mounted it locally and ran command for >> "cinder-volume". Created an iSCSI target, mounted it on nova-compute, >> and ran command there. Then, via cinder created storage volume, booted >> the OS off it, and ran test from within it... Results are just >> miserable. going from 1.2G/s down to 20M/s seems to be a big >> degradation. What should I look for? I have also tried running the same >> command within our RHEL KVM instance and got great performance. >> >> I have checked under /var/lib/nova/instances/* and libvirt.xml seems to >> indicate that virtio is being employed: >> >> >> >> > dev="/dev/disk/by-path/ip-192.168.46.18:3260-iscsi-iqn.2010-10.org.openstack:volume-955b25eb-bb48-43c3-a14d-222c9e8c7019-lun-1"/> >> >> 955b25eb-bb48-43c3-a14d-222c9e8c7019 >> >> >> guest used - is rhel-guest-image-6.6-20140926.0.x86_64.qcow2 downloaded >> off RH site. 
> > > > -- > Dmitry Makovey > Web Systems Administrator > Athabasca University > (780) 675-6245 > --- > Confidence is what you have before you understand the problem > Woody Allen > > When in trouble when in doubt run in circles scream and shout > http://www.wordwizard.com/phpbb3/viewtopic.php?f=16&t=19330 > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > From Yaniv.Kaul at emc.com Tue Dec 23 06:59:10 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Tue, 23 Dec 2014 01:59:10 -0500 Subject: [Rdo-list] cinder speed (slow nova?) In-Reply-To: <5498A4CA.7030100@athabascau.ca> References: <5498A4CA.7030100@athabascau.ca> Message-ID: <648473255763364B961A02AC3BE1060D03C9B1626E@MX19A.corp.emc.com> > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On > Behalf Of Dmitry Makovey > Sent: Tuesday, December 23, 2014 1:10 AM > To: rdo-list at redhat.com > Subject: [Rdo-list] cinder speed (slow nova?) > > Hi everybody, > > using RDO IceHouse packages I've set up an infrastructure atop of > RHEL6.6 and am seeing a very unpleasant performance for the storage. > > I've done some testing and here's what I get from the same storage: but > different access points: > > cinder-volume # dd if=/dev/zero of=baloon bs=1048576 count=200 > 200+0 records in > 200+0 records out > 209715200 bytes (210 MB) copied, 0.162997 s, 1.3 GB/s You are testing the performance with the cache. Use direct IO, to bypass it. Y. 
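Yaniv's point can be reproduced with dd itself: without a flush, dd mostly measures how fast the page cache absorbs the writes, not the storage underneath. A small sketch (64 MiB instead of the thread's 200 to keep it quick; `conv=fdatasync` is the portable way to include the flush in the timing, while `oflag=direct` bypasses the cache entirely but is not supported on every filesystem):

```shell
#!/bin/sh
# Sketch: cached vs. flushed write timing with dd.
testfile="${TMPDIR:-/tmp}/dd-balloon.$$"

# 1) Cached write: dd returns as soon as the page cache has the data,
#    so the reported rate can be wildly optimistic.
dd if=/dev/zero of="$testfile" bs=1048576 count=64 2>&1

# 2) Flushed write: conv=fdatasync makes dd call fdatasync() before
#    reporting, so the rate reflects the real device.
dd if=/dev/zero of="$testfile" bs=1048576 count=64 conv=fdatasync 2>&1

# 3) Direct I/O variant, if the filesystem supports O_DIRECT:
# dd if=/dev/zero of="$testfile" bs=1048576 count=64 oflag=direct

size=$(wc -c < "$testfile")
rm -f "$testfile"
echo "each run wrote $size bytes"
```

Run both on the cinder node, the compute node, and inside the guest; comparing the fdatasync numbers is a fairer version of the experiment quoted above.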
> > nova-compute # dd if=/dev/zero of=baloon bs=1048576 count=200 > 200+0 records in > 200+0 records out > 209715200 bytes (210 MB) copied, 0.167905 s, 1.2 GB/s > > instance # dd if=/dev/zero of=baloon bs=1048576 count=200 > 200+0 records in > 200+0 records out > 209715200 bytes (210 MB) copied, 10.064 s, 20.8 MB/s > > A bit of explanation: in above scenario I have created LV on cinder-node, then > mounted it locally and ran command for "cinder-volume". Created an iSCSI > target, mounted it on nova-compute, and ran command there. Then, via cinder > created storage volume, booted the OS off it, and ran test from within it... > Results are just miserable. going from 1.2G/s down to 20M/s seems to be a big > degradation. What should I look for? I have also tried running the same > command within our RHEL KVM instance and got great performance. > > I have checked under /var/lib/nova/instances/* and libvirt.xml seems to > indicate that virtio is being employed: > > > > dev="/dev/disk/by-path/ip-192.168.46.18:3260-iscsi-iqn.2010- > 10.org.openstack:volume-955b25eb-bb48-43c3-a14d-222c9e8c7019-lun-1"/> > > 955b25eb-bb48-43c3-a14d-222c9e8c7019 > > > guest used - is rhel-guest-image-6.6-20140926.0.x86_64.qcow2 downloaded off > RH site. > > -- > Dmitry Makovey > Web Systems Administrator > Athabasca University > (780) 675-6245 > --- > Confidence is what you have before you understand the problem > Woody Allen > > When in trouble when in doubt run in circles scream and shout > http://www.wordwizard.com/phpbb3/viewtopic.php?f=16&t=19330 From Yaniv.Kaul at emc.com Tue Dec 23 07:01:07 2014 From: Yaniv.Kaul at emc.com (Kaul, Yaniv) Date: Tue, 23 Dec 2014 02:01:07 -0500 Subject: [Rdo-list] cinder speed (slow nova?) 
In-Reply-To: References: <5498A4CA.7030100@athabascau.ca> <5498A597.2090806@athabascau.ca> Message-ID: <648473255763364B961A02AC3BE1060D03C9B1626F@MX19A.corp.emc.com> > -----Original Message----- > From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On > Behalf Of Cristian Falcas > Sent: Tuesday, December 23, 2014 1:30 AM > To: Dmitry Makovey > Cc: rdo-list > Subject: Re: [Rdo-list] cinder speed (slow nova?) > > Nova has as default cache for disks a very safe value (I think file=directsync or > writethrough). Try to change it to writeback: > > disk_cachemodes="file=writeback" Better safe than sorry. You risk losing data unless you have a battery backed up storage. Y. > > On Tue, Dec 23, 2014 at 1:13 AM, Dmitry Makovey > wrote: > > note that all of below applies when cirros used as guest... > > > > On 12/22/2014 04:10 PM, Dmitry Makovey wrote: > >> Hi everybody, > >> > >> using RDO IceHouse packages I've set up an infrastructure atop of > >> RHEL6.6 and am seeing a very unpleasant performance for the storage. > >> > >> I've done some testing and here's what I get from the same storage: > >> but different access points: > >> > >> cinder-volume # dd if=/dev/zero of=baloon bs=1048576 count=200 > >> 200+0 records in > >> 200+0 records out > >> 209715200 bytes (210 MB) copied, 0.162997 s, 1.3 GB/s > >> > >> nova-compute # dd if=/dev/zero of=baloon bs=1048576 count=200 > >> 200+0 records in > >> 200+0 records out > >> 209715200 bytes (210 MB) copied, 0.167905 s, 1.2 GB/s > >> > >> instance # dd if=/dev/zero of=baloon bs=1048576 count=200 > >> 200+0 records in > >> 200+0 records out > >> 209715200 bytes (210 MB) copied, 10.064 s, 20.8 MB/s > >> > >> A bit of explanation: in above scenario I have created LV on > >> cinder-node, then mounted it locally and ran command for > >> "cinder-volume". Created an iSCSI target, mounted it on nova-compute, > >> and ran command there. 
Then, via cinder created storage volume, > >> booted the OS off it, and ran test from within it... Results are just > >> miserable. going from 1.2G/s down to 20M/s seems to be a big > >> degradation. What should I look for? I have also tried running the > >> same command within our RHEL KVM instance and got great performance. > >> > >> I have checked under /var/lib/nova/instances/* and libvirt.xml seems > >> to indicate that virtio is being employed: > >> > >> > >> > >> >> dev="/dev/disk/by-path/ip-192.168.46.18:3260-iscsi-iqn.2010- > 10.org.openstack:volume-955b25eb-bb48-43c3-a14d-222c9e8c7019-lun-1"/> > >> > >> 955b25eb-bb48-43c3-a14d-222c9e8c7019 > >> > >> > >> guest used - is rhel-guest-image-6.6-20140926.0.x86_64.qcow2 > >> downloaded off RH site. > > > > > > > > -- > > Dmitry Makovey > > Web Systems Administrator > > Athabasca University > > (780) 675-6245 > > --- > > Confidence is what you have before you understand the problem > > Woody Allen > > > > When in trouble when in doubt run in circles scream and shout > > http://www.wordwizard.com/phpbb3/viewtopic.php?f=16&t=19330 > > > > > > _______________________________________________ > > Rdo-list mailing list > > Rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From David.Krovich at mail.wvu.edu Tue Dec 23 20:56:33 2014 From: David.Krovich at mail.wvu.edu (David Krovich) Date: Tue, 23 Dec 2014 20:56:33 +0000 Subject: [Rdo-list] Single Node Openstack Message-ID: <1419368192946.78002@mail.wvu.edu> Hi, I'm trying to learn about how to setup and configure OpenStack. I've got a laptop that I want to use a test machine to run a single OpenStack node with instances appearing on the same network as the node itself. I'm trying to follow the instructions from this web site. 
https://openstack.redhat.com/Neutron_with_existing_external_network

I'm running Fedora 20 on this laptop.

My network range is 192.168.5.0/24.

First question, does anyone have a similar setup? Fedora 20, single node, instances on the same network? I can get OpenStack installed via packstack and everything appears to work, except that I can't seem to talk to the instances over the network. At this point I'm stuck and could use some advice on where to look further.

Thanks.

-Dave
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dmitry at athabascau.ca Tue Dec 23 21:05:24 2014
From: dmitry at athabascau.ca (Dmitry Makovey)
Date: Tue, 23 Dec 2014 14:05:24 -0700
Subject: [Rdo-list] [SOLVED]: cinder speed (slow nova?)
In-Reply-To: <5498A597.2090806@athabascau.ca>
References: <5498A4CA.7030100@athabascau.ca> <5498A597.2090806@athabascau.ca>
Message-ID: <5499D914.2020904@athabascau.ca>

On 12/22/2014 04:13 PM, Dmitry Makovey wrote:
> note that all of below applies when cirros used as guest...

posting here for posterity: it turned out I had a bug in the nova.conf template which resulted in virt_type being set to qemu instead of kvm. Fixing that gave the performance I've been expecting from the platform.

-- 
Dmitry Makovey
Web Systems Administrator
Athabasca University
(780) 675-6245
---
Confidence is what you have before you understand the problem
Woody Allen

When in trouble when in doubt run in circles scream and shout
http://www.wordwizard.com/phpbb3/viewtopic.php?f=16&t=19330
-------------- next part --------------
A non-text attachment was scrubbed...
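The root cause in the [SOLVED] message above — qemu emulation instead of kvm — is worth checking for proactively, since the fallback is silent. A rough sketch of the checks (the nova option name varies by release, `virt_type` under `[libvirt]` on newer releases versus `libvirt_type` on older ones, so verify against your nova version):

```shell
#!/bin/sh
# Sketch: can this host actually run KVM-accelerated guests?
# vmx = Intel VT-x, svm = AMD-V; count how many logical CPUs expose them.
flags=$(grep -cE 'vmx|svm' /proc/cpuinfo 2>/dev/null || true)
flags=${flags:-0}
echo "CPU virtualization extensions found on $flags logical CPUs"

# /dev/kvm only exists when the kvm kernel modules are loaded.
if [ -e /dev/kvm ]; then
    echo "/dev/kvm present: hardware acceleration available"
else
    echo "/dev/kvm missing: guests would run under plain qemu (slow)"
fi

# What to look for in nova.conf (option name depends on the release):
echo "check nova.conf for: virt_type=kvm (or libvirt_type=kvm on older releases)"
```

If the flags count is zero or /dev/kvm is absent, the ~20 MB/s in-guest numbers from the cinder-speed thread are exactly the symptom to expect.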
Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: OpenPGP digital signature URL: From David.Krovich at mail.wvu.edu Wed Dec 24 00:59:22 2014 From: David.Krovich at mail.wvu.edu (David Krovich) Date: Wed, 24 Dec 2014 00:59:22 +0000 Subject: [Rdo-list] Single Node Openstack In-Reply-To: <1419368192946.78002@mail.wvu.edu> References: <1419368192946.78002@mail.wvu.edu> Message-ID: <1419382760998.16431@mail.wvu.edu> Adding more information. ONBOOT=yes[root at localhost ~]# ip addr 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: p5p1: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:22:41:28:14:20 brd ff:ff:ff:ff:ff:ff inet 192.168.5.151/24 brd 192.168.5.255 scope global dynamic p5p1 valid_lft 85871sec preferred_lft 85871sec inet6 fe80::222:41ff:fe28:1420/64 scope link valid_lft forever preferred_lft forever 3: ovs-system: mtu 1500 qdisc noop state DOWN group default link/ether 22:4a:7f:81:49:15 brd ff:ff:ff:ff:ff:ff 4: br-ex: mtu 1500 qdisc noqueue state UNKNOWN group default link/ether 32:1a:96:7a:7e:4a brd ff:ff:ff:ff:ff:ff inet 192.168.5.151/24 brd 192.168.5.255 scope global br-ex valid_lft forever preferred_lft forever inet6 fe80::301a:96ff:fe7a:7e4a/64 scope link valid_lft forever preferred_lft forever 8: br-int: mtu 1500 qdisc noop state DOWN group default link/ether 32:99:19:54:f9:40 brd ff:ff:ff:ff:ff:ff 10: br-tun: mtu 1500 qdisc noop state DOWN group default link/ether 76:49:ac:a6:ce:4f brd ff:ff:ff:ff:ff:ff /etc/sysconfig/network-scripts/ifcfg-br-ex [root at localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex DEVICE=br-ex DEVICETYPE=ovs TYPE=OVSBridge BOOTPROTO=static IPADDR=192.168.5.151 NETMASK=255.255.255.0 ONBOOT=yes [root at localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-p5p1 TYPE="OVSPort" 
DEVICETYPE="ovs" OVS_BRIDGE="br-ex" DEFROUTE="yes" IPV4_FAILURE_FATAL="no" IPV6INIT="yes" IPV6_AUTOCONF="yes" IPV6_DEFROUTE="yes" IPV6_PEERDNS="yes" IPV6_PEERROUTES="yes" IPV6_FAILURE_FATAL="no" NAME="p5p1" UUID="70997a7b-a01c-48a6-b961-b11304839108" ONBOOT="yes" HWADDR="00:22:41:28:14:20" PEERDNS="yes" PEERROUTES="yes" Ran the following: [root at localhost ~]# . keystonerc_admin [root at localhost ~(keystone_admin)]# neutron router-gateway-clear router1 Removed gateway from router router1 [root at localhost ~(keystone_admin)]# neutron subnet-delete public_subnet Deleted subnet: public_subnet [root at localhost ~(keystone_admin)]# neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=192.168.5.10,end=192.168.5.20 --gateway=192.168.5.1 public 192.168.5.0/24 Created a new subnet: +-------------------+--------------------------------------------------+ | Field | Value | +-------------------+--------------------------------------------------+ | allocation_pools | {"start": "192.168.5.10", "end": "192.168.5.20"} | | cidr | 192.168.5.0/24 | | dns_nameservers | | | enable_dhcp | False | | gateway_ip | 192.168.5.1 | | host_routes | | | id | 8f11b060-73a9-4b43-a3cc-be192436102c | | ip_version | 4 | | ipv6_address_mode | | | ipv6_ra_mode | | | name | public_subnet | | network_id | 7fbe63c2-0745-45c3-9f00-622ee0eb223b | | tenant_id | 636f926081a345fc93ca12fb5401ffe5 | +-------------------+--------------------------------------------------+ [root at localhost ~(keystone_admin)]# ? ________________________________ From: rdo-list-bounces at redhat.com on behalf of David Krovich Sent: Tuesday, December 23, 2014 3:56 PM To: rdo-list at redhat.com Subject: [Rdo-list] Single Node Openstack Hi, I'm trying to learn about how to setup and configure OpenStack. I've got a laptop that I want to use a test machine to run a single OpenStack node with instances appearing on the same network as the node itself. 
I'm trying to follow the instructions from this web site. https://openstack.redhat.com/Neutron_with_existing_external_network I'm running Fedora 20 on this laptop. My network range is 192.168.5.0/24. First question, does anyone have a similar setup? Fedora 20, single node, instances on the same network? I can get openstack installed via packstack and everything appears to work except that I can't seem to talk to the instances over the network. At this point I'm stuck and could use some advise on where to look further. Thanks. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From ukalifon at redhat.com Wed Dec 24 06:46:49 2014 From: ukalifon at redhat.com (Udi Kalifon) Date: Wed, 24 Dec 2014 01:46:49 -0500 (EST) Subject: [Rdo-list] Single Node Openstack In-Reply-To: <1419382760998.16431@mail.wvu.edu> References: <1419368192946.78002@mail.wvu.edu> <1419382760998.16431@mail.wvu.edu> Message-ID: <2034458810.1038731.1419403609175.JavaMail.zimbra@redhat.com> Usually this is because you forgot to allow ssh and icmp in the security group rules. It's easiest to configure if you use the GUI. Hope it helps. -- Udi. ----- Original Message ----- From: "David Krovich" To: rdo-list at redhat.com Sent: Wednesday, December 24, 2014 2:59:22 AM Subject: Re: [Rdo-list] Single Node Openstack Adding more information. 
ONBOOT=yes[root at localhost ~]# ip addr 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: p5p1: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:22:41:28:14:20 brd ff:ff:ff:ff:ff:ff inet 192.168.5.151/24 brd 192.168.5.255 scope global dynamic p5p1 valid_lft 85871sec preferred_lft 85871sec inet6 fe80::222:41ff:fe28:1420/64 scope link valid_lft forever preferred_lft forever 3: ovs-system: mtu 1500 qdisc noop state DOWN group default link/ether 22:4a:7f:81:49:15 brd ff:ff:ff:ff:ff:ff 4: br-ex: mtu 1500 qdisc noqueue state UNKNOWN group default link/ether 32:1a:96:7a:7e:4a brd ff:ff:ff:ff:ff:ff inet 192.168.5.151/24 brd 192.168.5.255 scope global br-ex valid_lft forever preferred_lft forever inet6 fe80::301a:96ff:fe7a:7e4a/64 scope link valid_lft forever preferred_lft forever 8: br-int: mtu 1500 qdisc noop state DOWN group default link/ether 32:99:19:54:f9:40 brd ff:ff:ff:ff:ff:ff 10: br-tun: mtu 1500 qdisc noop state DOWN group default link/ether 76:49:ac:a6:ce:4f brd ff:ff:ff:ff:ff:ff /etc/sysconfig/network-scripts/ifcfg-br-ex [root at localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex DEVICE=br-ex DEVICETYPE=ovs TYPE=OVSBridge BOOTPROTO=static IPADDR=192.168.5.151 NETMASK=255.255.255.0 ONBOOT=yes [root at localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-p5p1 TYPE="OVSPort" DEVICETYPE="ovs" OVS_BRIDGE="br-ex" DEFROUTE="yes" IPV4_FAILURE_FATAL="no" IPV6INIT="yes" IPV6_AUTOCONF="yes" IPV6_DEFROUTE="yes" IPV6_PEERDNS="yes" IPV6_PEERROUTES="yes" IPV6_FAILURE_FATAL="no" NAME="p5p1" UUID="70997a7b-a01c-48a6-b961-b11304839108" ONBOOT="yes" HWADDR="00:22:41:28:14:20" PEERDNS="yes" PEERROUTES="yes" Ran the following: [root at localhost ~]# . 
keystonerc_admin [root at localhost ~(keystone_admin)]# neutron router-gateway-clear router1 Removed gateway from router router1 [root at localhost ~(keystone_admin)]# neutron subnet-delete public_subnet Deleted subnet: public_subnet [root at localhost ~(keystone_admin)]# neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=192.168.5.10,end=192.168.5.20 --gateway=192.168.5.1 public 192.168.5.0/24 Created a new subnet: +-------------------+--------------------------------------------------+ | Field | Value | +-------------------+--------------------------------------------------+ | allocation_pools | {"start": "192.168.5.10", "end": "192.168.5.20"} | | cidr | 192.168.5.0/24 | | dns_nameservers | | | enable_dhcp | False | | gateway_ip | 192.168.5.1 | | host_routes | | | id | 8f11b060-73a9-4b43-a3cc-be192436102c | | ip_version | 4 | | ipv6_address_mode | | | ipv6_ra_mode | | | name | public_subnet | | network_id | 7fbe63c2-0745-45c3-9f00-622ee0eb223b | | tenant_id | 636f926081a345fc93ca12fb5401ffe5 | +-------------------+--------------------------------------------------+ [root at localhost ~(keystone_admin)]# ? From: rdo-list-bounces at redhat.com on behalf of David Krovich Sent: Tuesday, December 23, 2014 3:56 PM To: rdo-list at redhat.com Subject: [Rdo-list] Single Node Openstack Hi, I'm trying to learn about how to setup and configure OpenStack. I've got a laptop that I want to use a test machine to run a single OpenStack node with instances appearing on the same network as the node itself. I'm trying to follow the instructions from this web site. https://openstack.redhat.com/Neutron_with_existing_external_network I'm running Fedora 20 on this laptop. My network range is 192.168.5.0/24. First question, does anyone have a similar setup? Fedora 20, single node, instances on the same network? I can get openstack installed via packstack and everything appears to work except that I can't seem to talk to the instances over the network. 
At this point I'm stuck and could use some advise on where to look further. Thanks. -Dave _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list From david.krovich at mail.wvu.edu Wed Dec 24 17:23:49 2014 From: david.krovich at mail.wvu.edu (David Krovich) Date: Wed, 24 Dec 2014 12:23:49 -0500 Subject: [Rdo-list] Single Node Openstack In-Reply-To: <2034458810.1038731.1419403609175.JavaMail.zimbra@redhat.com> References: <1419368192946.78002@mail.wvu.edu> <1419382760998.16431@mail.wvu.edu> <2034458810.1038731.1419403609175.JavaMail.zimbra@redhat.com> Message-ID: <549AF6A5.6010603@mail.wvu.edu> Thanks, I think I had already adjusted my security groups appropriately. Here is a listing. [root at localhost ~(keystone_admin)]# neutron security-group-rule-list +--------------------------------------+----------------+-----------+----------+------------------+--------------+ | id | security_group | direction | protocol | remote_ip_prefix | remote_group | +--------------------------------------+----------------+-----------+----------+------------------+--------------+ | 50b74169-5f5c-40f3-b193-d568e1cd2864 | default | egress | | | | | 5d3a0a6e-7d90-49a7-8114-998b06d525df | default | ingress | | | default | | 670a2b30-bc93-415c-9998-750334ce99d8 | default | egress | icmp | 0.0.0.0/0 | | | 68d7fb55-b04f-4b0e-b488-5f6a6f429616 | default | egress | | | | | 6ec01872-1735-4e46-8a4a-6e3a78e5d867 | default | ingress | | | default | | 747224b1-7415-49f4-ad77-1acb604508a0 | default | ingress | | | default | | 836c2c01-710f-44a1-8e85-826729c2f152 | default | ingress | udp | 0.0.0.0/0 | | | 8f9f6446-64c8-46f3-943a-d13723a92aa9 | default | ingress | | | default | | 939931a6-7769-4cb7-adef-3170285449a7 | default | egress | | | | | b1a2837c-6c64-4c31-9d4b-e50084db3212 | default | ingress | | | default | | ba1f61ba-9b3a-4618-935e-e6a9c23b3f34 | default | ingress | icmp | 0.0.0.0/0 | | | 
bc32a758-079d-4fd8-9668-e748d3b075ec | default | egress | | | | | bf27706a-4d85-4f54-b18d-99877155bfb2 | default | ingress | tcp | 0.0.0.0/0 | | | c315bdfa-fe04-490b-aab3-8422c79d1b7f | default | ingress | | | default | | cf799c38-222e-4e5b-9056-c3b7ebac40b5 | default | egress | | | | | e2d3ea34-ab71-4764-986e-da2545b81e39 | default | egress | | | | +--------------------------------------+----------------+-----------+----------+------------------+--------------+ [root at localhost ~(keystone_admin)]# On 12/24/2014 01:46 AM, Udi Kalifon wrote: > Usually this is because you forgot to allow ssh and icmp in the security group rules. It's easiest to configure if you use the GUI. Hope it helps. > > -- Udi. > > > ----- Original Message ----- > From: "David Krovich" > To: rdo-list at redhat.com > Sent: Wednesday, December 24, 2014 2:59:22 AM > Subject: Re: [Rdo-list] Single Node Openstack > > > > Adding more information. > > > > > > ONBOOT=yes[root at localhost ~]# ip addr > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > inet 127.0.0.1/8 scope host lo > > valid_lft forever preferred_lft forever > > inet6 ::1/128 scope host > > valid_lft forever preferred_lft forever > > 2: p5p1: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 > > link/ether 00:22:41:28:14:20 brd ff:ff:ff:ff:ff:ff > > inet 192.168.5.151/24 brd 192.168.5.255 scope global dynamic p5p1 > > valid_lft 85871sec preferred_lft 85871sec > > inet6 fe80::222:41ff:fe28:1420/64 scope link > > valid_lft forever preferred_lft forever > > 3: ovs-system: mtu 1500 qdisc noop state DOWN group default > > link/ether 22:4a:7f:81:49:15 brd ff:ff:ff:ff:ff:ff > > 4: br-ex: mtu 1500 qdisc noqueue state UNKNOWN group default > > link/ether 32:1a:96:7a:7e:4a brd ff:ff:ff:ff:ff:ff > > inet 192.168.5.151/24 brd 192.168.5.255 scope global br-ex > > valid_lft forever preferred_lft forever > > inet6 fe80::301a:96ff:fe7a:7e4a/64 scope link > > valid_lft 
forever preferred_lft forever > > 8: br-int: mtu 1500 qdisc noop state DOWN group default > > link/ether 32:99:19:54:f9:40 brd ff:ff:ff:ff:ff:ff > > 10: br-tun: mtu 1500 qdisc noop state DOWN group default > > link/ether 76:49:ac:a6:ce:4f brd ff:ff:ff:ff:ff:ff > > > > > > > > > > > > /etc/sysconfig/network-scripts/ifcfg-br-ex > > > > > > [root at localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex > > DEVICE=br-ex > > DEVICETYPE=ovs > > TYPE=OVSBridge > > BOOTPROTO=static > > IPADDR=192.168.5.151 > > NETMASK=255.255.255.0 > > ONBOOT=yes > > > > > > > > > [root at localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-p5p1 > > TYPE="OVSPort" > > DEVICETYPE="ovs" > > OVS_BRIDGE="br-ex" > > DEFROUTE="yes" > > IPV4_FAILURE_FATAL="no" > > IPV6INIT="yes" > > IPV6_AUTOCONF="yes" > > IPV6_DEFROUTE="yes" > > IPV6_PEERDNS="yes" > > IPV6_PEERROUTES="yes" > > IPV6_FAILURE_FATAL="no" > > NAME="p5p1" > > UUID="70997a7b-a01c-48a6-b961-b11304839108" > > ONBOOT="yes" > > HWADDR="00:22:41:28:14:20" > > PEERDNS="yes" > > PEERROUTES="yes" > > > > > > Ran the following: > > > > > > [root at localhost ~]# . 
keystonerc_admin > > [root at localhost ~(keystone_admin)]# neutron router-gateway-clear router1 > > Removed gateway from router router1 > > [root at localhost ~(keystone_admin)]# neutron subnet-delete public_subnet > > Deleted subnet: public_subnet > > [root at localhost ~(keystone_admin)]# neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=192.168.5.10,end=192.168.5.20 --gateway=192.168.5.1 public 192.168.5.0/24 > > Created a new subnet: > > +-------------------+--------------------------------------------------+ > > | Field | Value | > > +-------------------+--------------------------------------------------+ > > | allocation_pools | {"start": "192.168.5.10", "end": "192.168.5.20"} | > > | cidr | 192.168.5.0/24 | > > | dns_nameservers | | > > | enable_dhcp | False | > > | gateway_ip | 192.168.5.1 | > > | host_routes | | > > | id | 8f11b060-73a9-4b43-a3cc-be192436102c | > > | ip_version | 4 | > > | ipv6_address_mode | | > > | ipv6_ra_mode | | > > | name | public_subnet | > > | network_id | 7fbe63c2-0745-45c3-9f00-622ee0eb223b | > > | tenant_id | 636f926081a345fc93ca12fb5401ffe5 | > > +-------------------+--------------------------------------------------+ > > [root at localhost ~(keystone_admin)]# > > ? > > > > > > > > > > > From: rdo-list-bounces at redhat.com on behalf of David Krovich > Sent: Tuesday, December 23, 2014 3:56 PM > To: rdo-list at redhat.com > Subject: [Rdo-list] Single Node Openstack > > > Hi, > > > > > I'm trying to learn about how to setup and configure OpenStack. > > > > > I've got a laptop that I want to use a test machine to run a single OpenStack node with instances appearing on the same network as the node itself. I'm trying to follow the instructions from this web site. > > > > > https://openstack.redhat.com/Neutron_with_existing_external_network > > > I'm running Fedora 20 on this laptop. > > > > > My network range is 192.168.5.0/24. > > > > > First question, does anyone have a similar setup? 
Fedora 20, single node, instances on the same network? I can get openstack installed via packstack and everything appears to work except that I can't seem to talk to the instances over the network. At this point I'm stuck and could use some advice on where to look further. > > > > > Thanks. > > > > > -Dave > > > > > > > > > > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From david.krovich at mail.wvu.edu Wed Dec 24 17:31:39 2014 From: david.krovich at mail.wvu.edu (David Krovich) Date: Wed, 24 Dec 2014 12:31:39 -0500 Subject: [Rdo-list] Single Node Openstack In-Reply-To: <549AF6A5.6010603@mail.wvu.edu> References: <1419368192946.78002@mail.wvu.edu> <1419382760998.16431@mail.wvu.edu> <2034458810.1038731.1419403609175.JavaMail.zimbra@redhat.com> <549AF6A5.6010603@mail.wvu.edu> Message-ID: <549AF87B.1060401@mail.wvu.edu> More updates: I now have a public network and an internal network linked together with a router. I can create instances on the internal network and then associate a floating IP address with the instance. However, I still can't talk to the instances over the network. As of now I have an instance running with a floating IP of 192.168.5.11 assigned to it. I ran a packet sniffer on the laptop while trying to ping my router from the instance using the console built into openstack. I can see traffic on the bridge interface but nothing is answering. [root at localhost ~]# tshark Running as user "root" and group "root". This could be dangerous. Capturing on 'br-ex' 1 0.000000 fa:16:3e:a9:b0:f8 -> Broadcast ARP 42 Who has 192.168.5.1? Tell 192.168.5.11 2 1.001142 fa:16:3e:a9:b0:f8 -> Broadcast ARP 42 Who has 192.168.5.1? Tell 192.168.5.11 3 2.003167 fa:16:3e:a9:b0:f8 -> Broadcast ARP 42 Who has 192.168.5.1?
Tell 192.168.5.11 If I try to ping from other machines in 192.168.5.0/24 to 192.168.5.11 I get no response and nothing even shows up on the bridge interface from sniffing. I'm trying to think what to look at next, any ideas? -Dave On 12/24/2014 12:23 PM, David Krovich wrote: > Thanks, I think I had already adjusted my security groups > appropriately. Here is a listing. > > > [root at localhost ~(keystone_admin)]# neutron security-group-rule-list > +--------------------------------------+----------------+-----------+----------+------------------+--------------+ > > | id | security_group | direction | > protocol | remote_ip_prefix | remote_group | > +--------------------------------------+----------------+-----------+----------+------------------+--------------+ > > | 50b74169-5f5c-40f3-b193-d568e1cd2864 | default | egress > | | | | > | 5d3a0a6e-7d90-49a7-8114-998b06d525df | default | ingress > | | | default | > | 670a2b30-bc93-415c-9998-750334ce99d8 | default | egress | > icmp | 0.0.0.0/0 | | > | 68d7fb55-b04f-4b0e-b488-5f6a6f429616 | default | egress > | | | | > | 6ec01872-1735-4e46-8a4a-6e3a78e5d867 | default | ingress > | | | default | > | 747224b1-7415-49f4-ad77-1acb604508a0 | default | ingress > | | | default | > | 836c2c01-710f-44a1-8e85-826729c2f152 | default | ingress | > udp | 0.0.0.0/0 | | > | 8f9f6446-64c8-46f3-943a-d13723a92aa9 | default | ingress > | | | default | > | 939931a6-7769-4cb7-adef-3170285449a7 | default | egress > | | | | > | b1a2837c-6c64-4c31-9d4b-e50084db3212 | default | ingress > | | | default | > | ba1f61ba-9b3a-4618-935e-e6a9c23b3f34 | default | ingress | > icmp | 0.0.0.0/0 | | > | bc32a758-079d-4fd8-9668-e748d3b075ec | default | egress > | | | | > | bf27706a-4d85-4f54-b18d-99877155bfb2 | default | ingress | > tcp | 0.0.0.0/0 | | > | c315bdfa-fe04-490b-aab3-8422c79d1b7f | default | ingress > | | | default | > | cf799c38-222e-4e5b-9056-c3b7ebac40b5 | default | egress > | | | | > | e2d3ea34-ab71-4764-986e-da2545b81e39 | default | egress > 
| | | | > +--------------------------------------+----------------+-----------+----------+------------------+--------------+ > > [root at localhost ~(keystone_admin)]# > > > On 12/24/2014 01:46 AM, Udi Kalifon wrote: >> Usually this is because you forgot to allow ssh and icmp in the >> security group rules. It's easiest to configure if you use the GUI. >> Hope it helps. >> >> -- Udi. >> >> >> ----- Original Message ----- >> From: "David Krovich" >> To: rdo-list at redhat.com >> Sent: Wednesday, December 24, 2014 2:59:22 AM >> Subject: Re: [Rdo-list] Single Node Openstack >> >> >> >> Adding more information. >> >> >> >> >> >> ONBOOT=yes[root at localhost ~]# ip addr >> >> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN >> group default >> >> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 >> >> inet 127.0.0.1/8 scope host lo >> >> valid_lft forever preferred_lft forever >> >> inet6 ::1/128 scope host >> >> valid_lft forever preferred_lft forever >> >> 2: p5p1: mtu 1500 qdisc pfifo_fast >> state UP group default qlen 1000 >> >> link/ether 00:22:41:28:14:20 brd ff:ff:ff:ff:ff:ff >> >> inet 192.168.5.151/24 brd 192.168.5.255 scope global dynamic p5p1 >> >> valid_lft 85871sec preferred_lft 85871sec >> >> inet6 fe80::222:41ff:fe28:1420/64 scope link >> >> valid_lft forever preferred_lft forever >> >> 3: ovs-system: mtu 1500 qdisc noop state DOWN >> group default >> >> link/ether 22:4a:7f:81:49:15 brd ff:ff:ff:ff:ff:ff >> >> 4: br-ex: mtu 1500 qdisc noqueue >> state UNKNOWN group default >> >> link/ether 32:1a:96:7a:7e:4a brd ff:ff:ff:ff:ff:ff >> >> inet 192.168.5.151/24 brd 192.168.5.255 scope global br-ex >> >> valid_lft forever preferred_lft forever >> >> inet6 fe80::301a:96ff:fe7a:7e4a/64 scope link >> >> valid_lft forever preferred_lft forever >> >> 8: br-int: mtu 1500 qdisc noop state DOWN group >> default >> >> link/ether 32:99:19:54:f9:40 brd ff:ff:ff:ff:ff:ff >> >> 10: br-tun: mtu 1500 qdisc noop state DOWN >> group default >> >> link/ether 76:49:ac:a6:ce:4f 
brd ff:ff:ff:ff:ff:ff >> >> >> >> >> >> >> >> >> >> >> >> /etc/sysconfig/network-scripts/ifcfg-br-ex >> >> >> >> >> >> [root at localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex >> >> DEVICE=br-ex >> >> DEVICETYPE=ovs >> >> TYPE=OVSBridge >> >> BOOTPROTO=static >> >> IPADDR=192.168.5.151 >> >> NETMASK=255.255.255.0 >> >> ONBOOT=yes >> >> >> >> >> >> >> >> >> [root at localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-p5p1 >> >> TYPE="OVSPort" >> >> DEVICETYPE="ovs" >> >> OVS_BRIDGE="br-ex" >> >> DEFROUTE="yes" >> >> IPV4_FAILURE_FATAL="no" >> >> IPV6INIT="yes" >> >> IPV6_AUTOCONF="yes" >> >> IPV6_DEFROUTE="yes" >> >> IPV6_PEERDNS="yes" >> >> IPV6_PEERROUTES="yes" >> >> IPV6_FAILURE_FATAL="no" >> >> NAME="p5p1" >> >> UUID="70997a7b-a01c-48a6-b961-b11304839108" >> >> ONBOOT="yes" >> >> HWADDR="00:22:41:28:14:20" >> >> PEERDNS="yes" >> >> PEERROUTES="yes" >> >> >> >> >> >> Ran the following: >> >> >> >> >> >> [root at localhost ~]# . keystonerc_admin >> >> [root at localhost ~(keystone_admin)]# neutron router-gateway-clear router1 >> >> Removed gateway from router router1 >> >> [root at localhost ~(keystone_admin)]# neutron subnet-delete public_subnet >> >> Deleted subnet: public_subnet >> >> [root at localhost ~(keystone_admin)]# neutron subnet-create --name >> public_subnet --enable_dhcp=False >> --allocation-pool=start=192.168.5.10,end=192.168.5.20 >> --gateway=192.168.5.1 public 192.168.5.0/24 >> >> Created a new subnet: >> >> +-------------------+--------------------------------------------------+ >> >> | Field | Value | >> >> +-------------------+--------------------------------------------------+ >> >> | allocation_pools | {"start": "192.168.5.10", "end": "192.168.5.20"} | >> >> | cidr | 192.168.5.0/24 | >> >> | dns_nameservers | | >> >> | enable_dhcp | False | >> >> | gateway_ip | 192.168.5.1 | >> >> | host_routes | | >> >> | id | 8f11b060-73a9-4b43-a3cc-be192436102c | >> >> | ip_version | 4 | >> >> | ipv6_address_mode | | >> >> | ipv6_ra_mode | 
| >> >> | name | public_subnet | >> >> | network_id | 7fbe63c2-0745-45c3-9f00-622ee0eb223b | >> >> | tenant_id | 636f926081a345fc93ca12fb5401ffe5 | >> >> +-------------------+--------------------------------------------------+ >> >> [root at localhost ~(keystone_admin)]# >> >> ? >> >> >> >> >> >> >> >> >> >> >> From: rdo-list-bounces at redhat.com on >> behalf of David Krovich >> Sent: Tuesday, December 23, 2014 3:56 PM >> To: rdo-list at redhat.com >> Subject: [Rdo-list] Single Node Openstack >> >> >> Hi, >> >> >> >> >> I'm trying to learn about how to setup and configure OpenStack. >> >> >> >> >> I've got a laptop that I want to use a test machine to run a single >> OpenStack node with instances appearing on the same network as the >> node itself. I'm trying to follow the instructions from this web site. >> >> >> >> >> https://openstack.redhat.com/Neutron_with_existing_external_network >> >> >> I'm running Fedora 20 on this laptop. >> >> >> >> >> My network range is 192.168.5.0/24. >> >> >> >> >> First question, does anyone have a similar setup? Fedora 20, single >> node, instances on the same network? I can get openstack installed >> via packstack and everything appears to work except that I can't seem >> to talk to the instances over the network. At this point I'm stuck >> and could use some advise on where to look further. >> >> >> >> >> Thanks. 
>> >> >> >> >> -Dave >> >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list From dpkshetty at gmail.com Mon Dec 29 13:06:31 2014 From: dpkshetty at gmail.com (Deepak Shetty) Date: Mon, 29 Dec 2014 18:36:31 +0530 Subject: [Rdo-list] Query regarding RDO Juno-1 install on CentOS7 Message-ID: Hi, I was able to install 3-node RDO juno-1 (rdo-release-juno-1.noarch) over CentOS7, but at the end of install it gave me this ... Questions prefixed with Q: inline below: Additional information: * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components. *Q: Do I need to use ntpd to ensure all my systems are in sync? What's the recommended way here?* * Warning: NetworkManager is active on , and . OpenStack networking currently does not work on systems that have the Network Manager service enabled. *Q: Do I need to disable NetworkManager.service on all nodes, or is it safe to ignore this? What exactly doesn't work with NetworkManager? For my system it looks like below: [root at rhsdev1 packstack]# systemctl status NetworkManager.service NetworkManager.service - Network Manager Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled) Active: active (running) since Wed 2014-12-24 23:45:56 IST; 4 days ago Main PID: 2002 (NetworkManager) [root at rhsdev1 packstack]# systemctl status network.service network.service - LSB: Bring up/down networking Loaded: loaded (/etc/rc.d/init.d/network) Active: failed (Result: exit-code) since Wed 2014-12-24 23:45:57 IST; 4 days ago* thanx, deepak -------------- next part -------------- An HTML attachment was scrubbed...
URL: From patrick at laimbock.com Mon Dec 29 14:28:39 2014 From: patrick at laimbock.com (Patrick Laimbock) Date: Mon, 29 Dec 2014 15:28:39 +0100 Subject: [Rdo-list] Query regarding RDO Juno-1 install on CentOS7 In-Reply-To: References: Message-ID: <54A16517.60803@laimbock.com> On 29-12-14 14:06, Deepak Shetty wrote: > Hi, > I was able to install 3-node RDO juno-1 (rdo-release-juno-1.noarch) > over CentOS7, but at the end of install it gave me this ... > Questions prefixed with Q: inline below: > > Additional information: > * Time synchronization installation was skipped. Please note that > unsynchronized time on server instances might be problem for some > OpenStack components. > > *Q: Do i need to sue ntpd to ensure all my systems are in sync, whats > the recommended way here ?* All your nodes need to have the correct time. You can specify an NTP server in the Packstack answer file or as a CLI option and then Packstack will configure your nodes to use that NTP server. If you don't specify an NTP server then Packstack doesn't handle NTP so you will have to do it yourself. Either way, make sure that all nodes always have the correct time. > * Warning: NetworkManager is active on , and . OpenStack > networking currently does not work on systems that have the Network > Manager service enabled. > > *Q: Do i need to disable NetworkManager.service on all or is it safe to > ignore this? What exactly doesn't work with NetworkManager ? You need to disable NetworkManager and enable network service. Before you run Packstack you will also need to setup the ifcfg-XXXX network interfaces on all nodes and activate them. 
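The ifcfg-XXXX setup Patrick describes can be sketched as below. This is a hypothetical static configuration for one node's interface; the device name, addresses, and DNS server are placeholders to adapt per node:

```shell
# /etc/sysconfig/network-scripts/ifcfg-em1 -- a minimal sketch; device name,
# addresses, and DNS server are placeholders. NM_CONTROLLED=no keeps
# NetworkManager away from the interface once the legacy network service
# takes over managing it.
DEVICE=em1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.5.151
NETMASK=255.255.255.0
GATEWAY=192.168.5.1
DNS1=192.168.5.1
NM_CONTROLLED=no
```

ifcfg files are plain shell-style variable assignments, so they can be checked by sourcing them before bringing the interface up.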
HTH, Patrick From dpkshetty at gmail.com Mon Dec 29 14:39:34 2014 From: dpkshetty at gmail.com (Deepak Shetty) Date: Mon, 29 Dec 2014 20:09:34 +0530 Subject: [Rdo-list] Query regarding RDO Juno-1 install on CentOS7 In-Reply-To: <54A16517.60803@laimbock.com> References: <54A16517.60803@laimbock.com> Message-ID: On Dec 29, 2014 8:00 PM, "Patrick Laimbock" wrote: > > On 29-12-14 14:06, Deepak Shetty wrote: >> >> Hi, >> I was able to install 3-node RDO juno-1 (rdo-release-juno-1.noarch) >> over CentOS7, but at the end of install it gave me this ... >> Questions prefixed with Q: inline below: >> >> Additional information: >> * Time synchronization installation was skipped. Please note that >> unsynchronized time on server instances might be problem for some >> OpenStack components. >> >> *Q: Do i need to sue ntpd to ensure all my systems are in sync, whats >> >> the recommended way here ?* > > > All your nodes need to have the correct time. You can specify an NTP server in the Packstack answer file or as a CLI option and then Packstack will configure your nodes to use that NTP server. If you don't specify an NTP server then Packstack doesn't handle NTP so you will have to do it yourself. Either way, make sure that all nodes always have the correct time. Thanks, will try this. > >> * Warning: NetworkManager is active on , and . OpenStack >> networking currently does not work on systems that have the Network >> Manager service enabled. >> >> *Q: Do i need to disable NetworkManager.service on all or is it safe to >> >> ignore this? What exactly doesn't work with NetworkManager ? > > > You need to disable NetworkManager and enable network service. Before you run Packstack you will also need to setup the ifcfg-XXXX network interfaces on all nodes and activate them. Why can't packstack handle this itself if it doesn't support NM? I'm concerned about the manual steps involved and losing my n/w connections in case I do anything wrong.
Is there any reference on how to do this? I couldn't find anything specific on the quickstart page. > > HTH, > Patrick > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick at laimbock.com Mon Dec 29 15:34:47 2014 From: patrick at laimbock.com (Patrick Laimbock) Date: Mon, 29 Dec 2014 16:34:47 +0100 Subject: [Rdo-list] Query regarding RDO Juno-1 install on CentOS7 In-Reply-To: References: <54A16517.60803@laimbock.com> Message-ID: <54A17497.4070702@laimbock.com> On 29-12-14 15:39, Deepak Shetty wrote: > > You need to disable NetworkManager and enable network service. Before > you run Packstack you will also need to setup the ifcfg-XXXX network > interfaces on all nodes and activate them. > > Why can't packstack handle this itself if it doesn't support NM? I m Ask the developers. Patches welcome. > concerned about the manual steps involved and losing on my n/w > connections in case i do anything wrong. Yes that's a risk so make sure to have the steps figured out before doing anything. Having direct access to a (serial) console comes in handy if things fall apart. > Is there any reference on how > to do this, i couldn't find anything specific on the quickstart page. I'm not aware of an RDO guide on this subject. The steps on each node are: - create appropriate ifcfg-XXX files - stop NetworkManager - disable NetworkManager - enable network - start network You probably want to do the last 4 steps in one long command or else you will lose connectivity after stopping NetworkManager if you are not using a console.
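Patrick's last four steps can be chained into the single long command he recommends; a sketch, to be run from a local or serial console since the link drops partway through:

```shell
# Switch one node from NetworkManager to the legacy network service.
# Run from a console, not over ssh: connectivity is lost between the
# stop and the start if anything goes wrong in between.
systemctl stop NetworkManager && \
systemctl disable NetworkManager && \
systemctl enable network && \
systemctl start network
```

On CentOS 7 the enable step is redirected to /sbin/chkconfig, since network is a legacy initscript rather than a native systemd unit.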
HTH, Patrick From dpkshetty at gmail.com Mon Dec 29 17:27:01 2014 From: dpkshetty at gmail.com (Deepak Shetty) Date: Mon, 29 Dec 2014 22:57:01 +0530 Subject: [Rdo-list] Query regarding RDO Juno-1 install on CentOS7 In-Reply-To: <54A17497.4070702@laimbock.com> References: <54A16517.60803@laimbock.com> <54A17497.4070702@laimbock.com> Message-ID: Thanks Patrick. Are you aware of any link that can give me some insights on what exactly doesn't work in networking with NM enabled? I'm guessing that if it's some specific stuff I can just avoid using it, maybe! On Dec 29, 2014 9:06 PM, "Patrick Laimbock" wrote: > On 29-12-14 15:39, Deepak Shetty wrote: > >> > You need to disable NetworkManager and enable network service. Before >> you run Packstack you will also need to setup the ifcfg-XXXX network >> interfaces on all nodes and activate them. >> >> Why can't packstack handle this itself if it doesn't support NM? I m >> > > Ask the developers. Patches welcome. > > concerned about the manual steps involved and losing on my n/w >> connections in case i do anything wrong. >> > > Yes that's a risk so make sure to have the steps figured out before doing > anything. Having direct access to a (serial) console comes in handy if > things fall apart. > > Is there any reference on how >> to do this, i couldn't find anything specific on the quickstart page. >> > > I'm not aware of an RDO guide on this subject. The steps on each node are: > - create appropriate ifcfg-XXX files > - stop NetworkManager > - disable NetworkManager > - enable network > - start network > > You probably want to do the last 4 steps in one long command or else you > will loose connectivity after stopping NetworkManager if you are not using > a console. > > HTH, > Patrick > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From patrick at laimbock.com Mon Dec 29 18:42:57 2014 From: patrick at laimbock.com (Patrick Laimbock) Date: Mon, 29 Dec 2014 19:42:57 +0100 Subject: [Rdo-list] Query regarding RDO Juno-1 install on CentOS7 In-Reply-To: References: <54A16517.60803@laimbock.com> <54A17497.4070702@laimbock.com> Message-ID: <54A1A0B1.7090309@laimbock.com> On 29-12-14 18:27, Deepak Shetty wrote: > Thanks patrick. > Are you aware of any link that can give me some insights on what exactly > doesn't work in networking with NM enabled, i m guessing that if it some > specific stuff i can just avoid using it maybe! I'm not aware of any link or particular reason. Maybe file an RFE, ask the developers or dig into the Packstack code and/or Puppet OpenStack modules. https://wiki.openstack.org/wiki/Packstack https://wiki.openstack.org/wiki/Puppet-openstack HTH, Patrick From dpkshetty at gmail.com Tue Dec 30 06:11:01 2014 From: dpkshetty at gmail.com (Deepak Shetty) Date: Tue, 30 Dec 2014 11:41:01 +0530 Subject: [Rdo-list] Query regarding RDO Juno-1 install on CentOS7 In-Reply-To: <54A17497.4070702@laimbock.com> References: <54A16517.60803@laimbock.com> <54A17497.4070702@laimbock.com> Message-ID: On Mon, Dec 29, 2014 at 9:04 PM, Patrick Laimbock wrote: > On 29-12-14 15:39, Deepak Shetty wrote: > >> > You need to disable NetworkManager and enable network service. Before >> you run Packstack you will also need to setup the ifcfg-XXXX network >> interfaces on all nodes and activate them. >> >> Why can't packstack handle this itself if it doesn't support NM? I m >> > > Ask the developers. Patches welcome. > > concerned about the manual steps involved and losing on my n/w >> connections in case i do anything wrong. >> > > Yes that's a risk so make sure to have the steps figured out before doing > anything. Having direct access to a (serial) console comes in handy if > things fall apart. > > Is there any reference on how >> to do this, i couldn't find anything specific on the quickstart page. 
>> > > I'm not aware of an RDO guide on this subject. The steps on each node are: > - create appropriate ifcfg-XXX files > - stop NetworkManager > - disable NetworkManager > - enable network > - start network > > You probably want to do the last 4 steps in one long command or else you > will loose connectivity after stopping NetworkManager if you are not using > a console. My current ifcfg-em1 is : [root at rhsdev1 network-scripts]# cat ifcfg-em1 # Generated by dracut initrd DEVICE="em1" ONBOOT=yes NETBOOT=yes UUID="fd67c34e-9aad-44b7-a980-b5288ad3c442" IPV6INIT=yes BOOTPROTO=dhcp HWADDR="c8:1f:66:c6:d5:fc" TYPE=Ethernet NAME="em1" I did this: # systemctl stop NetworkManager.service ; chkconfig NetworkManager off; systemctl restart network.service ; chkconfig network on Note: Forwarding request to 'systemctl disable NetworkManager.service'. rm '/etc/systemd/system/multi-user.target.wants/NetworkManager.service' rm '/etc/systemd/system/dbus-org.freedesktop.NetworkManager.service' rm '/etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service' Job for network.service failed. See 'systemctl status network.service' and 'journalctl -xn' for details. 
[root at rhsdev1 network-scripts]# systemctl status network.service network.service - LSB: Bring up/down networking Loaded: loaded (/etc/rc.d/init.d/network) Active: failed (Result: exit-code) since Tue 2014-12-30 17:01:00 IST; 30s ago Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com network[24948]: RTNETLINK answers: File exists Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com network[24948]: RTNETLINK answers: File exists Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com network[24948]: RTNETLINK answers: File exists Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com network[24948]: RTNETLINK answers: File exists Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com network[24948]: RTNETLINK answers: File exists Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com network[24948]: RTNETLINK answers: File exists Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com network[24948]: RTNETLINK answers: File exists Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com systemd[1]: network.service: control process exited, code=exited status=1 Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com systemd[1]: Failed to start LSB: Bring up/down networking. Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com systemd[1]: Unit network.service entered failed state. I modified em1 to: [root at rhsdev1 network-scripts]# cat ifcfg-em1 # Generated by dracut initrd DEVICE="em1" ONBOOT=yes NETBOOT=yes UUID="fd67c34e-9aad-44b7-a980-b5288ad3c442" IPV6INIT=yes BOOTPROTO=dhcp #HWADDR="c8:1f:66:c6:d5:fc" NM_CONTROLLED=no TYPE=Ethernet NAME="em1" [root at rhsdev1 network-scripts]# service network restart Restarting network (via systemctl): Job for network.service failed. See 'systemctl status network.service' and 'journalctl -xn' for details. [FAILED] [root at rhsdev1 network-scripts]# systemctl restart network.service Job for network.service failed. See 'systemctl status network.service' and 'journalctl -xn' for details. 
[root at rhsdev1 network-scripts]# systemctl status network.service network.service - LSB: Bring up/down networking Loaded: loaded (/etc/rc.d/init.d/network) Active: failed (Result: exit-code) since Tue 2014-12-30 17:07:34 IST; 6s ago Process: 26318 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=1/FAILURE) Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com network[26318]: RTNETLINK answers: File exists Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com network[26318]: RTNETLINK answers: File exists Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com network[26318]: RTNETLINK answers: File exists Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com network[26318]: RTNETLINK answers: File exists Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com network[26318]: RTNETLINK answers: File exists Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com network[26318]: RTNETLINK answers: File exists Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com network[26318]: RTNETLINK answers: File exists Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com systemd[1]: network.service: control process exited, code=exited status=1 Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com systemd[1]: Failed to start LSB: Bring up/down networking. Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com systemd[1]: Unit network.service entered failed state. *So in short, disabling NM and enabling/restarting network isn't working as my network service is getting into error state*thanx, deepak > > HTH, > Patrick > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dpkshetty at gmail.com Tue Dec 30 06:38:29 2014 From: dpkshetty at gmail.com (Deepak Shetty) Date: Tue, 30 Dec 2014 12:08:29 +0530 Subject: [Rdo-list] Query regarding RDO Juno-1 install on CentOS7 In-Reply-To: References: <54A16517.60803@laimbock.com> <54A17497.4070702@laimbock.com> Message-ID: Adding more details on the fact that I was unable to chkconfig OFF network.service [root at rhsdev1 multi-user.target.wants]# systemctl status network.service network.service - LSB: Bring up/down networking Loaded: loaded (/etc/rc.d/init.d/network) Active: failed (Result: exit-code) since Tue 2014-12-30 17:13:10 IST; 17min ago Dec 30 17:13:08 rhsdev1.lab.eng.blr.redhat.com dhclient[2169]: DHCPDISCOVER on em1 to 255.255.255.255 port 67 interval 11 (xid=0x35011ab0) Dec 30 17:13:08 rhsdev1.lab.eng.blr.redhat.com dhclient[2169]: DHCPREQUEST on em1 to 255.255.255.255 port 67 (xid=0x35011ab0) Dec 30 17:13:08 rhsdev1.lab.eng.blr.redhat.com dhclient[2169]: DHCPOFFER from 10.70.47.254 Dec 30 17:13:08 rhsdev1.lab.eng.blr.redhat.com dhclient[2169]: DHCPACK from 10.70.47.254 (xid=0x35011ab0) Dec 30 17:13:10 rhsdev1.lab.eng.blr.redhat.com dhclient[2169]: bound to 10.70.45.1 -- renewal in 38250 seconds. Dec 30 17:13:10 rhsdev1.lab.eng.blr.redhat.com network[1964]: Determining IP information for em1... done. Dec 30 17:13:10 rhsdev1.lab.eng.blr.redhat.com network[1964]: [ OK ] Dec 30 17:13:10 rhsdev1.lab.eng.blr.redhat.com systemd[1]: network.service: control process exited, code=exited status=1 Dec 30 17:13:10 rhsdev1.lab.eng.blr.redhat.com systemd[1]: Failed to start LSB: Bring up/down networking. Dec 30 17:13:10 rhsdev1.lab.eng.blr.redhat.com systemd[1]: Unit network.service entered failed state. 
[root at rhsdev1 multi-user.target.wants]# chkconfig network on [root at rhsdev1 multi-user.target.wants]# chkconfig network [root at rhsdev1 multi-user.target.wants]# service network status Configured devices: lo br-ex em1 em2 Currently active devices: lo em1 virbr0 [root at rhsdev1 multi-user.target.wants]# systemctl disable network.service network.service is not a native service, redirecting to /sbin/chkconfig. Executing /sbin/chkconfig network off [root at rhsdev1 multi-user.target.wants]# systemctl enable network.service network.service is not a native service, redirecting to /sbin/chkconfig. Executing /sbin/chkconfig network on The unit files have no [Install] section. They are not meant to be enabled using systemctl. Possible reasons for having this kind of units are: 1) A unit may be statically enabled by being symlinked from another unit's .wants/ or .requires/ directory. 2) A unit's purpose may be to act as a helper for some other unit which has a requirement dependency on it. 3) A unit may be started when needed via activation (socket, path, timer, D-Bus, udev, scripted systemctl call, ...). *But the interesting news is that, post system reboot:* I have NM disabled and the network service in a failed state, but networking still works for my server. I am able to ping it and ssh into it; both inbound and outbound networking are working fine. If both NM and the network service are down, what's managing the networking here? Is there some other systemd target/unit file that I need to enable instead of network.service? thanx, deepak On Tue, Dec 30, 2014 at 11:41 AM, Deepak Shetty wrote: > > > On Mon, Dec 29, 2014 at 9:04 PM, Patrick Laimbock > wrote: > >> On 29-12-14 15:39, Deepak Shetty wrote: >> >>> > You need to disable NetworkManager and enable network service. Before >>> you run Packstack you will also need to setup the ifcfg-XXXX network >>> interfaces on all nodes and activate them. >>> >>> Why can't packstack handle this itself if it doesn't support NM?
I m >>> >> >> Ask the developers. Patches welcome. >> >> concerned about the manual steps involved and losing on my n/w >>> connections in case i do anything wrong. >>> >> >> Yes that's a risk so make sure to have the steps figured out before doing >> anything. Having direct access to a (serial) console comes in handy if >> things fall apart. >> >> Is there any reference on how >>> to do this, i couldn't find anything specific on the quickstart page. >>> >> >> I'm not aware of an RDO guide on this subject. The steps on each node are: >> - create appropriate ifcfg-XXX files >> - stop NetworkManager >> - disable NetworkManager >> - enable network >> - start network >> >> You probably want to do the last 4 steps in one long command or else you >> will loose connectivity after stopping NetworkManager if you are not using >> a console. > > > My current ifcfg-em1 is : > > [root at rhsdev1 network-scripts]# cat ifcfg-em1 > # Generated by dracut initrd > DEVICE="em1" > ONBOOT=yes > NETBOOT=yes > UUID="fd67c34e-9aad-44b7-a980-b5288ad3c442" > IPV6INIT=yes > BOOTPROTO=dhcp > HWADDR="c8:1f:66:c6:d5:fc" > TYPE=Ethernet > NAME="em1" > > I did this: > > # systemctl stop NetworkManager.service ; chkconfig NetworkManager off; > systemctl restart network.service ; chkconfig network on > > Note: Forwarding request to 'systemctl disable NetworkManager.service'. > rm '/etc/systemd/system/multi-user.target.wants/NetworkManager.service' > rm '/etc/systemd/system/dbus-org.freedesktop.NetworkManager.service' > rm '/etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service' > Job for network.service failed. See 'systemctl status network.service' and > 'journalctl -xn' for details. 
> > [root at rhsdev1 network-scripts]# systemctl status network.service > network.service - LSB: Bring up/down networking > Loaded: loaded (/etc/rc.d/init.d/network) > Active: failed (Result: exit-code) since Tue 2014-12-30 17:01:00 IST; > 30s ago > > Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com network[24948]: RTNETLINK > answers: File exists > Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com network[24948]: RTNETLINK > answers: File exists > Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com network[24948]: RTNETLINK > answers: File exists > Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com network[24948]: RTNETLINK > answers: File exists > Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com network[24948]: RTNETLINK > answers: File exists > Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com network[24948]: RTNETLINK > answers: File exists > Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com network[24948]: RTNETLINK > answers: File exists > Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com systemd[1]: > network.service: control process exited, code=exited status=1 > Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com systemd[1]: Failed to > start LSB: Bring up/down networking. > Dec 30 17:01:00 rhsdev1.lab.eng.blr.redhat.com systemd[1]: Unit > network.service entered failed state. > > I modified em1 to: > > [root at rhsdev1 network-scripts]# cat ifcfg-em1 > # Generated by dracut initrd > DEVICE="em1" > ONBOOT=yes > NETBOOT=yes > UUID="fd67c34e-9aad-44b7-a980-b5288ad3c442" > IPV6INIT=yes > BOOTPROTO=dhcp > #HWADDR="c8:1f:66:c6:d5:fc" > NM_CONTROLLED=no > TYPE=Ethernet > NAME="em1" > > [root at rhsdev1 network-scripts]# service network restart > Restarting network (via systemctl): Job for network.service failed. See > 'systemctl status network.service' and 'journalctl -xn' for details. > [FAILED] > [root at rhsdev1 network-scripts]# systemctl restart network.service > Job for network.service failed. See 'systemctl status network.service' and > 'journalctl -xn' for details. 
> [root at rhsdev1 network-scripts]# systemctl status network.service > network.service - LSB: Bring up/down networking > Loaded: loaded (/etc/rc.d/init.d/network) > Active: failed (Result: exit-code) since Tue 2014-12-30 17:07:34 IST; > 6s ago > Process: 26318 ExecStart=/etc/rc.d/init.d/network start (code=exited, > status=1/FAILURE) > > Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com network[26318]: RTNETLINK > answers: File exists > Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com network[26318]: RTNETLINK > answers: File exists > Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com network[26318]: RTNETLINK > answers: File exists > Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com network[26318]: RTNETLINK > answers: File exists > Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com network[26318]: RTNETLINK > answers: File exists > Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com network[26318]: RTNETLINK > answers: File exists > Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com network[26318]: RTNETLINK > answers: File exists > Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com systemd[1]: > network.service: control process exited, code=exited status=1 > Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com systemd[1]: Failed to > start LSB: Bring up/down networking. > Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com systemd[1]: Unit > network.service entered failed state. > > > > *So in short, disabling NM and enabling/restarting network isn't working, > as my network service is getting into an error state.* thanx, > deepak > > > > >> >> HTH, >> Patrick >> >> _______________________________________________ >> Rdo-list mailing list >> Rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dpkshetty at gmail.com Tue Dec 30 06:39:37 2014 From: dpkshetty at gmail.com (Deepak Shetty) Date: Tue, 30 Dec 2014 12:09:37 +0530 Subject: [Rdo-list] Query regarding RDO Juno-1 install on CentOS7 In-Reply-To: References: <54A16517.60803@laimbock.com> <54A17497.4070702@laimbock.com> Message-ID: On Tue, Dec 30, 2014 at 12:08 PM, Deepak Shetty wrote: > Adding more details on the fact that I was unable to chkconfig OFF > network.service > Yuck! I meant chkconfig ON network.service. The rest of the info is correct. thanx, deepak -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpkshetty at gmail.com Tue Dec 30 08:39:02 2014 From: dpkshetty at gmail.com (Deepak Shetty) Date: Tue, 30 Dec 2014 14:09:02 +0530 Subject: [Rdo-list] Query regarding RDO Juno-1 install on CentOS7 In-Reply-To: <54A16517.60803@laimbock.com> References: <54A16517.60803@laimbock.com> Message-ID: On Mon, Dec 29, 2014 at 7:58 PM, Patrick Laimbock wrote: > On 29-12-14 14:06, Deepak Shetty wrote: > >> Hi, >> I was able to install 3-node RDO juno-1 (rdo-release-juno-1.noarch) >> over CentOS7, but at the end of install it gave me this ... >> Questions prefixed with Q: inline below: >> >> Additional information: >> * Time synchronization installation was skipped. Please note that >> unsynchronized time on server instances might be problem for some >> OpenStack components. >> >> *Q: Do I need to use ntpd to ensure all my systems are in sync, what's >> the recommended way here?* >> > > All your nodes need to have the correct time. You can specify an NTP > server in the Packstack answer file or as a CLI option and then Packstack > will configure your nodes to use that NTP server. If you don't specify an > NTP server then Packstack doesn't handle NTP so you will have to do it > yourself. Either way, make sure that all nodes always have the correct time.
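The answer-file route Patrick describes can be sketched as below. The file name and NTP pool hosts here are only illustrative examples, not values from this thread; also note that on CentOS 7 the time daemon Packstack ends up configuring is chronyd, so that is the service to check rather than ntpd:

```shell
# Sketch: set NTP servers in an existing Packstack answer file.
# File name and pool hosts are placeholders.
ANSWERS=packstack-answers.txt
printf 'CONFIG_NTP_SERVERS=\n' > "$ANSWERS"   # stand-in for the generated file

# Fill in a comma-separated list of NTP servers:
sed -i 's|^CONFIG_NTP_SERVERS=.*|CONFIG_NTP_SERVERS=0.pool.ntp.org,1.pool.ntp.org|' "$ANSWERS"
grep '^CONFIG_NTP_SERVERS=' "$ANSWERS"

# Then re-run:  packstack --answer-file "$ANSWERS"
# And verify on each node with:  systemctl status chronyd
```

The same value can also be passed on the packstack command line instead of editing the answer file (see packstack --help for the NTP-related option).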
> FWIW, i did the below and re-ran my packstack: CONFIG_NTP_SERVERS= in my answers file # packstack --answer-file ./packstack-answers.txt It showed ... ... ... Installing time synchronization via NTP [ DONE ] ... .. Now checking on my nodes, I don't still see NTP service running... [root at rhsdev4 ~]# ps aux| grep ntp root 20433 0.0 0.0 112640 960 pts/0 S+ 13:58 0:00 grep --color=auto ntp [root at rhsdev4 ~]# systemctl status ntpd.service ntpd.service Loaded: not-found (Reason: No such file or directory) Active: inactive (dead) [root at rhsdev4 ~]# systemctl status ntpdate.service ntpdate.service - Set time via NTP Loaded: loaded (/usr/lib/systemd/system/ntpdate.service; disabled) Active: inactive (dead) *Now what did i do wrong ? :)* thanx, deepak -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpkshetty at gmail.com Tue Dec 30 08:41:52 2014 From: dpkshetty at gmail.com (Deepak Shetty) Date: Tue, 30 Dec 2014 14:11:52 +0530 Subject: [Rdo-list] Query regarding RDO Juno-1 install on CentOS7 In-Reply-To: References: <54A16517.60803@laimbock.com> Message-ID: Nevermind this one... I figured that NTP is replaced by chrony.service, which is running now! thanx, deepak On Tue, Dec 30, 2014 at 2:09 PM, Deepak Shetty wrote: > > > On Mon, Dec 29, 2014 at 7:58 PM, Patrick Laimbock > wrote: > >> On 29-12-14 14:06, Deepak Shetty wrote: >> >>> Hi, >>> I was able to install 3-node RDO juno-1 (rdo-release-juno-1.noarch) >>> over CentOS7, but at the end of install it gave me this ... >>> Questions prefixed with Q: inline below: >>> >>> Additional information: >>> * Time synchronization installation was skipped. Please note that >>> unsynchronized time on server instances might be problem for some >>> OpenStack components. >>> >>> *Q: Do i need to sue ntpd to ensure all my systems are in sync, whats >>> the recommended way here ?* >>> >> >> All your nodes need to have the correct time. 
You can specify an NTP >> server in the Packstack answer file or as a CLI option and then Packstack >> will configure your nodes to use that NTP server. If you don't specify an >> NTP server then Packstack doesn't handle NTP so you will have to do it >> yourself. Either way, make sure that all nodes always have the correct time. >> > > > FWIW, i did the below and re-ran my packstack: > > CONFIG_NTP_SERVERS= in my answers file > > # packstack --answer-file ./packstack-answers.txt > It showed ... > ... > ... > > Installing time synchronization via NTP [ DONE ] > ... > .. > > Now checking on my nodes, I don't still see NTP service running... > > [root at rhsdev4 ~]# ps aux| grep ntp > root 20433 0.0 0.0 112640 960 pts/0 S+ 13:58 0:00 grep > --color=auto ntp > > [root at rhsdev4 ~]# systemctl status ntpd.service > ntpd.service > Loaded: not-found (Reason: No such file or directory) > Active: inactive (dead) > > [root at rhsdev4 ~]# systemctl status ntpdate.service > ntpdate.service - Set time via NTP > Loaded: loaded (/usr/lib/systemd/system/ntpdate.service; disabled) > Active: inactive (dead) > > > > *Now what did i do wrong ? :)* > thanx, > deepak > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick at laimbock.com Tue Dec 30 13:58:56 2014 From: patrick at laimbock.com (Patrick Laimbock) Date: Tue, 30 Dec 2014 14:58:56 +0100 Subject: [Rdo-list] Query regarding RDO Juno-1 install on CentOS7 In-Reply-To: References: <54A16517.60803@laimbock.com> <54A17497.4070702@laimbock.com> Message-ID: <54A2AFA0.9070803@laimbock.com> On 30-12-14 07:11, Deepak Shetty wrote: > I modified em1 to: > > [root at rhsdev1 network-scripts]# cat ifcfg-em1 > # Generated by dracut initrd > DEVICE="em1" > ONBOOT=yes > NETBOOT=yes > UUID="fd67c34e-9aad-44b7-a980-b5288ad3c442" > IPV6INIT=yes > BOOTPROTO=dhcp > #HWADDR="c8:1f:66:c6:d5:fc" Try enabling HWADDR= and set it to the proper MAC address. 
> NM_CONTROLLED=no > TYPE=Ethernet > NAME="em1" Example that works for me: DEVICE=eth0 HWADDR=68:11:34:19:63:a3 TYPE=Ethernet BOOTPROTO=dhcp NAME=eth0 ONBOOT=yes NM_CONTROLLED=no > [root at rhsdev1 network-scripts]# service network restart > Restarting network (via systemctl): Job for network.service failed. See > 'systemctl status network.service' and 'journalctl -xn' for details. [snip] > Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com > network[26318]: RTNETLINK > answers: File exists > Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com > systemd[1]: network.service: > control process exited, code=exited status=1 > Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com > systemd[1]: Failed to start LSB: > Bring up/down networking. > Dec 30 17:07:34 rhsdev1.lab.eng.blr.redhat.com > systemd[1]: Unit network.service > entered failed state. > > *So in short, disabling NM and enabling/restarting network isn't working > as my network service is getting into error state That can be caused by multiple things. Make sure the MAC address is correct and make sure there are no routes lingering around before starting the network service (they should be deleted before starting the network service). Also https://access.redhat.com/solutions/26543 You can check the MAC address with: # ip address show You can check the routes with: # ip route show And delete where appropriate with: # ip route del .... Maybe it takes a while before things settle down after NetworkManager is stopped. You could try to insert a "sleep 1" in your command before starting the network service and see if that makes any difference. 
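Patrick's suggestions can be pulled together into a sketch like the one below. The device name, MAC address, and target path are placeholders for your own values, and the commented switchover commands need root and are best run from a console rather than over SSH:

```shell
# Write a minimal non-NetworkManager ifcfg file. Values are placeholders;
# the real location is /etc/sysconfig/network-scripts/ifcfg-em1.
IFCFG=./ifcfg-em1
cat > "$IFCFG" <<'EOF'
DEVICE=em1
HWADDR=c8:1f:66:c6:d5:fc
TYPE=Ethernet
BOOTPROTO=dhcp
NAME=em1
ONBOOT=yes
NM_CONTROLLED=no
EOF
grep NM_CONTROLLED "$IFCFG"

# Then, from a console as root, hand the interface over in one go,
# flushing any lingering routes first as suggested above:
#   systemctl stop NetworkManager; systemctl disable NetworkManager; \
#   sleep 1; ip route flush dev em1; \
#   systemctl enable network; systemctl start network
```

Keeping the stop/disable/enable/start sequence in a single command line matters because connectivity may drop the moment NetworkManager stops.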
HTH, Patrick From pmyers at redhat.com Wed Dec 31 13:52:06 2014 From: pmyers at redhat.com (Perry Myers) Date: Wed, 31 Dec 2014 08:52:06 -0500 Subject: [Rdo-list] Query regarding RDO Juno-1 install on CentOS7 In-Reply-To: References: <54A16517.60803@laimbock.com> Message-ID: <54A3FF86.5090208@redhat.com> On 12/29/2014 09:39 AM, Deepak Shetty wrote: > > On Dec 29, 2014 8:00 PM, "Patrick Laimbock" > wrote: >> >> On 29-12-14 14:06, Deepak Shetty wrote: >>> >>> Hi, >>> I was able to install 3-node RDO juno-1 (rdo-release-juno-1.noarch) >>> over CentOS7, but at the end of install it gave me this ... >>> Questions prefixed with Q: inline below: >>> >>> Additional information: >>> * Time synchronization installation was skipped. Please note that >>> unsynchronized time on server instances might be problem for some >>> OpenStack components. >>> >>> *Q: Do i need to sue ntpd to ensure all my systems are in sync, whats >>> >>> the recommended way here ?* >> >> >> All your nodes need to have the correct time. You can specify an NTP > server in the Packstack answer file or as a CLI option and then > Packstack will configure your nodes to use that NTP server. If you don't > specify an NTP server then Packstack doesn't handle NTP so you will have > to do it yourself. Either way, make sure that all nodes always have the > correct time. > > Thanks, will try this. > >> >>> * Warning: NetworkManager is active on , and . OpenStack >>> networking currently does not work on systems that have the Network >>> Manager service enabled. >>> >>> *Q: Do i need to disable NetworkManager.service on all or is it safe to >>> >>> ignore this? What exactly doesn't work with NetworkManager ? >> >> >> You need to disable NetworkManager and enable network service. Before >> you run Packstack you will also need to setup the ifcfg-XXXX network >> interfaces on all nodes and activate them. > > Why can't packstack handle this itself if it doesn't support NM? 
I'm > concerned about the manual steps involved and losing my n/w > connections in case I do anything wrong. Is there any reference on how > to do this? I couldn't find anything specific on the quickstart page. Packstack uses SSH to remotely configure hosts. Trying to disable NM and enable standard ifcfg networking while using an SSH session is very tricky and often results in hosts that have completely lost networking. So, we don't attempt to do this. Instead, we recommend using a kickstart to deploy your machines that disables NM at host install time. As for why NM needs to be disabled, there are a bunch of open bugs against NM targeted at RHEL 7.1 (or maybe later) where NM and Neutron networking conflict with each other. I think for Nova networking NM might be fine, but for Neutron there are known issues. Livnat might have a list of NM bugs handy that would need to be resolved before we can begin looking at testing NM + Neutron together again to determine if there are more issues lying in wait, or if we can green-light that combination. Perry From dpkshetty at gmail.com Wed Dec 31 14:34:58 2014 From: dpkshetty at gmail.com (Deepak Shetty) Date: Wed, 31 Dec 2014 20:04:58 +0530 Subject: [Rdo-list] Query regarding RDO Juno-1 install on CentOS7 In-Reply-To: <54A3FF86.5090208@redhat.com> References: <54A16517.60803@laimbock.com> <54A3FF86.5090208@redhat.com> Message-ID: In which case the RDO quickstart or some other place should have good info on how to disable NM and enable ifcfg networking, for starters who look at the quickstart or RDO FAQs to get hints. thanx, deepak On Wed, Dec 31, 2014 at 7:22 PM, Perry Myers wrote: > On 12/29/2014 09:39 AM, Deepak Shetty wrote: > > > > On Dec 29, 2014 8:00 PM, "Patrick Laimbock" > > wrote: > >> > >> On 29-12-14 14:06, Deepak Shetty wrote: > >>> > >>> Hi, > >>> I was able to install 3-node RDO juno-1 (rdo-release-juno-1.noarch) > >>> over CentOS7, but at the end of install it gave me this ...
> >>> Questions prefixed with Q: inline below: > >>> > >>> Additional information: > >>> * Time synchronization installation was skipped. Please note that > >>> unsynchronized time on server instances might be problem for some > >>> OpenStack components. > >>> > >>> *Q: Do i need to sue ntpd to ensure all my systems are in sync, whats > >>> > >>> the recommended way here ?* > >> > >> > >> All your nodes need to have the correct time. You can specify an NTP > > server in the Packstack answer file or as a CLI option and then > > Packstack will configure your nodes to use that NTP server. If you don't > > specify an NTP server then Packstack doesn't handle NTP so you will have > > to do it yourself. Either way, make sure that all nodes always have the > > correct time. > > > > Thanks, will try this. > > > >> > >>> * Warning: NetworkManager is active on , and . > OpenStack > >>> networking currently does not work on systems that have the Network > >>> Manager service enabled. > >>> > >>> *Q: Do i need to disable NetworkManager.service on all or is it safe to > >>> > >>> ignore this? What exactly doesn't work with NetworkManager ? > >> > >> > >> You need to disable NetworkManager and enable network service. Before > >> you run Packstack you will also need to setup the ifcfg-XXXX network > >> interfaces on all nodes and activate them. > > > > Why can't packstack handle this itself if it doesn't support NM? I m > > concerned about the manual steps involved and losing on my n/w > > connections in case i do anything wrong. Is there any reference on how > > to do this, i couldn't find anything specific on the quickstart page. > > Packstack uses SSH to remotely configure hosts. Trying to disable NM and > enable standard ifcfg networking while using an ssh session is very > tricky and often results in hosts that have completely lost networking. > So, we don't attempt to do this. 
Instead, we recommend using a kickstart > to deploy your machines that disables NM at host install time. > > As for why NM needs to be disabled, there are a bunch of open bugs > against NM targeted at RHEL 7.1 (or maybe later) where NM and Neutron > networking conflict with each other. I think for Nova networking NM > might be fine, but for Neutron there are known issues. > > Livnat might have a list of NM bugs handy that would need to be resolved > before we can begin looking at testing NM + Neutron together again to > determine if there are more issues lying in wait, or if we can > green-light that combination. > > Perry > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmyers at redhat.com Wed Dec 31 23:04:06 2014 From: pmyers at redhat.com (Perry Myers) Date: Wed, 31 Dec 2014 18:04:06 -0500 Subject: [Rdo-list] Query regarding RDO Juno-1 install on CentOS7 In-Reply-To: References: <54A16517.60803@laimbock.com> <54A3FF86.5090208@redhat.com> Message-ID: <54A480E6.2000901@redhat.com> On 12/31/2014 09:34 AM, Deepak Shetty wrote: > In which case the RDO quickstart or some other place should have good info > on how to disable NM and enable ifcfg networking, for starters who look at > the quickstart or RDO FAQs to get hints. A google search for 'rdo disable network manager' turned up this: https://openstack.redhat.com/Fedora_20_with_existing_network It's a bit more than you would need, but in there are simple steps for disabling network manager. But I agree that having more accessible HOWTOs for this specific area would make sense. This wiki page is Fedora-specific (though the same steps should work on CentOS 7, I believe) and doesn't cover initial installation, only how to disable NM after installation. I believe, though, that removing network manager in a kickstart-based installation of CentOS or Fedora is as simple as adding NetworkManager to the list of blacklisted packages. (i.e.
under %packages add -NetworkManager) [1] Perry [1] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/s1-kickstart2-packageselection.html
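Perry's kickstart suggestion boils down to a fragment along these lines. This is only a sketch, not tested against a specific CentOS 7 tree; the services line is an optional extra step (not mentioned above) that also enables the classic network service at install time:

```
# Kickstart sketch: install without NetworkManager, enable the
# classic network service instead.
services --disabled=NetworkManager --enabled=network

%packages
@core
-NetworkManager
%end
```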