From ayoung at redhat.com Thu Jan 3 03:09:06 2013
From: ayoung at redhat.com (Adam Young)
Date: Wed, 02 Jan 2013 22:09:06 -0500
Subject: [rhos-list] Keystone user authentication with existing LDAP
In-Reply-To: <1354807874.bac49b876d5dfc9cd169c22ef5178ca7@mail.in.com>
References: <1354807874.bac49b876d5dfc9cd169c22ef5178ca7@mail.in.com>
Message-ID: <50E4F652.8090002@redhat.com>

On 12/06/2012 10:31 AM, Kumar Vaibhav wrote:
>
> ---------- Original message ----------
> From: "Adam Young" <ayoung at redhat.com>
> Date: 6 Dec 12 20:05:14
> Subject: Re: [rhos-list] Keystone user authentication with existing LDAP
> To: rhos-list at redhat.com
>
> On 12/06/2012 07:26 AM, Kumar Vaibhav wrote:
>> Hi,
>>
>> I want to authenticate my users with an existing OpenLDAP server. It
>> already has the usernames and passwords for the users.
>> I use this OpenLDAP server for authenticating Linux servers on
>> the network.
>>
>> Is it possible to keep only user information in LDAP?
>
> Not yet, sorry.
>
>> Since my LDAP server does not have the Role, Group, and other Tree DNs
>> available, I want these to be stored in the database only.
>
> Can you not modify the LDAP schema? These are trivial to maintain in LDAP.
>
> Or, are you not going to be modifying the User list?

Yes, I don't want to modify the user list or their attributes. This
LDAP server is managed by another system.

> One thing you can try is to sync the user list over to the SQL
> Database without passwords, run Keystone in Apache and use
> mod_auth_ldap to log in. It is an untested configuration, but it
> should work.

It is easy for me to sync the user names and passwords from the LDAP
server to the MySQL DB. But the passwords I have in LDAP are MD5-encrypted,
and Openstack-Keystone uses a different encryption algorithm.
Is it possible to use MD5 as the encryption method for keystone?

As I recall, LDAP defaults to Salted SHA1 for passwords, and a simple bind assumes that format.
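A small stdlib-only sketch (the helper names here are mine, not Keystone's or OpenLDAP's) of why the formats differ: an RFC 2307 {SSHA} value is base64(SHA-1(password + salt) + salt), so anything that knows the scheme can verify a password against it, while an {MD5}-style value uses a different digest and will never match an SSHA check.

```python
import base64
import hashlib
import os

def make_ssha(password, salt=None):
    """Build an RFC 2307 {SSHA} value: base64(SHA1(password + salt) + salt)."""
    if salt is None:
        salt = os.urandom(4)
    digest = hashlib.sha1(password.encode("utf-8") + salt).digest()
    return "{SSHA}" + base64.b64encode(digest + salt).decode("ascii")

def check_ssha(password, ssha_value):
    """Verify a password against a stored {SSHA} value."""
    raw = base64.b64decode(ssha_value[len("{SSHA}"):])
    digest, salt = raw[:20], raw[20:]  # SHA-1 digests are 20 bytes
    return hashlib.sha1(password.encode("utf-8") + salt).digest() == digest

stored = make_ssha("secret")
print(check_ssha("secret", stored))  # True
print(check_ssha("wrong", stored))   # False
```

A password stored under an MD5 scheme fails this check by construction, which is why the question of whether the directory really stores MD5 matters.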
The code to do this is in keystone/common/utils:

    def ldap_hash_password(password):
        """Hash a password. Hard."""
        password_utf8 = trunc_password(password).encode('utf-8')
        h = passlib.hash.ldap_salted_sha1.encrypt(password_utf8)
        return h

Authenticating against that should work fine. Are you sure you are in MD5 format?

>> I should have used only the DB also, but the problem is my OpenLDAP
>> server has passwords encrypted in MD5.
>>
>> Regards,
>> Vaibhav
>>
>> _______________________________________________
>> rhos-list mailing list
>> rhos-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rhos-list

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ayoung at redhat.com Mon Jan 14 20:35:04 2013
From: ayoung at redhat.com (Adam Young)
Date: Mon, 14 Jan 2013 15:35:04 -0500
Subject: [rhos-list] Openstack Keystone Status Jan 14, 2013
Message-ID: <50F46BF8.6030007@redhat.com>

Current status for Red Hat Open Stack Keystone as of Jan 14, 2013, maintained here: http://openstack.etherpad.corp.redhat.com/keystone

Keystone Upstream Core Devs:

Joe Heck (will be stepping down as PTL)
Dolph Matthews poised to take PTL
Henry Nash (IBM)
Guang Yee (HP)
Adam Young

Things are looking to move faster with 2 new core devs. They have both been active in code reviews.

Not Core but Active:
David Chadwick (Univ. of Kent)
Kristy Siu (Univ. of Kent)

Brad Topol and K. Sahdev from IBM are going to start on LDAP work, to include the backlog item of supporting LDAP in Devstack.

Current Development: G-2 interim release out last week.

* Trusts (ayoung) have been posted as a Work In Progress.
Won't be in G-2
* https://review.openstack.org/#/c/18973/
* http://wiki.openstack.org/Keystone/Trusts
* https://blueprints.launchpad.net/keystone/+spec/trusts
* https://bugzilla.redhat.com/show_bug.cgi?id=894925

* Defining Project membership to mean role assignment:
* Discovered as an issue with the V3 API
* https://blueprints.launchpad.net/keystone/+spec/replace-tenant-user-membership
* Trusts dependent on implementing

* Scoping a token to a Domain
* https://blueprints.launchpad.net/openstack/?searchtext=domain-scoping
* https://review.openstack.org/#/c/18770/
* This needs to be followed with "Scoping a token to an Endpoint"

* Discussion about whether to allow a token scoped to multiple projects
* My view: should be allowed, but not the norm, and used only for use cases involving transferring resources between projects.
* Would change auth_token behaviour if allowed.

* Test Keystone against Live SQL posted for review
* https://review.openstack.org/#/c/18519/
* This is only for SQL upgrade tests
* Going to require additional work for the real unit tests due to how the DB schema is managed

* Enhance wsgi to listen on an ipv6 address
* https://review.openstack.org/#/c/19400/

* Better SSL support
* https://review.openstack.org/#/c/19562/

* Limit the size of HTTP requests
* https://review.openstack.org/#/c/19567/1

* Stable: Render content-type appropriate 404 (bug 1089987)
* Needs stable reviewers
* https://review.openstack.org/#/c/18049/

Some discussion about doing things via User names and Project names. All have identified that it would be preferable, but we need to make sure names are URL ready.
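The "URL ready" concern can be sketched with the Python standard library (the example name here is hypothetical): a name that is perfectly legal in the identity backend may still need percent-encoding before it can be used as a single URL path segment.

```python
from urllib.parse import quote, unquote

# A hypothetical project name that is valid in a backend but unsafe in a URL.
name = "dev team/EMEA"

# Percent-encode everything, including spaces and '/', so the name can be
# embedded as one path segment (e.g. /v3/projects/<name>).
encoded = quote(name, safe="")
print(encoded)  # dev%20team%2FEMEA

# Decoding round-trips losslessly.
assert unquote(encoded) == name
```

The round-trip property is what makes name-based lookups workable; names that collide after encoding, or that decode ambiguously, are the cases that need rules.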
Keystone upstream team meeting (follows immediately after RH OS Team meeting)
* Weekly - Tuesdays at 1800 UTC for ~45 minutes
* IRC channel: #openstack-meeting
* Chair (to contact for more information): Joseph Heck
* Agenda: http://wiki.openstack.org/Meetings/KeystoneMeeting

Red Hat Open Stack status

Responded to Call for Papers with a FreeIPA/Open Stack integration proposal
Summit talk: http://etherpad.corp.redhat.com/IdMOpenStack

RH Members:
* Adam Young https://home.corp.redhat.com/user/ayoung
* Alan Pevec https://home.corp.redhat.com/user/apevec

Potential Members:
* Kurt Seifried https://home.corp.redhat.com/user/kseifrie
* Russell Bryant (Security Response) https://home.corp.redhat.com/user/rbryant
* QA?
* IdM team member?

Recruiting Status:
* Planning on attending the Job Fairs at WPI and RPI
* Discussed hiring in Brno with the assistance of Dmitri's team

Fedora Status (package versions, dependencies, issues, etc.)
* Rawhide has Grizzly-2: openstack-keystone-2013.1-0.2.g2.fc19
* el6-grizzly side repo: http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/

stable/folsom update 1 (no change from Jan 8):
* F18 https://admin.fedoraproject.org/updates/openstack-keystone-2012.2.1-1.fc18
* EPEL https://admin.fedoraproject.org/updates/openstack-keystone-2012.2.1-1.el6
* RHOS https://errata.devel.redhat.com/advisory/14265

RH QA Status

Backlog:
devstack should set up Keystone with HTTPD

Important Links

First - launchpad - all the open source contributions basically revolve around a launchpad ID.
* launchpad: https://launchpad.net
* the keystone project: https://launchpad.net/keystone
* the blueprints (planned feature requests for keystone): https://blueprints.launchpad.net/keystone
* Overview of how to get involved and many of these tools
* general to any openstack project: http://wiki.openstack.org/HowToContribute
* Code reviews using reviewboard (authenticated with OAuth through Launchpad)
* code reviews going into keystone: https://review.openstack.org/#/q/status:open+keystone,n,z
* code reviews for the V3 keystone (openstack specific) API: https://review.openstack.org/#/q/status:open+identity,n,z
* Source Code
* keystone: https://github.com/openstack/keystone
* the python client for keystone: https://github.com/openstack/python-keystoneclient
* Documentation
* developer documentation (generated from keystone source code): http://docs.openstack.org/developer/keystone/
* holistic documentation for openstack (keystone and more): http://docs.openstack.org
* running openstack (keystone and more) on a single machine
* (used in OpenStack's CI efforts and for development/test)
* http://devstack.org

I mentioned that Keystone's V3 API is focused on providing services to other openstack components. The relevant API for writing plugins (Python classes) is subclassing one of the drivers, such as "identity" - https://github.com/openstack/keystone/blob/master/keystone/identity/core.py#L63.

The upcoming conversations around the design and implementation of Federation are happening actively on the openstack-dev mailing list.
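The driver-subclassing pattern mentioned above can be sketched as follows; the base class here is a stand-in for illustration only, not Keystone's actual keystone.identity.core.Driver interface:

```python
# Stand-in for the abstract driver interface Keystone dispatches to; the real
# base class lives in keystone/identity/core.py and has many more methods.
class Driver:
    def authenticate(self, user_id, password):
        raise NotImplementedError

# A toy backend: subclass the driver and implement its methods. Calling code
# then works against any backend (SQL, LDAP, ...) through the same interface.
class StaticIdentity(Driver):
    def __init__(self, users):
        self._users = users

    def authenticate(self, user_id, password):
        return self._users.get(user_id) == password

backend = StaticIdentity({"alice": "s3cret"})
print(backend.authenticate("alice", "s3cret"))  # True
print(backend.authenticate("alice", "nope"))    # False
```

The real driver is configured by name in keystone.conf, which is what makes the SQL and LDAP identity backends interchangeable.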
For a reasonable web interface to view and search previous messages and conversations around this:
* http://markmail.org/search/?q=openstack-dev%20keystone
* more specific to federation discussions: http://markmail.org/search/?q=openstack-dev+keystone+federation

Lists can be subscribed to at http://lists.openstack.org/cgi-bin/mailman/listinfo

The major actors in Keystone today are all involved on this mailing list and keep in touch weekly during the IRC meetings.

The Keystone IRC meetings are held weekly - Tuesdays at 1800 UTC. We keep an agenda and previous discussion minutes available on the OpenStack wiki at http://wiki.openstack.org/Meetings/KeystoneMeeting

Older Items

F17 CVE-2012-5483 https://admin.fedoraproject.org/updates/openstack-keystone-2012.1.3-3.fc17

* Significant refactoring effort that needs to finish prior to trust work
* https://review.openstack.org/#/c/17782/
* Just merged; took a lot of code review back and forth

* Ran the test coverage tool to identify areas that are untested
* http://admiyo.fedorapeople.org/openstack/covhtml/

* V3 API
* IdM as service catalog entries

* Attribute Mapping (Kristy Siu, Kent.ac.uk) (not much happened here over the holidays)
* https://review.openstack.org/#/c/18280/1

Tunables for QA:
* Databases: SQLite, MySQL, PostgreSQL
* Identity: can also use LDAP and PAM
* Memcached or KVS backends should not be recommended for deployment or supported
* Token Type
* *UUID*
* PKI
* Need to test multiple servers with a load balancer in front
* Web Server: Eventlet or HTTPD
* With HTTPD can use remote authentication:
* Kerberos,
* Basic Auth, and
* X509 client cert should all be tested.
* Groups (henrynash)
* https://blueprints.launchpad.net/openstack/?searchtext=user-groups
* Just merged into repo:
* https://review.openstack.org/#/c/18097/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ayoung at redhat.com Tue Jan 15 16:09:46 2013
From: ayoung at redhat.com (Adam Young)
Date: Tue, 15 Jan 2013 11:09:46 -0500
Subject: [rhos-list] Openstack Keystone Status Jan 14, 2013
In-Reply-To: <50F46BF8.6030007@redhat.com>
References: <50F46BF8.6030007@redhat.com>
Message-ID: <50F57F4A.2050205@redhat.com>

Apologies for the internal URLs in this, as it is a cut and paste of a page I am using to try to link all of our resources regarding Keystone together.

The one URL that might cause some interest is the RH IdM and Open Stack integration paper. This is very much a brainstorming document for a talk that has been proposed for the Red Hat Summit. Treat it as a teaser for the talk: if you want to know more, sign up for the RH Summit and hope that the talk gets selected.

I'll try to provide a more concise summary in the future. The short of it is that we are in the middle of a development push, and a lot of things are in flux. The driving goals are to make Keystone as solid as possible, and to provide an Identity Management solution in Open Stack that ties in with the rest of the organizations deploying Open Stack. The main themes are better support for: cryptography, LDAP, grouping of users, and delegation of authority.

On 01/14/2013 03:35 PM, Adam Young wrote:
> Current status for Red Hat Open Stack Keystone as of Jan 14, 2013, maintained here: http://openstack.etherpad.corp.redhat.com/keystone
>
> Keystone Upstream Core Devs:
>
> Joe Heck (will be stepping down as PTL)
> Dolph Matthews poised to take PTL
> Henry Nash (IBM)
> Guang Yee (HP)
> Adam Young
>
> Things are looking to move faster with 2 new core devs. They have both been active in code reviews.
>
> Not Core but Active:
> David Chadwick (Univ. of Kent)
> Kristy Siu (Univ. of Kent)
>
> Brad Topol and K. Sahdev from IBM are going to start on LDAP work, to include the backlog item of supporting LDAP in Devstack.
>
> Current Development: G-2 interim release out last week.
>
> * Trusts (ayoung) have been posted as a Work In Progress. Won't be in G-2
> * https://review.openstack.org/#/c/18973/
> * http://wiki.openstack.org/Keystone/Trusts
> * https://blueprints.launchpad.net/keystone/+spec/trusts
> * https://bugzilla.redhat.com/show_bug.cgi?id=894925
>
> * Defining Project membership to mean role assignment:
> * Discovered as an issue with the V3 API
> * https://blueprints.launchpad.net/keystone/+spec/replace-tenant-user-membership
> * Trusts dependent on implementing
>
> * Scoping a token to a Domain
> * https://blueprints.launchpad.net/openstack/?searchtext=domain-scoping
> * https://review.openstack.org/#/c/18770/
> * This needs to be followed with "Scoping a token to an Endpoint"
>
> * Discussion about whether to allow a token scoped to multiple projects
> * My view: should be allowed, but not the norm, and used only for use cases involving transferring resources between projects.
> * Would change auth_token behaviour if allowed.
>
> * Test Keystone against Live SQL posted for review
> * https://review.openstack.org/#/c/18519/
> * This is only for SQL upgrade tests
> * Going to require additional work for the real unit tests due to how the DB schema is managed
>
> * Enhance wsgi to listen on an ipv6 address
> * https://review.openstack.org/#/c/19400/
>
> * Better SSL support
> * https://review.openstack.org/#/c/19562/
>
> * Limit the size of HTTP requests
> * https://review.openstack.org/#/c/19567/1
>
> * Stable: Render content-type appropriate 404 (bug 1089987)
> * Needs stable reviewers
> * https://review.openstack.org/#/c/18049/
>
> Some discussion about doing things via User names and Project names. All have identified that it would be preferable, but we need to make sure names are URL ready.
>
> Keystone upstream team meeting (follows immediately after RH OS Team meeting)
> * Weekly - Tuesdays at 1800 UTC for ~45 minutes
> * IRC channel: #openstack-meeting
> * Chair (to contact for more information): Joseph Heck
> * Agenda: http://wiki.openstack.org/Meetings/KeystoneMeeting
>
> Red Hat Open Stack status
>
> Responded to Call for Papers with a FreeIPA/Open Stack integration proposal
> Summit talk: http://etherpad.corp.redhat.com/IdMOpenStack
>
> RH Members:
> * Adam Young https://home.corp.redhat.com/user/ayoung
> * Alan Pevec https://home.corp.redhat.com/user/apevec
>
> Potential Members:
> * Kurt Seifried https://home.corp.redhat.com/user/kseifrie
> * Russell Bryant (Security Response) https://home.corp.redhat.com/user/rbryant
> * QA?
> * IdM team member?
>
> Recruiting Status:
> * Planning on attending the Job Fairs at WPI and RPI
> * Discussed hiring in Brno with the assistance of Dmitri's team
>
> Fedora Status (package versions, dependencies, issues, etc.)
> * Rawhide has Grizzly-2: openstack-keystone-2013.1-0.2.g2.fc19
> * el6-grizzly side repo: http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/
>
> stable/folsom update 1 (no change from Jan 8):
> * F18 https://admin.fedoraproject.org/updates/openstack-keystone-2012.2.1-1.fc18
> * EPEL https://admin.fedoraproject.org/updates/openstack-keystone-2012.2.1-1.el6
> * RHOS https://errata.devel.redhat.com/advisory/14265
>
> RH QA Status
>
> Backlog:
> devstack should set up Keystone with HTTPD
>
> Important Links
>
> First - launchpad - all the open source contributions basically revolve around a launchpad ID.
> * launchpad: https://launchpad.net
> * the keystone project: https://launchpad.net/keystone
> * the blueprints (planned feature requests for keystone): https://blueprints.launchpad.net/keystone
> * Overview of how to get involved and many of these tools
> * general to any openstack project: http://wiki.openstack.org/HowToContribute
> * Code reviews using reviewboard (authenticated with OAuth through Launchpad)
> * code reviews going into keystone: https://review.openstack.org/#/q/status:open+keystone,n,z
> * code reviews for the V3 keystone (openstack specific) API: https://review.openstack.org/#/q/status:open+identity,n,z
> * Source Code
> * keystone: https://github.com/openstack/keystone
> * the python client for keystone: https://github.com/openstack/python-keystoneclient
> * Documentation
> * developer documentation (generated from keystone source code): http://docs.openstack.org/developer/keystone/
> * holistic documentation for openstack (keystone and more): http://docs.openstack.org
> * running openstack (keystone and more) on a single machine
> * (used in OpenStack's CI efforts and for development/test)
> * http://devstack.org
>
> I mentioned that Keystone's V3 API is focused on providing services to other openstack components. The relevant API for writing plugins (Python classes) is subclassing one of the drivers, such as "identity" - https://github.com/openstack/keystone/blob/master/keystone/identity/core.py#L63.
>
> The upcoming conversations around the design and implementation of Federation are happening actively on the openstack-dev mailing list.
> For a reasonable web interface to view and search previous messages and conversations around this:
> * http://markmail.org/search/?q=openstack-dev%20keystone
> * more specific to federation discussions: http://markmail.org/search/?q=openstack-dev+keystone+federation
>
> Lists can be subscribed to at http://lists.openstack.org/cgi-bin/mailman/listinfo
>
> The major actors in Keystone today are all involved on this mailing list and keep in touch weekly during the IRC meetings.
>
> The Keystone IRC meetings are held weekly - Tuesdays at 1800 UTC. We keep an agenda and previous discussion minutes available on the OpenStack wiki at http://wiki.openstack.org/Meetings/KeystoneMeeting
>
> Older Items
>
> F17 CVE-2012-5483 https://admin.fedoraproject.org/updates/openstack-keystone-2012.1.3-3.fc17
>
> * Significant refactoring effort that needs to finish prior to trust work
> * https://review.openstack.org/#/c/17782/
> * Just merged; took a lot of code review back and forth
>
> * Ran the test coverage tool to identify areas that are untested
> * http://admiyo.fedorapeople.org/openstack/covhtml/
>
> * V3 API
> * IdM as service catalog entries
>
> * Attribute Mapping (Kristy Siu, Kent.ac.uk) (not much happened here over the holidays)
> * https://review.openstack.org/#/c/18280/1
>
> Tunables for QA:
> * Databases: SQLite, MySQL, PostgreSQL
> * Identity: can also use LDAP and PAM
> * Memcached or KVS backends should not be recommended for deployment or supported
> * Token Type
> * *UUID*
> * PKI
> * Need to test multiple servers with a load balancer in front
> * Web Server: Eventlet or HTTPD
> * With HTTPD can use remote authentication:
> * Kerberos,
> * Basic Auth, and
> * X509 client cert should all be tested.
>
> * Groups (henrynash)
> * https://blueprints.launchpad.net/openstack/?searchtext=user-groups
> * Just merged into repo:
> * https://review.openstack.org/#/c/18097/
>
> _______________________________________________
> rhos-list mailing list
> rhos-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhos-list

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From anu.bhaskar.babu at accenture.com Wed Jan 16 07:41:21 2013
From: anu.bhaskar.babu at accenture.com (anu.bhaskar.babu at accenture.com)
Date: Wed, 16 Jan 2013 07:41:21 +0000
Subject: [rhos-list] Running openshift on openstack
Message-ID:

Hi,

Is there any working documentation for installing and running Openshift Origin on Redhat Openstack?

--
Regards,
Anu Bhaskar
Accenture Technology Consulting
Tel: +91.80.431.56585

This message is for the designated recipient only and may contain privileged, proprietary, or otherwise private information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy.
______________________________________________________________________________________

www.accenture.com

From pmyers at redhat.com Wed Jan 16 13:42:35 2013
From: pmyers at redhat.com (Perry Myers)
Date: Wed, 16 Jan 2013 08:42:35 -0500
Subject: [rhos-list] Running openshift on openstack
In-Reply-To:
References:
Message-ID: <50F6AE4B.5040302@redhat.com>

On 01/16/2013 02:41 AM, anu.bhaskar.babu at accenture.com wrote:
> Hi,
>
> Is there any working documentation for installing and running Openshift Origin on Redhat Openstack?

Matt, does your team have any docs around this?
Cheers,

Perry

From shardy at redhat.com Wed Jan 16 14:45:56 2013
From: shardy at redhat.com (Steven Hardy)
Date: Wed, 16 Jan 2013 14:45:56 +0000
Subject: [rhos-list] Running openshift on openstack
In-Reply-To:
References:
Message-ID: <20130116144555.GB2554@heatlt.redhat.com>

On Wed, Jan 16, 2013 at 07:41:21AM +0000, anu.bhaskar.babu at accenture.com wrote:
> Hi,
>
> Is there any working documentation for installing and running Openshift Origin on Redhat Openstack?

Last time I looked at this, I found the openshift community wiki documentation to be out of date, and I failed to get a working openshift installation. The wiki docs are still out of date now AFAICS.

You might find this summary of my alternate demo solution useful:

http://wiki.openstack.org/Heat/Running-openshift

Basically I modified the openshift origin liveinst to allow it to launch via a heat template - the exact same approach could be used to prepare a demo image which could be used directly with nova on RHOS (skip the cfntools bit).

Obviously this is only useful for demo/PoC purposes, but it may be useful to you.

--
Steve Hardy
Red Hat Engineering, Cloud

From mhicks at redhat.com Wed Jan 16 20:51:12 2013
From: mhicks at redhat.com (Matt Hicks)
Date: Wed, 16 Jan 2013 15:51:12 -0500
Subject: [rhos-list] Running openshift on openstack
In-Reply-To: <50F6AE4B.5040302@redhat.com>
References: <50F6AE4B.5040302@redhat.com>
Message-ID: <50F712C0.8000307@redhat.com>

On 01/16/2013 08:42 AM, Perry Myers wrote:
> On 01/16/2013 02:41 AM, anu.bhaskar.babu at accenture.com wrote:
>> Hi,
>>
>> Is there any working documentation for installing and running Openshift Origin on Redhat Openstack?
> Matt, does your team have any docs around this?
>
> Cheers,
>
> Perry

Looping in Krishna and Bill. Do you guys have anything OpenStack focused out there?
-Matt

From kraman at redhat.com Wed Jan 16 20:52:42 2013
From: kraman at redhat.com (Krishna Raman)
Date: Wed, 16 Jan 2013 12:52:42 -0800
Subject: [rhos-list] Running openshift on openstack
In-Reply-To: <50F712C0.8000307@redhat.com>
References: <50F6AE4B.5040302@redhat.com> <50F712C0.8000307@redhat.com>
Message-ID: <8C0F3912-9B96-4A48-AF07-1D6D9C86AC98@redhat.com>

Not yet. But I plan to have something by the end of Jan.

--kr

On Jan 16, 2013, at 12:51 PM, Matt Hicks wrote:
> On 01/16/2013 08:42 AM, Perry Myers wrote:
>> On 01/16/2013 02:41 AM, anu.bhaskar.babu at accenture.com wrote:
>>> Hi,
>>>
>>> Is there any working documentation for installing and running Openshift Origin on Redhat Openstack?
>> Matt, does your team have any docs around this?
>>
>> Cheers,
>>
>> Perry
> Looping in Krishna and Bill. Do you guys have anything OpenStack focused out there?
>
> -Matt

From sdake at redhat.com Wed Jan 16 22:28:23 2013
From: sdake at redhat.com (Steven Dake)
Date: Wed, 16 Jan 2013 15:28:23 -0700
Subject: [rhos-list] Running openshift on openstack
In-Reply-To: <50F712C0.8000307@redhat.com>
References: <50F6AE4B.5040302@redhat.com> <50F712C0.8000307@redhat.com>
Message-ID: <50F72987.7030907@redhat.com>

On 01/16/2013 01:51 PM, Matt Hicks wrote:
> On 01/16/2013 08:42 AM, Perry Myers wrote:
>> On 01/16/2013 02:41 AM, anu.bhaskar.babu at accenture.com wrote:
>>> Hi,
>>>
>>> Is there any working documentation for installing and running Openshift Origin on Redhat Openstack?
>> Matt, does your team have any docs around this?
>>
>> Cheers,
>>
>> Perry
> Looping in Krishna and Bill. Do you guys have anything OpenStack focused out there?
>
> -Matt
>

Matt,

We have done a bit of work on making heat launch OpenShift on OpenStack. See: http://wiki.openstack.org/Heat/Running-openshift

This can only improve with time, including autoscaling support and integration with Moniker (DNS as a service), as well as helping us understand networking points outlined elsewhere.
Unfortunately we have difficulty keeping the integration working consistently because OpenShift is moving so quickly - especially the tooling around building RPMs. We have gone through many different approaches to get OpenShift working on OpenStack (outlined at the wiki page above). What would help the Heat team tremendously is a package repo for OpenShift built against the F18 toolchain that is tested, stable, and only changes after having gone through some rudimentary QE (vs a per-commit rpm repo, which is less than ideal). This would allow us to always deploy a known working OpenShift against a known working Fedora baseline. I expect anyone else doing this work will have a similar requirement if they have spent similar engineering hours on this problem as the heat team has. We have tried the repo packages in the past and they were built on two different Fedora versions, requiring two different major versions of ruby. I understand the problems with putting OpenShift in Fedora (toolchain changes right before deadlines), and am hopeful we won't have to wait until F19 to make this integration point a reality.

Regards
-steve

> _______________________________________________
> rhos-list mailing list
> rhos-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhos-list

From mhicks at redhat.com Wed Jan 16 22:44:50 2013
From: mhicks at redhat.com (Matt Hicks)
Date: Wed, 16 Jan 2013 17:44:50 -0500
Subject: [rhos-list] Running openshift on openstack
In-Reply-To: <50F72987.7030907@redhat.com>
References: <50F6AE4B.5040302@redhat.com> <50F712C0.8000307@redhat.com> <50F72987.7030907@redhat.com>
Message-ID: <50F72D62.6040405@redhat.com>

On 01/16/2013 05:28 PM, Steven Dake wrote:
> On 01/16/2013 01:51 PM, Matt Hicks wrote:
>> On 01/16/2013 08:42 AM, Perry Myers wrote:
>>> On 01/16/2013 02:41 AM, anu.bhaskar.babu at accenture.com wrote:
>>>> Hi,
>>>>
>>>> Is there any working documentation for installing and running Openshift Origin on Redhat Openstack?
>>> Matt, does your team have any docs around this?
>>>
>>> Cheers,
>>>
>>> Perry
>> Looping in Krishna and Bill. Do you guys have anything OpenStack focused out there?
>>
>> -Matt
>>
> Matt,
>
> We have done a bit of work on making heat launch OpenShift on OpenStack. See: http://wiki.openstack.org/Heat/Running-openshift
>
> This can only improve with time, including autoscaling support and integration with Moniker (DNS as a service), as well as helping us understand networking points outlined elsewhere.
>
> Unfortunately we have difficulty keeping the integration working consistently because OpenShift is moving so quickly - especially the tooling around building RPMs. We have gone through many different approaches to get OpenShift working on OpenStack (outlined at the wiki page above). What would help the Heat team tremendously is a package repo for OpenShift built against the F18 toolchain that is tested, stable, and only changes after having gone through some rudimentary QE (vs a per-commit rpm repo, which is less than ideal). This would allow us to always deploy a known working OpenShift against a known working Fedora baseline. I expect anyone else doing this work will have a similar requirement if they have spent similar engineering hours on this problem as the heat team has. We have tried the repo packages in the past and they were built on two different Fedora versions, requiring two different major versions of ruby. I understand the problems with putting OpenShift in Fedora (toolchain changes right before deadlines), and am hopeful we won't have to wait until F19 to make this integration point a reality.
>
> Regards
> -steve
>
>> _______________________________________________
>> rhos-list mailing list
>> rhos-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rhos-list
>

We will absolutely get this for you.
We're working with Seth Vidal on the coprs work to see if we can establish a better process of building against Fedora. Krishna has also been testing against F18 and we've got a couple of systemd bugs to run down. I'll sync up with the guys here and see if we can get you guys some dates. OpenShift and OpenStack integration is a critical priority for us next year and we really, really appreciate the work you guys have done to date. If you are interested, I also have some Amazon CloudFormation templates that I used to deploy a multi-tier OpenShift Enterprise at Amazon re:Invent. Testing them out with Heat has been on my todo list forever now but I haven't been able to get to it. I'm happy to share them if you think it might help with anything.

-Matt

From sdake at redhat.com Wed Jan 16 22:53:52 2013
From: sdake at redhat.com (Steven Dake)
Date: Wed, 16 Jan 2013 15:53:52 -0700
Subject: [rhos-list] Running openshift on openstack
In-Reply-To: <50F72D62.6040405@redhat.com>
References: <50F6AE4B.5040302@redhat.com> <50F712C0.8000307@redhat.com> <50F72987.7030907@redhat.com> <50F72D62.6040405@redhat.com>
Message-ID: <50F72F80.7040709@redhat.com>

On 01/16/2013 03:44 PM, Matt Hicks wrote:
> On 01/16/2013 05:28 PM, Steven Dake wrote:
>> On 01/16/2013 01:51 PM, Matt Hicks wrote:
>>> On 01/16/2013 08:42 AM, Perry Myers wrote:
>>>> On 01/16/2013 02:41 AM, anu.bhaskar.babu at accenture.com wrote:
>>>>> Hi,
>>>>>
>>>>> Is there any working documentation for installing and running Openshift Origin on Redhat Openstack?
>>>> Matt, does your team have any docs around this?
>>>>
>>>> Cheers,
>>>>
>>>> Perry
>>> Looping in Krishna and Bill. Do you guys have anything OpenStack focused out there?
>>>
>>> -Matt
>>>
>> Matt,
>>
>> We have done a bit of work on making heat launch OpenShift on OpenStack.
>> See: http://wiki.openstack.org/Heat/Running-openshift
>>
>> This can only improve with time, including autoscaling support and integration with Moniker (DNS as a service), as well as helping us understand networking points outlined elsewhere.
>>
>> Unfortunately we have difficulty keeping the integration working consistently because OpenShift is moving so quickly - especially the tooling around building RPMs. We have gone through many different approaches to get OpenShift working on OpenStack (outlined at the wiki page above). What would help the Heat team tremendously is a package repo for OpenShift built against the F18 toolchain that is tested, stable, and only changes after having gone through some rudimentary QE (vs a per-commit rpm repo, which is less than ideal). This would allow us to always deploy a known working OpenShift against a known working Fedora baseline. I expect anyone else doing this work will have a similar requirement if they have spent similar engineering hours on this problem as the heat team has. We have tried the repo packages in the past and they were built on two different Fedora versions, requiring two different major versions of ruby. I understand the problems with putting OpenShift in Fedora (toolchain changes right before deadlines), and am hopeful we won't have to wait until F19 to make this integration point a reality.
>>
>> Regards
>> -steve
>>
>>> _______________________________________________
>>> rhos-list mailing list
>>> rhos-list at redhat.com
>>> https://www.redhat.com/mailman/listinfo/rhos-list
>>
> We will absolutely get this for you. We're working with Seth Vidal on the coprs work to see if we can establish a better process of building against Fedora. Krishna has also been testing against F18 and we've got a couple of systemd bugs to run down. I'll sync up with the guys here and see if we can get you guys some dates.
OpenShift and > OpenStack integration is a critical priority for us next year and we > really, really appreciate the work you guys have done to date. If you > are interested, I also have some Amazon CloudFormations templates that > I used to deploy a multi-tier OpenShift Enterprise at Amazon > re:Invent. Testing them out with Heat has been on my todo list > forever now but I haven't been able to get to it. I'm happy to share > them if you think it might help with anything. > Great News! We would be happy to have a look at your templates to help identify gaps in our implementation. We are in our final push for grizzly milestone 3 (which finishes Feb 21) so we may be a bit sporadic in responding until then. Steven Hardy (added to cc) has been doing most of the OpenShift bringup so please CC him on templates you have available. Regards -steve > -Matt From vaibhav.k.agarwal at in.com Thu Jan 17 12:24:15 2013 From: vaibhav.k.agarwal at in.com (Kumar Vaibhav) Date: Thu, 17 Jan 2013 17:54:15 +0530 Subject: [rhos-list] Bridge names in Openstack quantum Message-ID: <1358425455.9219adc5c42107c4911e249155320648@mail.in.com> Hi, I am having a problem using Quantum on a single flat network. I am using it in Linux bridge mode. My compute node has only one network, on which I pre-created a bridge, br10. When I instantiated a machine, the compute node became inaccessible. I checked the logs and found that Quantum had changed my bridge configuration: it created a new bridge with a name starting with br, which seems a little cryptic to identify. Is there any way I can define an existing bridge name to be used for a particular network ID? Currently, if the bridge already exists, Quantum just creates a TUN device on it, so if I pre-create the bridge it will not change it. Regards, Vaibhav -------------- next part -------------- An HTML attachment was scrubbed... 
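The "cryptic" bridge name Vaibhav saw is consistent with the linuxbridge plugin deriving bridge names from the network UUID rather than honouring an arbitrary pre-created bridge like br10. A minimal sketch of that naming rule follows; the "brq" prefix and the 11-character truncation are my reading of the Folsom-era linuxbridge agent, not something confirmed in this thread, so verify against your plugin's source before relying on it:

```python
# Sketch of the assumed naming convention only; "brq" and the 11-character
# truncation are assumptions about the linuxbridge agent, not taken from
# this thread -- check quantum/plugins/linuxbridge for your release.
BRIDGE_NAME_PREFIX = "brq"
BRIDGE_NAME_LEN = 11

def bridge_name(network_id):
    """Return the bridge name the agent would create (or reuse) for a network."""
    return BRIDGE_NAME_PREFIX + network_id[:BRIDGE_NAME_LEN]

print(bridge_name("d4e1c4ad-4b45-4c6e-8a52-2f1d3b9e0c7a"))  # brqd4e1c4ad-4b
```

If this rule holds on your installation, pre-creating a bridge with exactly the name the agent expects (rather than br10) should let the agent reuse it instead of replacing your configuration.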
URL: From mhicks at redhat.com Thu Jan 17 13:38:38 2013 From: mhicks at redhat.com (Matt Hicks) Date: Thu, 17 Jan 2013 08:38:38 -0500 Subject: [rhos-list] Running openshift on openstack In-Reply-To: <50F72F80.7040709@redhat.com> References: <50F6AE4B.5040302@redhat.com> <50F712C0.8000307@redhat.com> <50F72987.7030907@redhat.com> <50F72D62.6040405@redhat.com> <50F72F80.7040709@redhat.com> Message-ID: <50F7FEDE.3080708@redhat.com> On 01/16/2013 05:53 PM, Steven Dake wrote: > On 01/16/2013 03:44 PM, Matt Hicks wrote: >> On 01/16/2013 05:28 PM, Steven Dake wrote: >>> On 01/16/2013 01:51 PM, Matt Hicks wrote: >>>> On 01/16/2013 08:42 AM, Perry Myers wrote: >>>>> On 01/16/2013 02:41 AM, anu.bhaskar.babu at accenture.com wrote: >>>>>> Hi, >>>>>> >>>>>> Is there any working documentation for installing and running >>>>>> Openshift Origin on Redhat Openstack. >>>>> Matt, does your team have any docs around this? >>>>> >>>>> Cheers, >>>>> >>>>> Perry >>>> Looping in Krishna and Bill. Do you guys have anything OpenStack >>>> focused out there? >>>> >>>> -Matt >>>> >>> Matt, >>> >>> We have done a bit of work on making heat launch OpenShift on >>> OpenStack. See: http://wiki.openstack.org/Heat/Running-openshift >>> >>> This can only improve with time, including autoscaling support and >>> integration with Moniker (DNS as a service) as well as helping us >>> understand networking points outlined elsewhere. >>> >>> Unfortunately we have difficulty keeping the integration working >>> consistently because OpenShift is moving so quickly - especially the >>> tooling around building RPMs. We have gone through many different >>> approaches to get OpenShift working on OpenStack (outlined at the >>> wiki page above). What would help the Heat team tremendously is a >>> package repo for OpenShift built against the F18 toolchain which are >>> tested, stable, and only change after having gone through some >>> rudimentary QE (vs a per-commit rpm repo which is less then ideal). 
>>> This would allow us to always deploy a known working OpenShift >>> against a known working Fedora baseline. I expect anyone else doing >>> this work will have a similar requirement if they have spent similar >>> engineering hours on this problem as the heat team has. We have >>> tried the repo packages in the past and they were built on two >>> different Fedora versions, requiring two different major versions of >>> ruby. I understand the problems with putting OpenShift in Fedora >>> (toolchain changes right before deadlines), and am hopeful we won't >>> have to wait until F19 to make this integration point a reality. >>> >>> Regards >>> -steve >>> >>>> _______________________________________________ >>>> rhos-list mailing list >>>> rhos-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rhos-list >>> >> We will absolutely get this for you. We're working with Seth Vidal >> on the coprs work to see if we can establish a better process of >> building against Fedora. Krishna has also been testing against F18 >> and we've got a couple of systemd bugs to run down. I'll sync up with >> the guys here and see if we can get you guys some dates. OpenShift >> and OpenStack integration is a critical priority for us next year and >> we really, really appreciate the work you guys have done to date. If >> you are interested, I also have some Amazon CloudFormations templates >> that I used to deploy a multi-tier OpenShift Enterprise at Amazon >> re:Invent. Testing them out with Heat has been on my todo list >> forever now but I haven't been able to get to it. I'm happy to share >> them if you think it might help with anything. >> > > Great News! > > We would be happy to have a look at your templates to help identify > gaps in our implementation. We are in our final push for grizzly > milestone 3 (which finishes Feb 21) so we may be a bit sporadic in > responding until then. 
Steven Hardy (added to cc) has been doing most > of the OpenShift bringup so please CC him on templates you have > available. > > Regards > -steve Steven and anyone else who is interested, could you send me your GitHub IDs so I can add you to the repository? Right now the script is in a private repository because it's with all the other OpenShift Enterprise installation components, but there is nothing sensitive about the CloudFormations templates and we can certainly pull it out. I just figured the best way to look at it initially would be with the kickstart scripts and some of the docs all together, and if you think it's helpful we can carve them out and open-source them in whatever way works best. >> -Matt > From shardy at redhat.com Thu Jan 17 14:05:21 2013 From: shardy at redhat.com (Steven Hardy) Date: Thu, 17 Jan 2013 14:05:21 +0000 Subject: [rhos-list] Running openshift on openstack In-Reply-To: <50F7FEDE.3080708@redhat.com> References: <50F6AE4B.5040302@redhat.com> <50F712C0.8000307@redhat.com> <50F72987.7030907@redhat.com> <50F72D62.6040405@redhat.com> <50F72F80.7040709@redhat.com> <50F7FEDE.3080708@redhat.com> Message-ID: <20130117140520.GA28172@heatlt.redhat.com> Hi Matt, On Thu, Jan 17, 2013 at 08:38:38AM -0500, Matt Hicks wrote: > Steven and anyone else who is interested, could you send me your > GitHub id's so I can add you to the repository? Right now the > script is in a private repository because it's with all the other > OpenShift Enterprise installation components but there is nothing > sensitive about the CloudFormations templates and we can certainly > pull it out. I just figured the best way to look at it initially > would be with the kickstart scripts and some of the docs all > together and if you think it's helpful we can carve them out and > open source them in whatever way works best. 
> >>-Matt Thanks, my github id is "hardys" : https://github.com/hardys/ I previously got access to your (now moved) enterprise-install repo, so I think I'm already a member of the openshift organization - so if you can just point me to the new location of the Cloudformation template that would be great. Last time I looked at the template, I noticed you're making use of AWS::Route53 resources, which we don't yet support in heat, so the first task for us will be to work out how to remove those and still install/configure things correctly for test purposes. The other question I have is regarding the AMI you use - can you provide any details of this - e.g what version of RHEL (or ideally Fedora) do we need? Also in the openshift-amz.sh script which you wget into the template, is it OK to use the default repos_base, which appears to be a nightly build from a couple of months ago, or is there some other repo we should use instead? Thanks for any info you can provide - will be great if we can get a better openshift-on-openstack (orchestrated by heat) demo capability sorted out! Steve From mhicks at redhat.com Thu Jan 17 14:19:04 2013 From: mhicks at redhat.com (Matt Hicks) Date: Thu, 17 Jan 2013 09:19:04 -0500 Subject: [rhos-list] Running openshift on openstack In-Reply-To: <20130117140520.GA28172@heatlt.redhat.com> References: <50F6AE4B.5040302@redhat.com> <50F712C0.8000307@redhat.com> <50F72987.7030907@redhat.com> <50F72D62.6040405@redhat.com> <50F72F80.7040709@redhat.com> <50F7FEDE.3080708@redhat.com> <20130117140520.GA28172@heatlt.redhat.com> Message-ID: <50F80858.6020203@redhat.com> On 01/17/2013 09:05 AM, Steven Hardy wrote: > Hi Matt, > > On Thu, Jan 17, 2013 at 08:38:38AM -0500, Matt Hicks wrote: > >> Steven and anyone else who is interested, could you send me your >> GitHub id's so I can add you to the repository? 
Right now the >> script is in a private repository because it's with all the other >> OpenShift Enterprise installation components but there is nothing >> sensitive about the CloudFormations templates and we can certainly >> pull it out. I just figured the best way to look at it initially >> would be with the kickstart scripts and some of the docs all >> together and if you think it's helpful we can carve them out and >> open source them in whatever way works best. >>>> -Matt > Thanks, my github id is "hardys" : https://github.com/hardys/ > > I previously got access to your (now moved) enterprise-install repo, so I > think I'm already a member of the openshift organization - so if you can > just point me to the new location of the Cloudformation template that would > be great. Everything should be in here - https://github.com/openshift/enterprise/tree/enterprise-1.0/install-scripts/amazon I noticed bleanhar has done the last few updates so he's probably the leading expert there now too. > > Last time I looked at the template, I noticed you're making use of > AWS::Route53 resources, which we don't yet support in heat, so the first > task for us will be to work out how to remove those and still > install/configure things correctly for test purposes. Yep, that will probably be the trickiest piece for us. I'm still thinking about the best way to bring multiple machines up that all know about each other without fixed DNS addresses but I'm sure there is a way. The main reason we use DNS is so that we can provision everything in 'parallel' (even though CloudFormations really doesn't) and when the machines come online, they already know how to connect to the other machines. For example, when the broker is provisioned (broker.example.com), we configure its connection to mongo by expecting the mongo machine to be at mongo.example.com. We'll just have to find a different way to do that. 
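For readers following along, the fixed-name pattern Matt describes looks roughly like the fragment below in a CloudFormation template. The resource and zone names here are illustrative only, not taken from the actual OpenShift templates:

```json
{
  "MongoDnsRecord" : {
    "Type" : "AWS::Route53::RecordSet",
    "Properties" : {
      "HostedZoneName"  : "example.com.",
      "Name"            : "mongo.example.com.",
      "Type"            : "A",
      "TTL"             : "300",
      "ResourceRecords" : [ { "Fn::GetAtt" : [ "MongoInstance", "PrivateIp" ] } ]
    }
  }
}
```

Because the broker's configuration only ever references the fixed name mongo.example.com, the instances can come up in any order; this AWS::Route53 resource type is also exactly the dependency that heat could not yet satisfy at the time.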
> > The other question I have is regarding the AMI you use - can you provide any > details of this - e.g what version of RHEL (or ideally Fedora) do we need? I did all of my testing with the latest RHEL and I essentially added cloud-init to it and burned my own AMI. That was key to how we kickstarted the machines. I believe Origin works on F17 right now but we have a known issue with F18. Krishna - any updates there? > > Also in the openshift-amz.sh script which you wget into the template, is it > OK to use the default repos_base, which appears to be a nightly build from a > couple of months ago, or is there some other repo we should use instead? We should eventually start using the coprs repository (3rd party repositories built through Koji) but it's not there yet. In the meantime though, we might need to publish a temporary location since the yum repositories referenced in those are stale. Krishna / Bill - any ideas here? > > Thanks for any info you can provide - will be great if we can get a better > openshift-on-openstack (orchestrated by heat) demo capability sorted out! > > Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhern at us.ibm.com Mon Jan 21 20:47:07 2013 From: dhern at us.ibm.com (David Hernandez) Date: Mon, 21 Jan 2013 14:47:07 -0600 Subject: [rhos-list] Question about querying metadata for object name. Message-ID: Hello. Is there a tool available that will parse the extended attributes of a file in a storage partition for its object name? It appears that Swift stores any user-assigned names in an attribute called user.swift.metadata. This attribute also appears to contain binary data. I'm trying to find a way to identify an object directly in a storage partition, outside of Swift, by looking inside the file's metadata. Regards. 
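A sketch of what David describes: in Swift of this era the binary blob is a pickled dict, written across xattr-sized chunks (user.swift.metadata, user.swift.metadata1, ...), with the object name stored under the 'name' key as /account/container/object. The 254-byte chunk size and the key layout are my reading of the Swift object server, so verify them against your version; the self-check at the bottom only simulates the xattrs rather than touching a real storage partition:

```python
import os
import pickle

METADATA_KEY = "user.swift.metadata"
CHUNK = 254  # assumed per-xattr chunk size; check swift's object server code

def _keys():
    """Yield the xattr names Swift chains metadata across."""
    yield METADATA_KEY
    i = 1
    while True:
        yield METADATA_KEY + str(i)
        i += 1

def unpack_metadata(get_chunk):
    """Reassemble and unpickle metadata.

    get_chunk(key) returns the bytes of one xattr chunk, raising
    KeyError or OSError when the chunk does not exist.
    """
    raw = b""
    for key in _keys():
        try:
            raw += get_chunk(key)
        except (KeyError, OSError):
            break
    return pickle.loads(raw)

def read_object_name(path):
    """Read '/account/container/object' straight from a .data file's xattrs
    (Linux only; requires read access to the storage partition)."""
    meta = unpack_metadata(lambda key: os.getxattr(path, key))
    return meta["name"]

# Self-check with simulated xattr chunks instead of a real .data file:
meta = {"name": "/AUTH_test/photos/cat.jpg", "Content-Type": "image/jpeg"}
raw = pickle.dumps(meta)
chunks = dict(zip(_keys(), (raw[i:i + CHUNK] for i in range(0, len(raw), CHUNK))))
assert unpack_metadata(lambda k: chunks[k])["name"] == "/AUTH_test/photos/cat.jpg"
```

On a real object server you would point read_object_name at a .data file under /srv/node/; the hypothetical helper names above are mine, not part of Swift.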
David Hernandez Contractor / HPSS IBM Global Business Services - US Federal 12301 Kurland Dr Suite 300 Houston, TX 77034-4812 Mobile 713-444-5755 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tbrunell at redhat.com Fri Jan 25 15:59:09 2013 From: tbrunell at redhat.com (Ted Brunell) Date: Fri, 25 Jan 2013 10:59:09 -0500 (EST) Subject: [rhos-list] Capacity Planning In-Reply-To: <587515991.6482097.1359128382350.JavaMail.root@redhat.com> Message-ID: <165046265.6491928.1359129549476.JavaMail.root@redhat.com> Are there guides available to assist in the capacity planning for an OpenStack installation? I'm looking for something that will say what the best breakout for the nova-services is based on either the number of nova-compute instances or the number of VM instances running in OpenStack. For example, is there an indicator of when it is best to run the messaging server on its own physical server? Is that indicator typically met when there are 10 compute-nodes, or 1000 VM instances? Thanks, Ted Brunell From rbryant at redhat.com Fri Jan 25 16:49:17 2013 From: rbryant at redhat.com (Russell Bryant) Date: Fri, 25 Jan 2013 11:49:17 -0500 Subject: [rhos-list] Capacity Planning In-Reply-To: <165046265.6491928.1359129549476.JavaMail.root@redhat.com> References: <165046265.6491928.1359129549476.JavaMail.root@redhat.com> Message-ID: <5102B78D.60908@redhat.com> On 01/25/2013 10:59 AM, Ted Brunell wrote: > Are there guides available to assist in the capacity planning for an OpenStack installation? I'm looking for something that will say what the best breakout for the nova-services is based on either the number of nova-compute instances or the number of VM instances running in OpenStack. For example, is there an indicator of when it is best to run the messaging server on its own physical server? Is that indicator typically met when there are 10 compute-nodes, or 1000 VM instances? I'm not aware of any guides in this area. 
Perhaps others on the list with larger deployments can speak up. It's a hard question to answer because I don't think there's going to be a hard rule on it. For example, the way the deployment is used will play a big part in it. A deployment with VMs that have a much longer lifetime will have a lower load on the supporting services than one that is rapidly spinning up and tearing down VMs all the time. I tend to see more discussion about splitting services out and running more of them for the purposes of high availability in larger deployments before doing so because of the load being too high. I guess I'm saying that there's a good chance you would introduce additional capacity for HA before you need it because of load issues. -- Russell Bryant From dhkarimi at sei.cmu.edu Fri Jan 25 18:19:11 2013 From: dhkarimi at sei.cmu.edu (Derrick H. Karimi) Date: Fri, 25 Jan 2013 18:19:11 +0000 Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster Message-ID: <421EE192CD0C6C49A23B97027914202B03AF0C@marathon> Hi, My organization has a satellite RHN server. Our RHEL 6.2 cluster does not have direct internet access. How can I get the OpenStack essex preview to work in this environment? --Derrick H. Karimi --Software Developer, SEI Innovation Center --Carnegie Mellon University -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmyers at redhat.com Fri Jan 25 18:27:01 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 25 Jan 2013 13:27:01 -0500 Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster In-Reply-To: <421EE192CD0C6C49A23B97027914202B03AF0C@marathon> References: <421EE192CD0C6C49A23B97027914202B03AF0C@marathon> Message-ID: <5102CE75.7080102@redhat.com> On 01/25/2013 01:19 PM, Derrick H. Karimi wrote: > Hi, > > My organization has a satellite RHN server. Our RHEL 6.2 cluster > does not have direct internet access. How can I get the OpenStack essex > preview to work in this environment? 
Hmm... I would assume it's possible to have a satellite server mirror from an eval channel and that should solve your issue. But I'm not an expert on RHN/Satellite so I've cc'd a few folks who are Cliff/Todd, any thoughts? As an aside, I happen to be based in Pittsburgh (I see that you're from CMU) :) Perry From tsanders at redhat.com Fri Jan 25 19:06:43 2013 From: tsanders at redhat.com (Todd Sanders) Date: Fri, 25 Jan 2013 14:06:43 -0500 Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster In-Reply-To: <5102CE75.7080102@redhat.com> References: <421EE192CD0C6C49A23B97027914202B03AF0C@marathon> <5102CE75.7080102@redhat.com> Message-ID: <5102D7C3.2030503@redhat.com> On 01/25/2013 01:27 PM, Perry Myers wrote: > On 01/25/2013 01:19 PM, Derrick H. Karimi wrote: >> Hi, >> >> My organization has a satellite RHN server. Our RHEL 6.2 cluster >> does not have direct internet access. How can I get the OpenStack essex >> preview to work in this environment? > Hmm... I would assume it's possible to have a satellite server mirror > from an eval channel and that should solve your issue. But I'm not an > expert on RHN/Satellite so I've cc'd a few folks who are > > Cliff/Todd, any thoughts? > > As an aside, I happen to be based in Pittsburgh (I see that you're from > CMU) :) > > Perry Absolutely, Satellite is certainly capable of mirroring the RHOS eval channels for Folsom and Essex. In order for that to happen, you would need to request the RHOS eval from the same account that contains your existing Satellite subscription, request and install an updated Satellite Certificate, and then sync the channels. 
-Todd From prmarino1 at gmail.com Fri Jan 25 19:25:10 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Fri, 25 Jan 2013 14:25:10 -0500 Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster In-Reply-To: <5102CE75.7080102@redhat.com> References: <421EE192CD0C6C49A23B97027914202B03AF0C@marathon> <5102CE75.7080102@redhat.com> Message-ID: Derrick, Satellite can easily be used to deploy OpenStack and manage its configuration files, but be warned: Satellite doesn't have full integration with OpenStack yet, nor does its upstream project Spacewalk, so you won't be able to use it effectively to manage the OpenStack cluster itself. What that means is that you don't want to use Satellite's built-in VM provisioning, because it will create VMs without OpenStack's knowledge; there is no Glance or Cinder integration in Satellite yet either. I have been testing with Spacewalk and would gladly share the information I've put together so far on the subject. I've worked at large companies that used Satellite in the past, am currently using Spacewalk, and have been actively involved with the Spacewalk community. Based on my experience, both are great pieces of software, and I would go so far as to say they are really required for any enterprise environment with more than 100 servers. That being said, Spacewalk can't directly sync from RHN; although people have done it effectively through middleware, it's not officially supported. So if you aren't willing to put in the effort to get bleeding-edge software to work, or your business requirements force you to get support, Satellite is the right way to go. If you don't want to pay for Satellite or go through the effort of setting up a Spacewalk server, you have three other options. 
mrepo is a simple repository mirror that can mirror RHN channels. Pulp can also mirror RHN and can do kickstarts, but it is more complicated than mrepo. Finally, there is the option of using SAM, which should be able to act as an effective RHN proxy: https://access.redhat.com/knowledge/docs/Red_Hat_Subscription_Asset_Manager/ . I believe it's already included with the standard RHEL support license. Note that I'm not sure about SAM's capabilities with OpenStack; also, I wouldn't suggest it for environments that are mission critical or require change control, because it wasn't really designed for that the way Satellite was. On Fri, Jan 25, 2013 at 1:27 PM, Perry Myers wrote: > On 01/25/2013 01:19 PM, Derrick H. Karimi wrote: >> Hi, >> >> My organization has a satellite RHN server. Our RHEL 6.2 cluster >> does not have direct internet access. How can I get the OpenStack essex >> preview to work in this environment? > > Hmm... I would assume it's possible to have a satellite server mirror > from an eval channel and that should solve your issue. But I'm not an > expert on RHN/Satellite so I've cc'd a few folks who are > > Cliff/Todd, any thoughts? > > As an aside, I happen to be based in Pittsburgh (I see that you're from > CMU) :) > > Perry > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From prmarino1 at gmail.com Fri Jan 25 19:41:01 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Fri, 25 Jan 2013 14:41:01 -0500 Subject: [rhos-list] Capacity Planning In-Reply-To: <5102B78D.60908@redhat.com> References: <165046265.6491928.1359129549476.JavaMail.root@redhat.com> <5102B78D.60908@redhat.com> Message-ID: Also keep in mind that most of the large Ubuntu/Debian environments are using RabbitMQ, while RHOS uses Qpid for its messaging server, so you can't expect the performance to be the same. 
Additionally, the type of hardware you are using makes a difference. Many cloud hosting providers use essentially desktop-class hardware because you can buy a ton of it in bulk very cheaply; in most cases that philosophy is penny wise and pound foolish when you do the math, but it seems to be what's popular now. If you are running on actual server-class hardware you may find the performance is significantly better, because server-class hardware has fewer bus bottlenecks than desktop-class hardware. That being said, if you put the messaging server on a VIP you can move it wherever you want later. And if you are planning to run thousands of VMs, it's probably a good idea to consider giving each component its own box, or a pair of boxes for HA failover. On Fri, Jan 25, 2013 at 11:49 AM, Russell Bryant wrote: > On 01/25/2013 10:59 AM, Ted Brunell wrote: >> Are there guides available to assist in the capacity planning for an OpenStack installation? I'm looking for something that will say what the best breakout for the nova-services is based on either the number of nova-compute instances or the number of VM instances running in OpenStack. For example, is there an indicator of when it is best to run the messaging server on its own physical server? Is that indicator typically met when there are 10 compute-nodes, or 1000 VM instances? > > I'm not aware of any guides in this area. Perhaps others on the list > with larger deployments can speak up. > > It's a hard question to answer because I don't think there's going to be > a hard rule on it. For example, the way the deployment is used will > play a big part in it. A deployment with VMs that have a much longer > lifetime will have a lower load on the supporting services than one that > is rapidly spinning up and tearing down VMs all the time. > > I tend to see more discussion about splitting services out and running > more of them for the purposes of high availability in larger deployments > before doing so because of the load being too high. 
> > I guess I'm saying that there's a good chance you would introduce > additional capacity for HA before you need it because of load issues. > > -- > Russell Bryant > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From nux at li.nux.ro Fri Jan 25 19:59:29 2013 From: nux at li.nux.ro (Nux!) Date: Fri, 25 Jan 2013 19:59:29 +0000 Subject: [rhos-list] Capacity Planning In-Reply-To: References: <165046265.6491928.1359129549476.JavaMail.root@redhat.com> <5102B78D.60908@redhat.com> Message-ID: <9fab5b3ed16b0ebcca1063f0c38adc34@li.nux.ro> On 25.01.2013 19:41, Paul Robert Marino wrote: > well also keep in mind on most of the large Ubuntu/ Debian > environments they are using RabbitMQ and RHOS is using QPID for its > messaging server as such you cant expect the performance to be the > same. Hi, I'm not familiar with these messaging programs. Why would Qpid be worse than RabbitMQ? -- Sent from the Delta quadrant using Borg technology! Nux! www.nux.ro From prmarino1 at gmail.com Fri Jan 25 20:03:20 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Fri, 25 Jan 2013 15:03:20 -0500 Subject: [rhos-list] Capacity Planning In-Reply-To: <9fab5b3ed16b0ebcca1063f0c38adc34@li.nux.ro> References: <165046265.6491928.1359129549476.JavaMail.root@redhat.com> <5102B78D.60908@redhat.com> <9fab5b3ed16b0ebcca1063f0c38adc34@li.nux.ro> Message-ID: I'm not saying it's necessarily worse or better. I'm just saying I wouldn't expect them to perform the same way under higher loads, because they are two different programs. On Fri, Jan 25, 2013 at 2:59 PM, Nux! wrote: > On 25.01.2013 19:41, Paul Robert Marino wrote: >> >> well also keep in mind on most of the large Ubuntu/ Debian >> environments they are using RabbitMQ and RHOS is using QPID for its >> messaging server as such you cant expect the performance to be the >> same. > > > Hi, > > I'm not familiar with these messaging programs. 
Why would Qpid worse than > rabbitmq? > > -- > Sent from the Delta quadrant using Borg technology! > > Nux! > www.nux.ro > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From dhkarimi at sei.cmu.edu Fri Jan 25 20:05:58 2013 From: dhkarimi at sei.cmu.edu (Derrick H. Karimi) Date: Fri, 25 Jan 2013 20:05:58 +0000 Subject: [rhos-list] Capacity Planning In-Reply-To: References: <165046265.6491928.1359129549476.JavaMail.root@redhat.com> <5102B78D.60908@redhat.com> Message-ID: <421EE192CD0C6C49A23B97027914202B03B04F@marathon> We have a smallish cluster of server-class hardware that we are using for experimental and educational purposes. I had not heard of Spacewalk before, and have really just begun using the RHN subscription channel. I am not aware of all the features and benefits, and am not familiar with the advanced configuration management it enables. It looks like I need to start with Todd's suggestion and request the preview from our existing Satellite subscription. I am not sure if I can do that, or if I will have to contact our IT department, which manages the satellite server. Also, I see that just by being connected to the existing Satellite Server channels I have access to OpenStack Folsom packages. Regretfully, my current task requires me to install Essex. Why is there a special preview program for Essex when Folsom is available on my existing channels? I was trying to get yum to show me whether there were older Essex-vintage packages available through those normal subscription channels, but I couldn't get that to work yet. Thank you for bearing with me as an RHN newbie. --Derrick H. 
Karimi --Software Developer, SEI Innovation Center --Carnegie Mellon University -----Original Message----- From: rhos-list-bounces at redhat.com [mailto:rhos-list-bounces at redhat.com] On Behalf Of Paul Robert Marino Sent: Friday, January 25, 2013 2:41 PM To: rhos-list at redhat.com Subject: Re: [rhos-list] Capacity Planning well also keep in mind on most of the large Ubuntu/ Debian environments they are using RabbitMQ and RHOS is using QPID for its messaging server as such you cant expect the performance to be the same. additionally the type of hardware you are using makes a difference. many Cloud hosting providers use essentially desktop class hardware because you can buy a ton of them in bulk very cheaply. in most cases that philosophy is penny wise and pound foolish when you do the math but that seems to be whats popular now. If you are running on actual server class hardware you may find the performance is significantly better due to the fact that server class hardware have fewer bus bottlenecks than desktop class hardware. that being said if you put the messaging server on a VIP you can move it where ever you want latter. that being said if you are planning to run thousands of VMs its probably a good idea to consider giving each component their own box or pair of boxes for HA failover. On Fri, Jan 25, 2013 at 11:49 AM, Russell Bryant wrote: > On 01/25/2013 10:59 AM, Ted Brunell wrote: >> Are there guides available to assist in the capacity planning for an OpenStack installation? I'm looking for something that will say what the best breakout for the nova-services is based on either the number of nova-compute instances or the number of VM instances running in OpenStack. For example, is there an indicator of when it is best to run the messaging server on its own physical server? Is that indicator typically met when there are 10 compute-nodes, or 1000 VM instances? > > I'm not aware of any guides in this area. 
Perhaps others on the list > with larger deployments can speak up. > > It's a hard question to answer because I don't think there's going to > be a hard rule on it. For example, the way the deployment is used > will play a big part in it. A deployment with VMs that have a much > longer lifetime will have a lower load on the supporting services than > one that is rapidly spinning up and tearing down VMs all the time. > > I tend to see more discussion about splitting services out and running > more of them for the purposes of high availability in larger > deployments before doing so because of the load being too high. > > I guess I'm saying that there's a good chance you would introduce > additional capacity for HA before you need it because of load issues. > > -- > Russell Bryant > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list From prmarino1 at gmail.com Fri Jan 25 20:43:54 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Fri, 25 Jan 2013 15:43:54 -0500 Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster In-Reply-To: References: <421EE192CD0C6C49A23B97027914202B03AF0C@marathon> <5102CE75.7080102@redhat.com> Message-ID: Derrick, you seem to have crossed the message streams here, so I'm going to respond in the original chain. Well, if you have Satellite already, then absolutely, by all means use it! Spacewalk is cool for managing other distros like Fedora, CentOS, SUSE, and Scientific Linux, but if you have a choice for RHEL, use Satellite. 
Also, if you are using it for educational purposes in a university, continuing education, or internal training environment, Spacewalk might also be useful to teach your students how to configure and use it, because the skills would directly apply to administering future versions of Satellite and SUSE Manager (Novell's Satellite clone built on Spacewalk). The Essex packages are in a different channel than the Folsom packages. Unfortunately I'm not sure if that channel is still available, which may explain why you don't see them. On Fri, Jan 25, 2013 at 2:25 PM, Paul Robert Marino wrote: > Derrick > > Satellite can easily be used to deploy it and manage the > configuration files, but be warned: Satellite still doesn't have full > integration with OpenStack yet, nor does its upstream project > Spacewalk, so you won't be able to effectively use it to manage the > OpenStack cluster itself. What that means is you don't want to use the > Satellite built-in VM provisioning, because it will create VMs without > OpenStack's knowledge; also there is no Glance or Cinder integration > in Satellite yet either. I have been testing with Spacewalk and would > gladly share the information I've put together so far on the subject. > > I've worked at large companies that used Satellite in the past, am > currently using Spacewalk, and have been actively involved with the > Spacewalk community. Based on my experience I can tell you that both > are great pieces of software, and I would go so far as to say they are > really required for any enterprise environment with more than 100 > servers. That being said, Spacewalk can't directly sync from RHN; > although people have done it effectively through middleware, it's not > officially supported. So if you aren't willing to put the effort in to > get bleeding-edge software to work, or your business requirements force > you to get support, Satellite is the right way to go.
> > If you don't want to pay for Satellite or go through the effort of > setting up a Spacewalk server, you have 3 other options. > > mrepo is just a simple repository mirror that can mirror RHN channels. > > Pulp can also mirror RHN and can do kickstarts, but is more complicated > than mrepo. > > Finally, there is the option of using SAM, which should be able to act > as an effective RHN proxy: > https://access.redhat.com/knowledge/docs/Red_Hat_Subscription_Asset_Manager/ > . I believe it's already included with the standard RHEL support > license. Note I'm not sure about SAM's capabilities with OpenStack; > also, I wouldn't suggest it for environments that are mission critical > or require change control, because it wasn't really designed for that the > way Satellite was. > > On Fri, Jan 25, 2013 at 1:27 PM, Perry Myers wrote: >> On 01/25/2013 01:19 PM, Derrick H. Karimi wrote: >>> Hi, >>> >>> My organization has a satellite RHN server. Our RHEL 6.2 cluster >>> does not have direct internet access. How can I get the OpenStack Essex >>> preview to work in this environment? >> >> Hmm... I would assume it's possible to have a Satellite server mirror >> from an eval channel, and that should solve your issue. But I'm not an >> expert on RHN/Satellite, so I've cc'd a few folks who are. >> >> Cliff/Todd, any thoughts?
>> >> As an aside, I happen to be based in Pittsburgh (I see that you're from >> CMU) :) >> >> Perry From pmyers at redhat.com Fri Jan 25 20:57:29 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 25 Jan 2013 15:57:29 -0500 Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster In-Reply-To: References: <421EE192CD0C6C49A23B97027914202B03AF0C@marathon> <5102CE75.7080102@redhat.com> Message-ID: <5102F1B9.30705@redhat.com> On 01/25/2013 03:43 PM, Paul Robert Marino wrote: > Derrick > you seem to have crossed the message streams here so I'm going to > respond in the original chain > > Well if you have Satellite already absolutely by all means use it! > Spacewalk is cool for managing other distros like Fedora, CentOS, > SUSE, Scientific Linux but if you have a choice for RHEL use > Satellite. > Also if you are using it for educational purposes in a university, > continuing education, or internal training environment, Spacewalk > might also be useful to teach your students how to configure and use > it because the skills would directly apply to administering future > versions of Satellite and SUSE Manager (Novell's Satellite clone built > on Spacewalk). > > > The Essex packages are a different channel than the Folsom packages. > Unfortunately I'm not sure if that channel is still available which > may explain why you don't see them. The RHOS Preview still should provide access to both the Essex and Folsom channels. The Essex channel will be deprecated later this year, but definitely is still around. That being said... Folsom is so much better than Essex... I'm curious if you have a specific need to stay on Essex vs. moving to the newer release? Cheers, Perry From dhkarimi at sei.cmu.edu Fri Jan 25 21:04:14 2013 From: dhkarimi at sei.cmu.edu (Derrick H.
Karimi) Date: Fri, 25 Jan 2013 21:04:14 +0000 Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster In-Reply-To: <5102F1B9.30705@redhat.com> References: <421EE192CD0C6C49A23B97027914202B03AF0C@marathon> <5102CE75.7080102@redhat.com> <5102F1B9.30705@redhat.com> Message-ID: <421EE192CD0C6C49A23B97027914202B03B17C@marathon> On 01/25/2013 03:57 PM, Perry Myers wrote: > On 01/25/2013 03:43 PM, Paul Robert Marino wrote: >> Derrick >> you seem to have crossed the message streams here so I'm going to >> respond in the original chain >> >> Well if you have Satellite already absolutely by all means use it! >> Spacewalk is cool for managing other distros like Fedora, CentOS, >> SUSE, Scientific Linux but if you have a choice for RHEL use >> Satellite. >> Also if you are using it for educational purposes in a university, >> continuing education, or internal training environment, Spacewalk >> might also be useful to teach your students how to configure and use >> it because the skills would directly apply to administering future >> versions of Satellite and SUSE Manager (Novell's Satellite clone built >> on Spacewalk). >> >> >> The Essex packages are a different channel than the Folsom packages. >> Unfortunately I'm not sure if that channel is still available which >> may explain why you don't see them. > The RHOS Preview still should provide access to both the Essex and > Folsom channels. OK, I am trying to figure out how to get the Preview into the Satellite server, or whether the Satellite server does in fact have Essex on it. > > The Essex channel will be deprecated later this year, but definitely is > still around. > > That being said... Folsom is so much better than Essex... I'm curious > if you have a specific need to stay on Essex vs. moving to the newer > release? It is a compatibility issue. We need to test a third party's add-on functionality for OpenStack, and the third party's requirements were to do it for Essex.
> > Cheers, > > Perry From pmyers at redhat.com Fri Jan 25 20:33:24 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 25 Jan 2013 15:33:24 -0500 Subject: [rhos-list] Capacity Planning In-Reply-To: References: <165046265.6491928.1359129549476.JavaMail.root@redhat.com> <5102B78D.60908@redhat.com> <9fab5b3ed16b0ebcca1063f0c38adc34@li.nux.ro> Message-ID: <5102EC14.9000606@redhat.com> On 01/25/2013 03:03 PM, Paul Robert Marino wrote: > I'm not saying it's necessarily worse or better. I'm just saying I > wouldn't expect them to perform the same way under higher loads > because they are two different programs. I can't speak to RabbitMQ, but for qpid, we have good testing of it under very high load. qpid is the core of the MRG-M product, and it's used for very throughput- and latency-sensitive workloads. Ted Ross (cc'd) can probably provide more details on qpid performance characteristics vs. RabbitMQ :) Perry > > On Fri, Jan 25, 2013 at 2:59 PM, Nux! wrote: >> On 25.01.2013 19:41, Paul Robert Marino wrote: >>> >>> well also keep in mind most of the large Ubuntu/Debian >>> environments are using RabbitMQ and RHOS is using QPID for its >>> messaging server, as such you can't expect the performance to be the >>> same. >> >> >> Hi, >> >> I'm not familiar with these messaging programs. Why would Qpid be worse than >> RabbitMQ? >> >> -- >> Sent from the Delta quadrant using Borg technology! >> >> Nux! >> www.nux.ro From dhkarimi at sei.cmu.edu Fri Jan 25 22:32:03 2013 From: dhkarimi at sei.cmu.edu (Derrick H.
Karimi) Date: Fri, 25 Jan 2013 22:32:03 +0000 Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster In-Reply-To: <421EE192CD0C6C49A23B97027914202B03B17C@marathon> References: <421EE192CD0C6C49A23B97027914202B03AF0C@marathon> <5102CE75.7080102@redhat.com> <5102F1B9.30705@redhat.com> <421EE192CD0C6C49A23B97027914202B03B17C@marathon> Message-ID: <421EE192CD0C6C49A23B97027914202B03B233@marathon> On 01/25/2013 04:04 PM, Derrick H. Karimi wrote: > On 01/25/2013 03:57 PM, Perry Myers wrote: >> On 01/25/2013 03:43 PM, Paul Robert Marino wrote: >>> Derrick >>> you seem to have crossed the message streams here so I'm going to >>> respond in the original chain >>> >>> Well if you have Satellite already absolutely by all means use it! >>> Spacewalk is cool for managing other distros like Fedora, CentOS, >>> SUSE, Scientific Linux but if you have a choice for RHEL use >>> Satellite. >>> Also if you are using it for educational purposes in a university, >>> continuing education, or internal training environment, Spacewalk >>> might also be useful to teach your students how to configure and use >>> it because the skills would directly apply to administering future >>> versions of Satellite and SUSE Manager (Novell's Satellite clone built >>> on Spacewalk). >>> >>> >>> The Essex packages are a different channel than the Folsom packages. >>> Unfortunately I'm not sure if that channel is still available which >>> may explain why you don't see them. >> The RHOS Preview still should provide access to both the Essex and >> Folsom channels. > Ok I am trying to figure out how to get the Preview into the Satellite > server. Or if the Satellite server for sure does have Essex on it. yum list --showduplicates openstack-nova-common* reveals that 2012.1.3-1 should be available. I am gonna try to force yum to install that version somehow. >> The Essex channel will be deprecated later this year, but definitely is >> still around. >> >> That being said...
Folsom is so much better than Essex... I'm curious >> if you have a specific need to stay on Essex vs. moving to the newer >> release? > It is a compatibility issue. We need to test a third party's add-on > functionality for OpenStack, and the third party's requirements were to do it > for Essex. >> Cheers, >> >> Perry From fpercoco at redhat.com Fri Jan 25 22:33:35 2013 From: fpercoco at redhat.com (Flavio Percoco) Date: Fri, 25 Jan 2013 23:33:35 +0100 Subject: [rhos-list] Capacity Planning In-Reply-To: <5102B78D.60908@redhat.com> References: <165046265.6491928.1359129549476.JavaMail.root@redhat.com> <5102B78D.60908@redhat.com> Message-ID: <5103083F.3060501@redhat.com> On 01/25/2013 05:49 PM, Russell Bryant wrote: > On 01/25/2013 10:59 AM, Ted Brunell wrote: >> Are there guides available to assist in the capacity planning for an OpenStack installation? I'm looking for something that will say what the best breakout for the nova services is, based on either the number of nova-compute instances or the number of VM instances running in OpenStack. For example, is there an indicator of when it is best to run the messaging server on its own physical server? Is that indicator typically met when there are 10 compute nodes, or 1000 VM instances? > Along the lines of what Russell suggested, I'd say that starting with HA planning is a good approach when it comes to defining your architecture and how it'll handle heavy loads, since most of the time what matters is how available it is (not saying it is all about availability). Another thing would be following some rules that could help in this process, like "Don't put together services that will use the same resource, because most likely they would face performance issues." I do think this is strictly related to what the purpose of such a deployment is and how it would be used.
I guess what I mean is that it definitely varies in each case, but there could be some basic rules that would help in the process of planning. > I'm not aware of any guides in this area. Perhaps others on the list > with larger deployments can speak up. > > It's a hard question to answer because I don't think there's going to be > a hard rule on it. For example, the way the deployment is used will > play a big part in it. A deployment with VMs that have a much longer > lifetime will have a lower load on the supporting services than one that > is rapidly spinning up and tearing down VMs all the time. > > I tend to see more discussion about splitting services out and running > more of them for the purposes of high availability in larger deployments > before doing so because of the load being too high. > > I guess I'm saying that there's a good chance you would introduce > additional capacity for HA before you need it because of load issues. > -- Flavio From rkukura at redhat.com Fri Jan 25 22:49:06 2013 From: rkukura at redhat.com (Robert Kukura) Date: Fri, 25 Jan 2013 17:49:06 -0500 Subject: [rhos-list] Bridge names in Openstack quantum In-Reply-To: <1358425455.9219adc5c42107c4911e249155320648@mail.in.com> References: <1358425455.9219adc5c42107c4911e249155320648@mail.in.com> Message-ID: <51030BE2.80504@redhat.com> On 01/17/2013 07:24 AM, Kumar Vaibhav wrote: > Hi, > > I am having a problem using Quantum on a single flat network. > > I am using it in Linux bridge mode. > My compute node has only one network, on which I pre-created a > bridge, br10. > > When I instantiated a machine, my compute node became inaccessible. I > checked the logs and found that it had changed my bridge configuration. > > It had created a new bridge with the name br<12 characters of Network > ID>. This seems a little cryptic for identification. > > Is there any way I can define the existing bridge name to be used for a > particular Network ID?
> Currently Quantum checks if the bridge exists, and it just creates a TUN device. > So if I pre-create the bridge, then it will not change it. > > Regards > Vaibhav Hi, The linuxbridge agent's current behavior is to create and manage the bridges itself, based on physical interface names obtained from the physical_interface_mappings configuration variable. If possible, you should let the agent take care of creating the bridge. It will move any IP address the host has on the interface to the bridge, so connectivity should not be lost. But I do agree there are cases where it would make sense for the agent to use a pre-configured bridge instead. Therefore I've filed https://bugs.launchpad.net/quantum/+bug/1105488 upstream and https://bugzilla.redhat.com/show_bug.cgi?id=904274 against the RHOS product requesting this enhancement. I was not able to add your email address to the BZ because it is not registered, so you may want to do this yourself. One option to consider would be using the openvswitch plugin, which does use pre-configured OVS bridges. -Bob From dhkarimi at sei.cmu.edu Mon Jan 28 12:47:42 2013 From: dhkarimi at sei.cmu.edu (Derrick H. Karimi) Date: Mon, 28 Jan 2013 12:47:42 +0000 Subject: [rhos-list] Installing Openstack Essex on isolated RHEL 6.2 cluster In-Reply-To: <421EE192CD0C6C49A23B97027914202B03B233@marathon> References: <421EE192CD0C6C49A23B97027914202B03AF0C@marathon> <5102CE75.7080102@redhat.com> <5102F1B9.30705@redhat.com> <421EE192CD0C6C49A23B97027914202B03B17C@marathon> <421EE192CD0C6C49A23B97027914202B03B233@marathon> Message-ID: <421EE192CD0C6C49A23B97027914202B03E13B@marchand> On 01/25/2013 05:32 PM, Derrick H. Karimi wrote: > On 01/25/2013 04:04 PM, Derrick H.
Karimi wrote: >> On 01/25/2013 03:57 PM, Perry Myers wrote: >>> On 01/25/2013 03:43 PM, Paul Robert Marino wrote: >>> >>> >>> The Essex packages are a different channel than the Folsom packages. >>> Unfortunately I'm not sure if that channel is still available which >>> may explain why you don't see them. >>> The RHOS Preview still should provide access to both the Essex and >>> Folsom channels. >> Ok I am trying to figure out how to get the Preview into the Satellite >> server. Or if the Satellite server for sure does have Essex on it. > yum list --showduplicates openstack-nova-common* reveals that 2012.1.3-1 > should be available. I am gonna try to force yum to install that version > somehow. I got one of my team members, Kodiak, involved. He maintains the Satellite server, and was able to figure out, among other things, that we were seeing OpenStack packages from our mirrored EPEL. We are not sure if we still need the "preview" but he signed up for it anyway. I was able to make yum install the Essex packages by specifying complete package names. I went to the projects' websites and tried to determine which versions matched the Essex release: yum install openstack-nova-2012.1.3-1.el6.noarch openstack-glance-2012.1-5.el6.noarch openstack-keystone-2012.1.3-1.el6.noarch openstack-swift-1.4.8-1.el6.noarch Some weird stuff started happening with the dependencies of keystone 2012.1.3-1: once yum processed it, it told me it wanted to install the 2012.2 version of the same library! I think I finally got around this by grabbing RPMs directly from our Satellite's RPM search page on the web interface. And I do: rpm -i python-keystone-auth-token-2012.1.3-1.el6.noarch.rpm rpm -i python-keystone-2012.1.3-1.el6.noarch.rpm And then I yum in the Essex versions of the other OpenStack projects. In the end I have OpenStack up, and am trying to configure it now. I can launch instances, but for some reason I can't delete them. --Derrick
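[Editor's sketch] The version-pinning dance in the message above can be collected into one script. This is an untested sketch: the NVRs are the ones quoted in the thread, and whether they are still present in your channels (and whether the yum-plugin-versionlock package is available there) are assumptions; only the final count is actually executed here, with the yum calls left as comments.

```shell
# First confirm which versions your channels actually offer
# (--showduplicates is an option of "yum list", not of bare yum):
#   yum list --showduplicates 'openstack-*'

# The fully qualified Essex NVRs reported in the thread, one per line:
essex_pkgs="openstack-nova-2012.1.3-1.el6.noarch
openstack-glance-2012.1-5.el6.noarch
openstack-keystone-2012.1.3-1.el6.noarch
openstack-swift-1.4.8-1.el6.noarch"

# Install each complete name-version-release so depsolving cannot drift
# to the newer Folsom (2012.2) builds pulled in from a mirrored EPEL:
#   yum install $essex_pkgs
# Optionally lock them in place (assumes yum-plugin-versionlock exists
# in your channels):
#   yum install yum-plugin-versionlock && yum versionlock add $essex_pkgs

# The only part run here: count the packages being pinned.
echo "$essex_pkgs" | wc -l
```

The point of the explicit NVRs is simply that yum's default "newest wins" depsolving is what dragged in the 2012.2 python-keystone dependencies described above; naming exact versions (or versionlocking them) keeps the transaction inside Essex.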