From lists at alteeve.ca Sat Feb 1 01:05:26 2014 From: lists at alteeve.ca (Digimer) Date: Fri, 31 Jan 2014 20:05:26 -0500 Subject: [rhelv6-list] Highly available OpenLDAP In-Reply-To: <6F56410FBED1FC41BCA804E16F594B0B33054720@chvpkw8xmbx05.chvpk.chevrontexaco.net> References: <6F56410FBED1FC41BCA804E16F594B0B33054720@chvpkw8xmbx05.chvpk.chevrontexaco.net> Message-ID: <52EC4856.7030302@alteeve.ca> On 31/01/14 06:14 PM, Collins, Kevin [Contractor Acquisition Program] wrote: > Hi all, > > I'm looking for a little input on what other folks are > doing to solve a problem we are trying to address. The scenario is as > follows: > > We were an NIS shop for many, many years. Our environment was (and still > is) heavily dependent on NIS, and netgroups in particular, to function > correctly. > > About 5 or 6 years ago we migrated from NIS to LDAP (using RFC2307 to > provide NIS maps via LDAP). The environment at the time consisted of > less than 200 servers (150 in the primary site, the rest in a secondary > site), mostly HP-UX with Linux playing the part of "utility" services > (LDAP, DNS, mysql, httpd, VNC). > > We use LDAP only to provide the standard NIS "maps" (with a few small > custom maps, too). > > We maintain our own LDAP servers with the RHEL-provided OpenLDAP, with a > single master in our primary site in conjunction with 2 replica servers > in our primary site and 2 replica servers in our secondary site. > Replication was using the slurpd mechanism (we started on RHEL3). > > Life was good :) > > Fast forward to the current environment, and a merger with a different Unix > team (and migrating that environment from NIS to LDAP as well). We now > have close to 1000 servers (mix of physical and VM): roughly 400 each > for our 2 primary sites and the rest scattered across another 3 sites. > The mix is now much more heavily Linux (70%), with the remaining 30% > split between HP-UX and Solaris. > > We have increased the number of replicas, adding 2 more replicas in each > of the new sites. > > We are still (mostly) using slurpd for replication, although with the > impending migration of our LDAP master from RHEL5 to RHEL6, we must > change to using sync-repl. No problem, as this is (IMO) a much better > replication method and relieves the worries and headaches that occur > when a replica for some reason becomes "broken" for some period of time. > We have already started this migration, and our master now handles both > slurpd (to old replicas) and sync-repl (from new replicas). > > In our environment, each site is configured to point to LDAP > services by IP address. Two IP addresses per site which are > "load-balanced" by alternating which IP is first and second in the > config files based on whether the last octet of the client IP address is > even or odd. This is done as a very basic way to distribute the load. > > Now comes the crux of the problem: what happens when an LDAP server > becomes unavailable for some reason? > > If the client is HP-UX (ldapclientd), Solaris (ldap_cachemgr) or RHEL6 > (nslcd) there is not much of an issue as long as 1 LDAP replica in each > site is functioning. The specific LDAP-daemon for each platform will > have a small hiccup while it times out and falls over to the next LDAP > replica... a few seconds, not a big deal. > > If, however, the client is RHEL4 (yes, still!) or RHEL5 then the problem > is much bigger! On these versions, each process that needs to use LDAP > must go through the exact same timeout process -
the systems become very > bogged down, or even unusable depending on the server load. > > In one subset of our larger environment (about 40%), we run nscd which > can help alleviate some of this issue but not all of it. We are planning > to enable nscd on the remainder very soon - the historical reasoning for > why those servers do not use nscd is unknown. > > Last year, I started investigating and testing the use of LVS (Linux > Virtual Server) to provide a highly available (aka, clustered), > load-balanced front-end that would direct client requests for a single > IP address (per site) to the backend LDAP servers. Results were very > good, and I proposed this plan to our management. > > DENIED! > > It was deemed to be "too complex to manage" by our team, and redundant > to the BigIP F5 service offering within the company. I tend to favor > self-management of infrastructure components which are critical to > maintaining system functionality, but what do I know? :) > > So, we are now looking down the route of using F5 (managed by another > team) to front-end our LDAP. > > But, another option has been proposed: what if we make each Linux server > an LDAP replica that keeps itself up to date with sync-repl and have > each server use only itself for LDAP services? The setup of this would > be fairly straightforward, and could be easily integrated into our build > process. > > Since we don't make massive volumes of changes, I feel like the network > load for LDAP would probably drop significantly, and we don't have to > worry about many of these other issues. I know that this solves the > problem only for Linux, but Solaris and HP-UX already handle the problem > case and are being phased out of our environment. > > Anyway, thanks for reading this novel - I had not intended to write so > much, but wanted to set the foundation for my question. > > What are you people doing to solve this problem? Are you using F5? Do > you think the "every server a replica" approach makes sense? > > I am posting to both the RHEL5 and RHEL6 lists, sorry if you see it twice. > > Thanks in advance for your input. > > Kevin Hi Kevin, Full disclosure; I am recommending something I helped design, so I am biased. :) We've created a (totally open source) HA platform based on RHEL 6 for KVM VMs running in a two-node HA setup. The reason I am mentioning this is that I think it might directly address your manager's concern about complexity. We've built a web front-end for managing the HA side, designed to be used by non-IT people. That said, it's a *pure* HA solution, no load balancing, which might make it ineligible for you. Here are the build instructions: https://alteeve.ca/w/AN!Cluster_Tutorial_2 That is designed for people who want to build things from the ground up, so the WebUI isn't highlighted very strongly. For that, you can get a better idea from the (still being written) manual here: https://alteeve.ca/w/AN!CDB Understanding that this might come off as spamming, I want to underline that it's all open code and design. The platform itself has been field tested for years in mission-critical (mostly manufacturing, some scientific/imaging) environments. One more note: the design was built around not touching the guests at all. Beyond the (optional) virtio block and net drivers, the guests are effectively oblivious to the HA underneath them. This is nothing special to our project, of course. This is a general benefit of KVM.
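(As for the "every server a replica" idea in the original post, the consumer side of syncrepl is genuinely small on the RHEL-provided OpenLDAP (2.4.x on RHEL 6). A minimal read-only consumer stanza in slapd.conf looks roughly like the following -- the host name, DNs and credential here are placeholders, not a recommendation:

  syncrepl rid=001
           provider=ldap://ldap-master.example.com
           type=refreshAndPersist
           retry="60 10 300 +"
           searchbase="dc=example,dc=com"
           bindmethod=simple
           binddn="cn=syncuser,dc=example,dc=com"
           credentials=secret
  updateref ldap://ldap-master.example.com

Each client would then just point nss_ldap/nslcd at ldap://127.0.0.1/.)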
We've also tested this with Solaris 11, FreeBSD (which I don't think you use) and several flavours of Linux and Windows. The only things not tested are true Unixes, though I suspect you're not concerned about that here. Hope this helps offer an option for your predicament. :) -- Digimer Papers and Projects: https://alteeve.ca/w/ What if the cure for cancer is trapped in the mind of a person without access to education? From KCollins at chevron.com Mon Feb 3 16:59:59 2014 From: KCollins at chevron.com (Collins, Kevin [Contractor Acquisition Program]) Date: Mon, 3 Feb 2014 16:59:59 +0000 Subject: [rhelv6-list] [rhelv5-list] Highly available OpenLDAP In-Reply-To: References: <6F56410FBED1FC41BCA804E16F594B0B33054720@chvpkw8xmbx05.chvpk.chevrontexaco.net> Message-ID: <6F56410FBED1FC41BCA804E16F594B0B33057C53@chvpkw8xmbx05.chvpk.chevrontexaco.net> Thanks for the response. As I mentioned, we started into LDAP quite a while back (very much pre-sssd), so when it came time to implement in RHEL6 we decided to do things in such a way as to keep support/training minimal. Using nslcd was very similar to the existing method available before sssd. The daemon-based setup was also very similar to the HP-UX and Solaris solutions. Until all of our systems are Linux we will probably not put much effort into re-working the core of what we are doing - too many bigger fish to fry, and not enough time or people. Other than the occasional LDAP replica server outage (very, very few over our entire history) the system we have just works... Thanks for not (really) bringing up RHDS/389... ;) Kevin -----Original Message----- From: rhelv5-list-bounces at redhat.com [mailto:rhelv5-list-bounces at redhat.com] On Behalf Of Bryan J Smith Sent: Friday, January 31, 2014 3:32 PM To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list Cc: rhelv5-list at redhat.com Subject: Re: [rhelv5-list] [rhelv6-list] Highly available OpenLDAP To start ... Why nscd and nslcd? Haven't you considered moving to sssd on your RHEL6 and RHEL5.8+ systems? So many benefits ... so, so many, client and server ... especially caching, which is far more deterministic and far more stable too! It's just a matter of putting in that small amount of time to figure out the details for sssd. E.g., a common one I run into is lax security, allowed by prior nss, pam, etc. modules, but not by sssd by default (though it can be disabled). I know that doesn't solve your non-RHEL/Fedora client load, but just FYI. -- bjs P.S. I'm purposely not mentioning RHDS (389), but there's that age-old argument, especially if this is already costing your organization money in time. ;) Plus there are now the RHEL6 built-in IdM (IPA) services. I.e., if you're just serving what is, essentially, converted NIS maps ... even "free" IdM might be all you need. It can service legacy LDAP clients as well, although I don't know your environment and all of your schema. -- Bryan J Smith - UCF '97 Engr - http://www.linkedin.com/in/bjsmith ----------------------------------------------------------------- "In a way, Bortles is the personification of the UCF football program. Each has many of the elements that everyone claims to want, and yet they are nobody's first choice. Coming out of high school, Bortles had the size and the arm to play at a more prestigious program. UCF likewise has the market size and the talent base to play in a more prestigious conference than the American Athletic. But timing and circumstances conspired to put both where they are now."
-- Andy Staples, CNN-Sports Illustrated On Fri, Jan 31, 2014 at 6:14 PM, Collins, Kevin [Contractor Acquisition Program] wrote: > Hi all, > > > > I'm looking for a little input on what other folks are doing to > solve a problem we are trying to address. The scenario is as follows: > > > > We were an NIS shop for many, many years. Our environment was (and still is) > heavily dependant on NIS, and netgroups in particular, to function > correctly. > > > > About 5 or 6 years ago we migrated from NIS to LDAP (using RFC2307 to > provide NIS maps via LDAP). The environment at the time consisted of less > than 200 servers (150 in primary site, the rest in a secondary site), mostly > HP-UX with Linux playing the part of "utility" services (LDAP, DNS, mysql, > httpd, VNC). > > > > We use LDAP only to provide the standard NIS "maps" (with a few small custom > maps, too). > > > > We maintain our our LDAP servers with the RHEL-provided OpenLDAP, with a > single master in our primary site in conjunction with 2 replica servers in > our primary site and 2 replica servers in our secondary site. Replication > was using the slurpd mechanism (we started on RHEL3). > > > > Life was good J > > > > Fast forward to current environment, and a merger with a different Unix team > (and migrating that environment from NIS to LDAP as well). We now have close > to 1000 servers (mix of physical and VM): roughly 400 each for our 2 primary > sites and the rest scattered across another 3 sites. The mix is now much > more heavily Linux (70%), which the remaining 30% split between HP-UX and > Solaris. > > > > We have increased the number of replicas adding 2 more replicas in each of > the new sites. > > > > We are still (mostly) using slurpd for replication, although with the > impending migration of our LDAP master from RHEL5 to RHEL6, we must change > to using sync-repl. No problem, as this is (IMO) a much better replication > method and relieves the worries and headaches that occur when a replica for > some reason becomes "broken" for some period of time. We have already > started this migration, and our master now handles both slurpd (to old > replicas) and sync-repl (from new replicas). > > > > In our environment, each site has is configured to point to LDAP services by > IP address. Two IP addresses per site which are "load-balanced" by > alternating which IP is first and second in the config files based on > whether the last octet of the client IP address is even or odd. This is done > as very basic way to distribute the load. > > > > Now comes the crux of the problem: what happens when an LDAP server becomes > unavailable for some reason? > > > > If the client is HP-UX (ldapclientd), Solaris (ldap_cachemgr) or RHEL6 > (nslcd) there is not much of an issue as long as 1 LDAP replica in each site > is functioning. The specific LDAP-daemon for each platform will have a small > hiccup while it times out and falls over to the next LDAP replica... a few > seconds, not a big deal. > > > > If, however, the client is RHEL4 (yes, still!) or RHEL5 then the problem is > much bigger! On these versions, each process that needs to use LDAP must go > thru the exact same timeout process - the systems become very bogged down, > or even unusable depending on the server load. > > > > In one subset of our larger environment (about 40%), we run nscd which can > help alleviate some of this issue but not all of it. 
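(For reference, the nscd caching being talked about here is driven by a handful of lines in /etc/nscd.conf; the values below are illustrative only, and note that the nscd shipped with RHEL 4/5/6 does not cache netgroup lookups, which is part of why it only helps with some of the pain:

  enable-cache            passwd          yes
  positive-time-to-live   passwd          600
  negative-time-to-live   passwd          20
  enable-cache            group           yes
  positive-time-to-live   group           3600
  check-files             passwd          yes
  check-files             group           yes

followed by "service nscd start; chkconfig nscd on" on each box.)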
We are planning to > enable nscd on the remainder very soon - the historical reasoning for why > those servers do not use nscd is unknown. > > > > Last year, I started investigating and testing the use of LVS (Linux Virtual > Server) to provide a highly available (aka, clustered), load-balanced > front-end that would direct client requests for a single IP address (per > site) to the backend LDAP servers. Results were very good, and I proposed > this plan to our management. > > > > DENIED! > > > > It was deemed to be "too complex to manage" by our team, and redundant to > the BigIP F5 service offering with the company. I tend to favor > self-management of infrastructure components which are critical to > maintaining system functionality, but what do I know? J > > > > So, we are now looking down the route of using F5 (managed by another team) > to front-ent our LDAP > > > > But, another option has been proposed: what if we make each linux server an > LDAP replica that keeps itself up to date with sync-repl and have each > server use only itself for LDAP services? The setup of this would be fairly > straightforward, and could be easily integrated into our build process. > > > > Since we don't make massive volumes of changes, I feel like the network load > for LDAP would probably drop significantly, and we don't have to worry about > many of these other issues. I know that this solves the problem only for > Linux, but Solaris and HP-UX already handle the problem case are are being > phased out of our environment. > > > > Anyway, thanks for reading this novel - had not intended to write so much, > but wanted to set the foundation for my question. > > > > What are you people doing to solve this problem? Are you using F5? Do you > think the "every server a replica" approach makes sense? > > > > I am posting to both RHEL5 and RHEL6 lists, sorry if you see it twice. > > > > Thanks in advance for your input. > > > > Kevin > > > > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list _______________________________________________ rhelv5-list mailing list rhelv5-list at redhat.com https://www.redhat.com/mailman/listinfo/rhelv5-list From lists at alteeve.ca Mon Feb 3 17:11:32 2014 From: lists at alteeve.ca (Digimer) Date: Mon, 03 Feb 2014 12:11:32 -0500 Subject: [rhelv6-list] Highly available OpenLDAP In-Reply-To: <6F56410FBED1FC41BCA804E16F594B0B33057C85@chvpkw8xmbx05.chvpk.chevrontexaco.net> References: <6F56410FBED1FC41BCA804E16F594B0B33054720@chvpkw8xmbx05.chvpk.chevrontexaco.net> <52EC4856.7030302@alteeve.ca> <6F56410FBED1FC41BCA804E16F594B0B33057C85@chvpkw8xmbx05.chvpk.chevrontexaco.net> Message-ID: <52EFCDC4.4040501@alteeve.ca> hehe, totally understand. Hard to change tact in a large org. That said, licensing (or the lack there-of) can sometimes be a strong wind. ;) Cheers! digimer On 03/02/14 12:10 PM, Collins, Kevin [Contractor Acquisition Program] wrote: > Thanks - I'll take a look at this when I get a bit of time, but unless "KVM" translates directly to "VMWare" it won't happen in our environment, unfortunately. The current "answer" to VMs in our environment is "VMWare", and we don't really have any say beyond that. 
> > Kevin > -----Original Message----- > From: Digimer [mailto:lists at alteeve.ca] > Sent: Friday, January 31, 2014 5:05 PM > To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list; rhelv5-list at redhat.com > Cc: Collins, Kevin [Contractor Acquisition Program] > Subject: Re: [rhelv6-list] Highly available OpenLDAP > > On 31/01/14 06:14 PM, Collins, Kevin [Contractor Acquisition Program] wrote: >> Hi all, >> >> I'm looking for a little input on what other folks are >> doing to solve a problem we are trying to address. The scenario is as >> follows: >> >> We were an NIS shop for many, many years. Our environment was (and still >> is) heavily dependant on NIS, and netgroups in particular, to function >> correctly. >> >> About 5 or 6 years ago we migrated from NIS to LDAP (using RFC2307 to >> provide NIS maps via LDAP). The environment at the time consisted of >> less than 200 servers (150 in primary site, the rest in a secondary >> site), mostly HP-UX with Linux playing the part of "utility" services >> (LDAP, DNS, mysql, httpd, VNC). >> >> We use LDAP only to provide the standard NIS "maps" (with a few small >> custom maps, too). >> >> We maintain our our LDAP servers with the RHEL-provided OpenLDAP, with a >> single master in our primary site in conjunction with 2 replica servers >> in our primary site and 2 replica servers in our secondary site. >> Replication was using the slurpd mechanism (we started on RHEL3). >> >> Life was good J >> >> Fast forward to current environment, and a merger with a different Unix >> team (and migrating that environment from NIS to LDAP as well). We now >> have close to 1000 servers (mix of physical and VM): roughly 400 each >> for our 2 primary sites and the rest scattered across another 3 sites. >> The mix is now much more heavily Linux (70%), which the remaining 30% >> split between HP-UX and Solaris. >> >> We have increased the number of replicas adding 2 more replicas in each >> of the new sites. >> >> We are still (mostly) using slurpd for replication, although with the >> impending migration of our LDAP master from RHEL5 to RHEL6, we must >> change to using sync-repl. No problem, as this is (IMO) a much better >> replication method and relieves the worries and headaches that occur >> when a replica for some reason becomes "broken" for some period of time. >> We have already started this migration, and our master now handles both >> slurpd (to old replicas) and sync-repl (from new replicas). >> >> In our environment, each site has is configured to point to LDAP >> services by IP address. Two IP addresses per site which are >> "load-balanced" by alternating which IP is first and second in the >> config files based on whether the last octet of the client IP address is >> even or odd. This is done as very basic way to distribute the load. >> >> Now comes the crux of the problem: what happens when an LDAP server >> becomes unavailable for some reason? >> >> If the client is HP-UX (ldapclientd), Solaris (ldap_cachemgr) or RHEL6 >> (nslcd) there is not much of an issue as long as 1 LDAP replica in each >> site is functioning. The specific LDAP-daemon for each platform will >> have a small hiccup while it times out and falls over to the next LDAP >> replica... a few seconds, not a big deal. >> >> If, however, the client is RHEL4 (yes, still!) or RHEL5 then the problem >> is much bigger! 
On these versions, each process that needs to use LDAP >> must go thru the exact same timeout process - the systems become very >> bogged down, or even unusable depending on the server load. >> >> In one subset of our larger environment (about 40%), we run nscd which >> can help alleviate some of this issue but not all of it. We are planning >> to enable nscd on the remainder very soon - the historical reasoning for >> why those servers do not use nscd is unknown. >> >> Last year, I started investigating and testing the use of LVS (Linux >> Virtual Server) to provide a highly available (aka, clustered), >> load-balanced front-end that would direct client requests for a single >> IP address (per site) to the backend LDAP servers. Results were very >> good, and I proposed this plan to our management. >> >> DENIED! >> >> It was deemed to be "too complex to manage" by our team, and redundant >> to the BigIP F5 service offering with the company. I tend to favor >> self-management of infrastructure components which are critical to >> maintaining system functionality, but what do I know? J >> >> So, we are now looking down the route of using F5 (managed by another >> team) to front-ent our LDAP >> >> But, another option has been proposed: what if we make each linux server >> an LDAP replica that keeps itself up to date with sync-repl and have >> each server use only itself for LDAP services? The setup of this would >> be fairly straightforward, and could be easily integrated into our build >> process. >> >> Since we don't make massive volumes of changes, I feel like the network >> load for LDAP would probably drop significantly, and we don't have to >> worry about many of these other issues. I know that this solves the >> problem only for Linux, but Solaris and HP-UX already handle the problem >> case are are being phased out of our environment. >> >> Anyway, thanks for reading this novel - had not intended to write so >> much, but wanted to set the foundation for my question. >> >> What are you people doing to solve this problem? Are you using F5? Do >> you think the "every server a replica" approach makes sense? >> >> I am posting to both RHEL5 and RHEL6 lists, sorry if you see it twice. >> >> Thanks in advance for your input. >> >> Kevin > > Hi Kevin, > > Full disclosure; I am recommending something I helped design, so I am > biased. :) > > We've created a (totally open source) HA platform based on RHEL 6 for > KVM VMs running in a two-node HA setup. The reason I am mentioning this > is because I think it might directly address your manager's concern > about complexity. We've build a web front-end for managing the HA side > designed to be used by non-IT people. This said, it's a *pure* HA > solution, no load balancing, which might make it ineligible for you. > > Here are the build instructions: > > https://alteeve.ca/w/AN!Cluster_Tutorial_2 > > That is designed for people who want to build things from the ground > up, so the WebUI isn't highlighted very strongly. For that, you can get > a better idea of the (still being written manual) here: > > https://alteeve.ca/w/AN!CDB > > Understanding this might be coming off as spamming, I want to > underline that it's all open code and design. The platform itself has > been field tested for years in mission-critical (mostly manufacturing, > some scientific/imaging) environments. > > One more note; > > The design was built around not touching the guests at all. 
Beyond > the (optional) virtio block and net drivers, the guests are effectively > oblivious to the HA underneath them. This is nothing special to our > project, of course. This is a general benefit of KVM. We've also tested > this with Solaris 11, FreeBSD (which I don't think you use) and several > flavours of linux and windows. The only thing not tested are true > Unixes, though I suspect you're not concerned about that here. > > Hope this helps offer an option for your predicament. :) > -- Digimer Papers and Projects: https://alteeve.ca/w/ What if the cure for cancer is trapped in the mind of a person without access to education? From KCollins at chevron.com Mon Feb 3 17:10:01 2014 From: KCollins at chevron.com (Collins, Kevin [Contractor Acquisition Program]) Date: Mon, 3 Feb 2014 17:10:01 +0000 Subject: [rhelv6-list] Highly available OpenLDAP In-Reply-To: <52EC4856.7030302@alteeve.ca> References: <6F56410FBED1FC41BCA804E16F594B0B33054720@chvpkw8xmbx05.chvpk.chevrontexaco.net> <52EC4856.7030302@alteeve.ca> Message-ID: <6F56410FBED1FC41BCA804E16F594B0B33057C85@chvpkw8xmbx05.chvpk.chevrontexaco.net> Thanks - I'll take a look at this when I get a bit of time, but unless "KVM" translates directly to "VMWare" it won't happen in our environment, unfortunately. The current "answer" to VMs in our environment is "VMWare", and we don't really have any say beyond that. Kevin -----Original Message----- From: Digimer [mailto:lists at alteeve.ca] Sent: Friday, January 31, 2014 5:05 PM To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list; rhelv5-list at redhat.com Cc: Collins, Kevin [Contractor Acquisition Program] Subject: Re: [rhelv6-list] Highly available OpenLDAP On 31/01/14 06:14 PM, Collins, Kevin [Contractor Acquisition Program] wrote: > Hi all, > > I'm looking for a little input on what other folks are > doing to solve a problem we are trying to address. The scenario is as > follows: > > We were an NIS shop for many, many years. Our environment was (and still > is) heavily dependant on NIS, and netgroups in particular, to function > correctly. > > About 5 or 6 years ago we migrated from NIS to LDAP (using RFC2307 to > provide NIS maps via LDAP). The environment at the time consisted of > less than 200 servers (150 in primary site, the rest in a secondary > site), mostly HP-UX with Linux playing the part of "utility" services > (LDAP, DNS, mysql, httpd, VNC). > > We use LDAP only to provide the standard NIS "maps" (with a few small > custom maps, too). > > We maintain our our LDAP servers with the RHEL-provided OpenLDAP, with a > single master in our primary site in conjunction with 2 replica servers > in our primary site and 2 replica servers in our secondary site. > Replication was using the slurpd mechanism (we started on RHEL3). > > Life was good J > > Fast forward to current environment, and a merger with a different Unix > team (and migrating that environment from NIS to LDAP as well). We now > have close to 1000 servers (mix of physical and VM): roughly 400 each > for our 2 primary sites and the rest scattered across another 3 sites. > The mix is now much more heavily Linux (70%), which the remaining 30% > split between HP-UX and Solaris. > > We have increased the number of replicas adding 2 more replicas in each > of the new sites. > > We are still (mostly) using slurpd for replication, although with the > impending migration of our LDAP master from RHEL5 to RHEL6, we must > change to using sync-repl. 
No problem, as this is (IMO) a much better > replication method and relieves the worries and headaches that occur > when a replica for some reason becomes "broken" for some period of time. > We have already started this migration, and our master now handles both > slurpd (to old replicas) and sync-repl (from new replicas). > > In our environment, each site has is configured to point to LDAP > services by IP address. Two IP addresses per site which are > "load-balanced" by alternating which IP is first and second in the > config files based on whether the last octet of the client IP address is > even or odd. This is done as very basic way to distribute the load. > > Now comes the crux of the problem: what happens when an LDAP server > becomes unavailable for some reason? > > If the client is HP-UX (ldapclientd), Solaris (ldap_cachemgr) or RHEL6 > (nslcd) there is not much of an issue as long as 1 LDAP replica in each > site is functioning. The specific LDAP-daemon for each platform will > have a small hiccup while it times out and falls over to the next LDAP > replica... a few seconds, not a big deal. > > If, however, the client is RHEL4 (yes, still!) or RHEL5 then the problem > is much bigger! On these versions, each process that needs to use LDAP > must go thru the exact same timeout process - the systems become very > bogged down, or even unusable depending on the server load. > > In one subset of our larger environment (about 40%), we run nscd which > can help alleviate some of this issue but not all of it. We are planning > to enable nscd on the remainder very soon - the historical reasoning for > why those servers do not use nscd is unknown. > > Last year, I started investigating and testing the use of LVS (Linux > Virtual Server) to provide a highly available (aka, clustered), > load-balanced front-end that would direct client requests for a single > IP address (per site) to the backend LDAP servers. Results were very > good, and I proposed this plan to our management. > > DENIED! > > It was deemed to be "too complex to manage" by our team, and redundant > to the BigIP F5 service offering with the company. I tend to favor > self-management of infrastructure components which are critical to > maintaining system functionality, but what do I know? J > > So, we are now looking down the route of using F5 (managed by another > team) to front-ent our LDAP > > But, another option has been proposed: what if we make each linux server > an LDAP replica that keeps itself up to date with sync-repl and have > each server use only itself for LDAP services? The setup of this would > be fairly straightforward, and could be easily integrated into our build > process. > > Since we don't make massive volumes of changes, I feel like the network > load for LDAP would probably drop significantly, and we don't have to > worry about many of these other issues. I know that this solves the > problem only for Linux, but Solaris and HP-UX already handle the problem > case are are being phased out of our environment. > > Anyway, thanks for reading this novel - had not intended to write so > much, but wanted to set the foundation for my question. > > What are you people doing to solve this problem? Are you using F5? Do > you think the "every server a replica" approach makes sense? > > I am posting to both RHEL5 and RHEL6 lists, sorry if you see it twice. > > Thanks in advance for your input. > > Kevin Hi Kevin, Full disclosure; I am recommending something I helped design, so I am biased. 
:) We've created a (totally open source) HA platform based on RHEL 6 for KVM VMs running in a two-node HA setup. The reason I am mentioning this is because I think it might directly address your manager's concern about complexity. We've build a web front-end for managing the HA side designed to be used by non-IT people. This said, it's a *pure* HA solution, no load balancing, which might make it ineligible for you. Here are the build instructions: https://alteeve.ca/w/AN!Cluster_Tutorial_2 That is designed for people who want to build things from the ground up, so the WebUI isn't highlighted very strongly. For that, you can get a better idea of the (still being written manual) here: https://alteeve.ca/w/AN!CDB Understanding this might be coming off as spamming, I want to underline that it's all open code and design. The platform itself has been field tested for years in mission-critical (mostly manufacturing, some scientific/imaging) environments. One more note; The design was built around not touching the guests at all. Beyond the (optional) virtio block and net drivers, the guests are effectively oblivious to the HA underneath them. This is nothing special to our project, of course. This is a general benefit of KVM. We've also tested this with Solaris 11, FreeBSD (which I don't think you use) and several flavours of linux and windows. The only thing not tested are true Unixes, though I suspect you're not concerned about that here. Hope this helps offer an option for your predicament. :) -- Digimer Papers and Projects: https://alteeve.ca/w/ What if the cure for cancer is trapped in the mind of a person without access to education? From fpicabia at gmail.com Fri Feb 7 17:23:56 2014 From: fpicabia at gmail.com (francis picabia) Date: Fri, 7 Feb 2014 13:23:56 -0400 Subject: [rhelv6-list] Mongodb clients package missing/not appearing Message-ID: Packages installed: libmongodb-2.4.6-1.el6.x86_64 mongodb-server-2.4.6-1.el6.x86_64 I've installed mongodb for a user, and I'm trying to set up the admin user. I need the client, and no such thing is appearing. I've included the Server Optional channel. # yum search mongo Loaded plugins: rhnplugin This system is receiving updates from RHN Classic or RHN Satellite. 
===================================================== N/S Matched: mongo ====================================================== python-mongoengine.noarch : A Python Document-Object Mapper for working with MongoDB php-pecl-mongo.x86_64 : PHP MongoDB database driver pymongo.x86_64 : Python driver for MongoDB python-asyncmongo.noarch : An asynchronous Python MongoDB library python-flask-mongoengine.noarch : Flask extension that provides integration with MongoEngine hunspell-mn.noarch : Mongolian hunspell dictionaries libmongodb.x86_64 : MongoDB shared libraries libmongodb-devel.i686 : MongoDB header files libmongodb-devel.x86_64 : MongoDB header files mongodb-server.x86_64 : MongoDB server, sharding server and support scripts mongoose-devel.i686 : Header files and development libraries for mongoose mongoose-devel.x86_64 : Header files and development libraries for mongoose mongoose-lib.i686 : Shared Object for applications that use mongoose embedded mongoose-lib.x86_64 : Shared Object for applications that use mongoose embedded nodejs-mongodb.noarch : A node driver for MongoDB php-Monolog-mongo.noarch : Monolog MongoDB handler php-horde-Horde-Mongo.noarch : Horde Mongo Configuration pymongo-gridfs.x86_64 : Python GridFS driver for MongoDB python-pymongo.x86_64 : Python driver for MongoDB python-pymongo-gridfs.x86_64 : Python GridFS driver for MongoDB autocorr-mn.noarch : Mongolian autocorrection rules eclipse-nls-mn.x86_64 : Eclipse/Babel language pack for Mongolian hyphen-mn.noarch : Mongolian hyphenation rules mongodb.x86_64 : High-performance, schema-free document-oriented database mongoose.x86_64 : An easy-to-use self-sufficient web server Was this forgotten or is there some hidden place where it will turn up? Any pointers other than to install the source packages for this DB server under /usr/local ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From fpicabia at gmail.com Fri Feb 7 19:40:13 2014 From: fpicabia at gmail.com (francis picabia) Date: Fri, 7 Feb 2014 15:40:13 -0400 Subject: [rhelv6-list] Mongodb clients package missing/not appearing In-Reply-To: References: Message-ID: I've given up on Redhat providing this. The best solution was to uninstall the Redhat packages and use the MongoDB repo as they document on their website. They do provide a clients package as well as other bits. On Fri, Feb 7, 2014 at 1:23 PM, francis picabia wrote: > > Packages installed: > > libmongodb-2.4.6-1.el6.x86_64 > mongodb-server-2.4.6-1.el6.x86_64 > > I've installed mongodb for a user, and I'm trying to > set up the admin user. I need the client, and no > such thing is appearing. I've included the Server > Optional channel. > > # yum search mongo > Loaded plugins: rhnplugin > This system is receiving updates from RHN Classic or RHN Satellite. 
> ===================================================== N/S Matched: mongo > ====================================================== > python-mongoengine.noarch : A Python Document-Object Mapper for working > with MongoDB > php-pecl-mongo.x86_64 : PHP MongoDB database driver > pymongo.x86_64 : Python driver for MongoDB > python-asyncmongo.noarch : An asynchronous Python MongoDB library > python-flask-mongoengine.noarch : Flask extension that provides > integration with MongoEngine > hunspell-mn.noarch : Mongolian hunspell dictionaries > libmongodb.x86_64 : MongoDB shared libraries > libmongodb-devel.i686 : MongoDB header files > libmongodb-devel.x86_64 : MongoDB header files > mongodb-server.x86_64 : MongoDB server, sharding server and support scripts > mongoose-devel.i686 : Header files and development libraries for mongoose > mongoose-devel.x86_64 : Header files and development libraries for mongoose > mongoose-lib.i686 : Shared Object for applications that use mongoose > embedded > mongoose-lib.x86_64 : Shared Object for applications that use mongoose > embedded > nodejs-mongodb.noarch : A node driver for MongoDB > php-Monolog-mongo.noarch : Monolog MongoDB handler > php-horde-Horde-Mongo.noarch : Horde Mongo Configuration > pymongo-gridfs.x86_64 : Python GridFS driver for MongoDB > python-pymongo.x86_64 : Python driver for MongoDB > python-pymongo-gridfs.x86_64 : Python GridFS driver for MongoDB > autocorr-mn.noarch : Mongolian autocorrection rules > eclipse-nls-mn.x86_64 : Eclipse/Babel language pack for Mongolian > hyphen-mn.noarch : Mongolian hyphenation rules > mongodb.x86_64 : High-performance, schema-free document-oriented database > mongoose.x86_64 : An easy-to-use self-sufficient web server > > Was this forgotten or is there some hidden place where it will turn up? > > Any pointers other than to install the source packages for this DB > server under /usr/local ? > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hs at nhn.ou.edu Fri Feb 7 23:06:28 2014 From: hs at nhn.ou.edu (Horst Severini) Date: Fri, 07 Feb 2014 17:06:28 -0600 Subject: [rhelv6-list] Gnome - KDE Desktop switching in RHEL6 In-Reply-To: References: <72F8605A4AEE17448F28CEF3F4E1925F819B4D90EA@MSGRTPCCRC2WIN.DMN1.FMR.COM> <1331913413.30630.YahooMailNeo@web113308.mail.gq1.yahoo.com> <72F8605A4AEE17448F28CEF3F4E1925F819B4D9457@MSGRTPCCRC2WIN.DMN1.FMR.COM> Message-ID: <52F566F4.9000306@nhn.ou.edu> Hi all, we are working on upgrading our desktop cluster from RHEL5 to RHEL6, and we think we have a workable solution that involves bringing up the new machines with ROCKS 6.1 and the RHEL 6.5 ISO, and then doing some post-config via scripts, but when we do that, we only have Gnome available at the login screen, and we can't figure out how to make KDE available as well. We have all the KDE RPMs installed (that I know of), via yum groupinstall "KDE (K Desktop Environment)" and I have compared the list of installed RPMs with one RHEL6 test install we did manually, and I can't figure out what's missing, since everything that looks like it's even remotely related to KDE and desktop switching is already installed, so I don't know why the ROCKS installed machines don't show the desktop switcher box at the bottom of the login screen. I tried googling around, but haven't quite found the right page yet. What are we missing? 
Thanks a lot, Horst From hugh-brown at uiowa.edu Fri Feb 7 23:41:13 2014 From: hugh-brown at uiowa.edu (Brown, Hugh M) Date: Fri, 7 Feb 2014 23:41:13 +0000 Subject: [rhelv6-list] Gnome - KDE Desktop switching in RHEL6 In-Reply-To: <52F566F4.9000306@nhn.ou.edu> References: <72F8605A4AEE17448F28CEF3F4E1925F819B4D90EA@MSGRTPCCRC2WIN.DMN1.FMR.COM> <1331913413.30630.YahooMailNeo@web113308.mail.gq1.yahoo.com> <72F8605A4AEE17448F28CEF3F4E1925F819B4D9457@MSGRTPCCRC2WIN.DMN1.FMR.COM> <52F566F4.9000306@nhn.ou.edu> Message-ID: <4CBFC57AEB90DE4A809D02F51B6B2212022BD63A@itsnt426.iowa.uiowa.edu> -----Original Message----- From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Horst Severini Sent: Friday, February 07, 2014 5:06 PM To: rhelv6-list at redhat.com Subject: [rhelv6-list] Gnome - KDE Desktop switching in RHEL6 Hi all, we are working on upgrading our desktop cluster from RHEL5 to RHEL6, and we think we have a workable solution that involves bringing up the new machines with ROCKS 6.1 and the RHEL 6.5 ISO, and then doing some post-config via scripts, but when we do that, we only have Gnome available at the login screen, and we can't figure out how to make KDE available as well. We have all the KDE RPMs installed (that I know of), via yum groupinstall "KDE (K Desktop Environment)" and I have compared the list of installed RPMs with one RHEL6 test install we did manually, and I can't figure out what's missing, since everything that looks like it's even remotely related to KDE and desktop switching is already installed, so I don't know why the ROCKS installed machines don't show the desktop switcher box at the bottom of the login screen. I tried googling around, but haven't quite found the right page yet. What are we missing? Thanks a lot, Horst gdm uses the *.desktop files in /usr/share/xsessions to determine which choices to make available at login time. /usr/share/xsessions/kdm.desktop is owned by the kdebase-workspace rpm which is part of the kde-desktop yum group. If the file is there, I'd make sure the permissions are correct on it. Hugh From gianluca.cecchi at gmail.com Sat Feb 8 10:30:06 2014 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Sat, 8 Feb 2014 11:30:06 +0100 Subject: [rhelv6-list] Gnome - KDE Desktop switching in RHEL6 In-Reply-To: <52F566F4.9000306@nhn.ou.edu> References: <72F8605A4AEE17448F28CEF3F4E1925F819B4D90EA@MSGRTPCCRC2WIN.DMN1.FMR.COM> <1331913413.30630.YahooMailNeo@web113308.mail.gq1.yahoo.com> <72F8605A4AEE17448F28CEF3F4E1925F819B4D9457@MSGRTPCCRC2WIN.DMN1.FMR.COM> <52F566F4.9000306@nhn.ou.edu> Message-ID: On Sat, Feb 8, 2014 at 12:06 AM, Horst Severini wrote: > Hi all, > > we are working on upgrading our desktop cluster from RHEL5 to RHEL6, and we > think we have a workable solution that involves bringing up the new machines > with ROCKS 6.1 and the RHEL 6.5 ISO, and then doing some post-config via > scripts, but when we do that, we only have Gnome available at the login > screen, and we can't figure out how to make KDE available as well. > > We have all the KDE RPMs installed (that I know of), via > > yum groupinstall "KDE (K Desktop Environment)" > > and I have compared the list of installed RPMs with one RHEL6 test install > we did manually, and I can't figure out what's missing, since everything > that looks like it's even remotely related to KDE and desktop switching is > already installed, so I don't know why the ROCKS installed machines don't > show the desktop switcher box at the bottom of the login screen. 
> > I tried googling around, but haven't quite found the right page yet. > What are we missing? What worked for me on a CentOS 6.5 install where I had only Gnome and the init level was 5 by default: sudo yum groupinstall "KDE Desktop" verify you have the kdm executable (in practice the kdm rpm package) vi /etc/sysconfig/desktop (it is not present by default) put into it the single line DISPLAYMANAGER=KDE $ sudo init 3 $ sudo init 5 and I get the KDE display manager. Take into account that the display manager in 6.5 is managed by the upstart init script prefdm.conf in the /etc/init directory, and at the end it runs exec /etc/X11/prefdm -nodaemon and prefdm contains: if [ -f /etc/sysconfig/desktop ]; then . /etc/sysconfig/desktop if [ "$DISPLAYMANAGER" = GNOME ]; then preferred=/usr/sbin/gdm quit_arg="--retain-splash" elif [ "$DISPLAYMANAGER" = KDE ]; then preferred=/usr/bin/kdm .... Otherwise, some lines below, it contains: # Fallbacks, in order exec gdm "$@" >/dev/null 2>&1 </dev/null exec kdm "$@" >/dev/null 2>&1 </dev/null so you can comment out the one starting with gdm and it will start kdm if it finds it... If you like a display manager not managed by prefdm, you have to create a prefdm.override in /etc/init and put there your display manager commands/options HIH, Gianluca From matthias at saou.eu Mon Feb 10 2014 From: matthias at saou.eu (Matthias Saou) Date: Mon, 10 Feb 2014 09:57:36 Subject: [rhelv6-list] Mongodb clients package missing/not appearing In-Reply-To: References: Message-ID: <20140210095736.660fad89@r2d2.marmotte.net> On Fri, 7 Feb 2014 15:40:13 -0400 francis picabia wrote: > I've given up on Redhat providing this. The best solution was to > uninstall the Redhat packages and use the MongoDB repo as they > document on their website. They do provide a clients package as well > as other bits. From your listing (below), the client is in the "mongodb" package (second to last). The "mongodb-server" is only the server. Both pull in the required libs. Matthias > On Fri, Feb 7, 2014 at 1:23 PM, francis picabia > wrote: > > > > > Packages installed: > > > > libmongodb-2.4.6-1.el6.x86_64 > > mongodb-server-2.4.6-1.el6.x86_64 > > > > I've installed mongodb for a user, and I'm trying to > > set up the admin user. I need the client, and no > > such thing is appearing. I've included the Server > > Optional channel. > > > > # yum search mongo > > Loaded plugins: rhnplugin > > This system is receiving updates from RHN Classic or RHN Satellite. 
> > ===================================================== N/S Matched: > > mongo ====================================================== > > python-mongoengine.noarch : A Python Document-Object Mapper for > > working with MongoDB > > php-pecl-mongo.x86_64 : PHP MongoDB database driver > > pymongo.x86_64 : Python driver for MongoDB > > python-asyncmongo.noarch : An asynchronous Python MongoDB library > > python-flask-mongoengine.noarch : Flask extension that provides > > integration with MongoEngine > > hunspell-mn.noarch : Mongolian hunspell dictionaries > > libmongodb.x86_64 : MongoDB shared libraries > > libmongodb-devel.i686 : MongoDB header files > > libmongodb-devel.x86_64 : MongoDB header files > > mongodb-server.x86_64 : MongoDB server, sharding server and support > > scripts mongoose-devel.i686 : Header files and development > > libraries for mongoose mongoose-devel.x86_64 : Header files and > > development libraries for mongoose mongoose-lib.i686 : Shared > > Object for applications that use mongoose embedded > > mongoose-lib.x86_64 : Shared Object for applications that use > > mongoose embedded > > nodejs-mongodb.noarch : A node driver for MongoDB > > php-Monolog-mongo.noarch : Monolog MongoDB handler > > php-horde-Horde-Mongo.noarch : Horde Mongo Configuration > > pymongo-gridfs.x86_64 : Python GridFS driver for MongoDB > > python-pymongo.x86_64 : Python driver for MongoDB > > python-pymongo-gridfs.x86_64 : Python GridFS driver for MongoDB > > autocorr-mn.noarch : Mongolian autocorrection rules > > eclipse-nls-mn.x86_64 : Eclipse/Babel language pack for Mongolian > > hyphen-mn.noarch : Mongolian hyphenation rules > > mongodb.x86_64 : High-performance, schema-free document-oriented > > database mongoose.x86_64 : An easy-to-use self-sufficient web server > > > > Was this forgotten or is there some hidden place where it will turn > > up? > > > > Any pointers other than to install the source packages for this DB > > server under /usr/local ? > > > > > > -- Matthias Saou ?? ?? ?? ?? Web: http://matthias.saou.eu/ ?????????????? Mail/XMPP: matthias at saou.eu ???? ?????? ???? ?????????????????????? GPG: 4096R/E755CC63 ?? ?????????????? ?? 8D91 7E2E F048 9C9C 46AF ?? ?? ?? ?? 21A9 7A51 7B82 E755 CC63 ???? ???? From hs at nhn.ou.edu Mon Feb 10 23:43:20 2014 From: hs at nhn.ou.edu (Horst Severini) Date: Mon, 10 Feb 2014 17:43:20 -0600 Subject: [rhelv6-list] Gnome - KDE Desktop switching in RHEL6 In-Reply-To: References: <72F8605A4AEE17448F28CEF3F4E1925F819B4D90EA@MSGRTPCCRC2WIN.DMN1.FMR.COM> <1331913413.30630.YahooMailNeo@web113308.mail.gq1.yahoo.com> <72F8605A4AEE17448F28CEF3F4E1925F819B4D9457@MSGRTPCCRC2WIN.DMN1.FMR.COM> <52F566F4.9000306@nhn.ou.edu> Message-ID: <201402102343.s1ANhKQ4018692@particle.nhn.ou.edu> Hi Hugh and Gianluca, > gdm uses the *.desktop files in /usr/share/xsessions to determine > which choices to make available at login time. > /usr/share/xsessions/kdm.desktop is owned by the kdebase-workspace rpm > which is part of the kde-desktop yum group. > > If the file is there, I'd make sure the permissions are correct on it. Hmm, that file exists and has the same ownership and permissions as gnome.desktop, so I don't think that's the problem. But after I created /etc/sysconfig/desktop with DISPLAYMANAGER=KDE and restart X, then it did come up with KDM instead of GDM, and there I do have the options to switch back and forward between KDE and Gnome, so that's great! 
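(For the archives, the whole change boils down to one small file plus bouncing the display manager; what ended up on our machines is roughly:

  # /etc/sysconfig/desktop  (not present by default)
  DISPLAYMANAGER=KDE   # which login manager prefdm starts; GDM when unset
  DESKTOP=KDE          # optional: default session type for users who never pick one

  # restart the display manager
  telinit 3 && telinit 5

The DESKTOP line only sets the default session and can be left out.)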
I still don't understand why I don't see the option to switch in GDM, but hey, at least I have a working setup now! Thanks a lot, Horst Gianluca Cecchi wrote: > On Sat, Feb 8, 2014 at 12:06 AM, Horst Severini wrote: > > Hi all, > > > > we are working on upgrading our desktop cluster from RHEL5 to RHEL6, and we > > think we have a workable solution that involves bringing up the new machines > > with ROCKS 6.1 and the RHEL 6.5 ISO, and then doing some post-config via > > scripts, but when we do that, we only have Gnome available at the login > > screen, and we can't figure out how to make KDE available as well. > > > > We have all the KDE RPMs installed (that I know of), via > > > > yum groupinstall "KDE (K Desktop Environment)" > > > > and I have compared the list of installed RPMs with one RHEL6 test install > > we did manually, and I can't figure out what's missing, since everything > > that looks like it's even remotely related to KDE and desktop switching is > > already installed, so I don't know why the ROCKS installed machines don't > > show the desktop switcher box at the bottom of the login screen. > > > > I tried googling around, but haven't quite found the right page yet. > > What are we missing? > > > What worked for me on a CentOS 6.5 install where I had only Gnome and > init level was 5 by default: > > sudo yum groupinstall "KDE Desktop" > > verify you have kdm executable (in practice the kdm rpm package) > vi /etc/sysconfig/desktop (is not present by default) > > put into it the single line > DISPLAYMANAGER=KDE > > $ sudo init 3 > $ sudo init 5 > > and I get kde diplay manager. > > take into account that display manager in 6.5 is managed by upstart > init script prefdm.conf in /etc/init directory > > and at the end it runs > exec /etc/X11/prefdm -nodaemon > > and prefdm contains: > > if [ -f /etc/sysconfig/desktop ]; then > . /etc/sysconfig/desktop > if [ "$DISPLAYMANAGER" = GNOME ]; then > preferred=/usr/sbin/gdm > quit_arg="--retain-splash" > elif [ "$DISPLAYMANAGER" = KDE ]; then > preferred=/usr/bin/kdm > .... > > Otherwise some lines below it contains: > # Fallbacks, in order > exec gdm "$@" >/dev/null 2>&1 exec kdm "$@" >/dev/null 2>&1 > so you can comment out the one starting with gdm and it will start kdm > if it finds it... > > If you like a display manager not managed by prefdm you have to create > a prefdm.override in /etc/init and put there your display manager > commands/options > > HIH, > Gianluca > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list From huruomu at gmail.com Tue Feb 11 09:20:30 2014 From: huruomu at gmail.com (Romu) Date: Tue, 11 Feb 2014 17:20:30 +0800 Subject: [rhelv6-list] Kickstart for RHEL7 full install Message-ID: Sorry I didn't find any mailing list for RHEL7. Has anyone succeeded in RHEL7 kickstart installation with all packages selected? I use this for RHEL6 x86_64: %packages * *.i686 - at Conflicts (Server) %end I tried the same for RHEL7, turns out RHEL7 doesn't recognize "- at Conflicts (Server)", if I remove the line from kickstart, the installation program will report many file conflicts and installation can't proceed. Any idea? Thanks Romu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From David.Grierson at bskyb.com Tue Feb 11 09:43:09 2014 From: David.Grierson at bskyb.com (Grierson, David) Date: Tue, 11 Feb 2014 09:43:09 +0000 Subject: [rhelv6-list] Kickstart for RHEL7 full install In-Reply-To: References: Message-ID: <841FE3240F12C441AF08EABCD5B00AFB48686CEF@CHIXMBX04.bskyb.com> You could try performing a full install manually with all of the packages selected and then having a look at the generated anaconda-ks.cfg kickstart file saved in /root. Dg. ________________________________ -- David Grierson - SDLC Tools Specialist Sky Broadcasting - Customer Business Systems - SDLC Tools Tel: +44 1506 325100 / Email: David.Grierson at bskyb.com / Chatter: CBS SDLC Tools Watermark Building, Alba Campus, Livingston, EH54 7HH From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Romu Sent: 11 February 2014 09:21 To: rhelv6-list at redhat.com Subject: [rhelv6-list] Kickstart for RHEL7 full install Sorry I didn't find any mailing list for RHEL7. Has anyone succeeded in RHEL7 kickstart installation with all packages selected? I use this for RHEL6 x86_64: %packages * *.i686 - at Conflicts (Server) %end I tried the same for RHEL7, turns out RHEL7 doesn't recognize "- at Conflicts (Server)", if I remove the line from kickstart, the installation program will report many file conflicts and installation can't proceed. Any idea? Thanks Romu Information in this email including any attachments may be privileged, confidential and is intended exclusively for the addressee. The views expressed may not be official policy, but the personal views of the originator. If you have received it in error, please notify the sender by return e-mail and delete it from your system. You should not reproduce, distribute, store, retransmit, use or disclose its contents to anyone. Please note we reserve the right to monitor all e-mail communication through our internal and external networks. SKY and the SKY marks are trademarks of British Sky Broadcasting Group plc and Sky International AG and are used under licence. British Sky Broadcasting Limited (Registration No. 2906991), Sky-In-Home Service Limited (Registration No. 2067075) and Sky Subscribers Services Limited (Registration No. 2340150) are direct or indirect subsidiaries of British Sky Broadcasting Group plc (Registration No. 2247735). All of the companies mentioned in this paragraph are incorporated in England and Wales and share the same registered office at Grant Way, Isleworth, Middlesex TW7 5QD. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sandro.Roth at zurich-airport.com Tue Feb 11 10:29:31 2014 From: Sandro.Roth at zurich-airport.com (Roth, Sandro) Date: Tue, 11 Feb 2014 10:29:31 +0000 Subject: [rhelv6-list] Software Collections Message-ID: Hi I'm trying to subscribe a system to the redhat software collections repo. The system is up to date RHEL 6.5 and registered using subscription management. # subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Current I have requested access to SCL via https://www.redhat.com/GetRedHatSoftwareCollections.html and got a response saying we already have access to it. # yum-config-manager --enable rhel-server-rhscl-6-rpms ... does nothing. # subscription-manager repos | grep scl ... returns nothing. I guess I'm missing a step? 
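(The sequence I would expect to need, assuming a subscription that actually carries Red Hat Software Collections is available to this system -- the pool ID below is just a placeholder:

  subscription-manager list --available --all        # find the pool that provides RHSCL
  subscription-manager attach --pool=POOL_ID_FROM_THE_LIST_OUTPUT
  subscription-manager repos | grep -i rhscl         # repo should now be listed
  yum-config-manager --enable rhel-server-rhscl-6-rpms

If no attached entitlement provides the repo, the last two commands come back empty, which matches what I'm seeing.)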
Any help would be appreciated :) Thanks Sandro Sandro Roth Systems Engineer IT-Operations Flughafen Z?rich AG Postfach CH-8058 Z?rich-Flughafen www.flughafen-zuerich.ch This email message and any attachments are confidential and may be privileged. If you are not the intended recipient, please notify us immediately and destroy the original transmittal. You are hereby notified that any review, copying or distribution of it is strictly prohibited. Thank you for your cooperation. Header information contained in E-mails to and from the company are monitored for operational reasons in accordance with swiss data protection act. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hs at nhn.ou.edu Tue Feb 11 22:42:51 2014 From: hs at nhn.ou.edu (Horst Severini) Date: Tue, 11 Feb 2014 16:42:51 -0600 Subject: [rhelv6-list] Gnome - KDE Desktop switching in RHEL6 In-Reply-To: <201402102343.s1ANhKQ4018692@particle.nhn.ou.edu> References: <72F8605A4AEE17448F28CEF3F4E1925F819B4D90EA@MSGRTPCCRC2WIN.DMN1.FMR.COM> <1331913413.30630.YahooMailNeo@web113308.mail.gq1.yahoo.com> <72F8605A4AEE17448F28CEF3F4E1925F819B4D9457@MSGRTPCCRC2WIN.DMN1.FMR.COM> <52F566F4.9000306@nhn.ou.edu> <201402102343.s1ANhKQ4018692@particle.nhn.ou.edu> Message-ID: <201402112242.s1BMgp4m008675@particle.nhn.ou.edu> Hi again, quick update: it turns out that the problem I thought I had didn't actually exist ... !!! !@#$%^&* The issue seems to be that only after you enter a username on the GDM login screen and press Enter -- only THEN does the desktop switcher tool appear at the bottom of the screen!!! I realized that today when I logged in to one of the machines I hadn't switched yet. Is that really necessary to hide this option like that, which made me waste a week's worth of trying to figure out a problem that doesn't even exist? Hrmph ... :\ Cheers, Horst Horst Severini wrote: > Hi Hugh and Gianluca, > > > gdm uses the *.desktop files in /usr/share/xsessions to determine > > which choices to make available at login time. > > /usr/share/xsessions/kdm.desktop is owned by the kdebase-workspace rpm > > which is part of the kde-desktop yum group. > > > > If the file is there, I'd make sure the permissions are correct on it. > > Hmm, that file exists and has the same ownership and permissions as > gnome.desktop, so I don't think that's the problem. > > But after I created /etc/sysconfig/desktop with > > DISPLAYMANAGER=KDE > > and restart X, then it did come up with KDM instead of GDM, > and there I do have the options to switch back and forward > between KDE and Gnome, so that's great! > > I still don't understand why I don't see the option to switch in GDM, > but hey, at least I have a working setup now! > > Thanks a lot, > > Horst > > Gianluca Cecchi wrote: > > > On Sat, Feb 8, 2014 at 12:06 AM, Horst Severini wrote: > > > Hi all, > > > > > > we are working on upgrading our desktop cluster from RHEL5 to RHEL6, and we > > > think we have a workable solution that involves bringing up the new machines > > > with ROCKS 6.1 and the RHEL 6.5 ISO, and then doing some post-config via > > > scripts, but when we do that, we only have Gnome available at the login > > > screen, and we can't figure out how to make KDE available as well. 
> > > > > > We have all the KDE RPMs installed (that I know of), via > > > > > > yum groupinstall "KDE (K Desktop Environment)" > > > > > > and I have compared the list of installed RPMs with one RHEL6 test install > > > we did manually, and I can't figure out what's missing, since everything > > > that looks like it's even remotely related to KDE and desktop switching is > > > already installed, so I don't know why the ROCKS installed machines don't > > > show the desktop switcher box at the bottom of the login screen. > > > > > > I tried googling around, but haven't quite found the right page yet. > > > What are we missing? > > > > > > What worked for me on a CentOS 6.5 install where I had only Gnome and > > init level was 5 by default: > > > > sudo yum groupinstall "KDE Desktop" > > > > verify you have kdm executable (in practice the kdm rpm package) > > vi /etc/sysconfig/desktop (is not present by default) > > > > put into it the single line > > DISPLAYMANAGER=KDE > > > > $ sudo init 3 > > $ sudo init 5 > > > > and I get kde diplay manager. > > > > take into account that display manager in 6.5 is managed by upstart > > init script prefdm.conf in /etc/init directory > > > > and at the end it runs > > exec /etc/X11/prefdm -nodaemon > > > > and prefdm contains: > > > > if [ -f /etc/sysconfig/desktop ]; then > > . /etc/sysconfig/desktop > > if [ "$DISPLAYMANAGER" = GNOME ]; then > > preferred=/usr/sbin/gdm > > quit_arg="--retain-splash" > > elif [ "$DISPLAYMANAGER" = KDE ]; then > > preferred=/usr/bin/kdm > > .... > > > > Otherwise some lines below it contains: > > # Fallbacks, in order > > exec gdm "$@" >/dev/null 2>&1 > exec kdm "$@" >/dev/null 2>&1 > > > so you can comment out the one starting with gdm and it will start kdm > > if it finds it... > > > > If you like a display manager not managed by prefdm you have to create > > a prefdm.override in /etc/init and put there your display manager > > commands/options > > > > HIH, > > Gianluca > > > > _______________________________________________ > > rhelv6-list mailing list > > rhelv6-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rhelv6-list From tde3000 at gmail.com Wed Feb 12 14:14:06 2014 From: tde3000 at gmail.com (John Stein) Date: Wed, 12 Feb 2014 16:14:06 +0200 Subject: [rhelv6-list] Configuring a loopback adapter Message-ID: Hi all, I am trying to configure some rhel 6.2 servers to work with a WSD load balancer. What I have in each servers ifcfg-lo:1 file: Device=lo:1 Ipaddr=192.168.10.100 Netmask=255.255.255.255 Network=192.168.10.0 Broadcast=192.168.10.255 Onboot=yes I've also tried changing the arp_ignore parameters to 1 for the loopback adapter like I've found in the net. It's still not working. Did I miss anything? Is there a way I can check if it's my fault or the WSD fault? Thanks, John. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eng-partner-management at redhat.com Wed Feb 12 17:47:25 2014 From: eng-partner-management at redhat.com (Engineering Partner Management) Date: Wed, 12 Feb 2014 12:47:25 -0500 Subject: [rhelv6-list] Red Hat Developer Toolset 2.1 Beta Now Available for Testing Message-ID: <52FBB3AD.6010101@redhat.com> Red Hat is pleased to announce the Beta availability of Red Hat Developer Toolset 2.1. This latest version bridges development agility with production stability by delivering the latest stable versions of essential open development tools to enhance developer productivity and improve deployment times. 
Red Hat Developer Toolset 2.1 Beta introduces a new tool to its content set, Git 1.8.4, and updates key packages to help developers deliver new applications and functionality faster. Red Hat Developer Toolset enables C and C++ developers to compile once and deploy to multiple versions of Red Hat Enterprise Linux on physical, virtual, and cloud environments. Moreover, Red Hat Developer Toolset can be used to develop applications for deployment on Red Hat Enterprise Linux and on OpenShift, offering customers exceptional flexibility.

Red Hat Developer Toolset 2.1 Beta delivers the following capabilities:

* Users can compile on Red Hat Enterprise Linux 6 to run on Red Hat Enterprise Linux 6 and test on Red Hat Enterprise Linux 7 Beta. In addition, the Red Hat Developer Toolset retains functionality allowing users to compile on Red Hat Enterprise Linux 5 and deploy on Red Hat Enterprise Linux 5 or Red Hat Enterprise Linux 6.

* Git 1.8.4. Git 1.8.4 provides developers using Red Hat Enterprise Linux 6 with a new release of the powerful distributed version control system designed to handle projects of any size with speed and efficiency.

* General updates, inclusive of minor bug fixes and feature enhancements, to the following packages: GCC 4.8.2, Eclipse 4.3.1, Dyninst 8.1.1, and elfutils 0.157. Developers can code with confidence knowing that they're using stable, recently updated packages.

In addition to these enhancements, Red Hat Developer Toolset 2.1 Beta also continues to deliver software collections functionality, which enables the concurrent installation of multiple versions of the same software components on a system.

RED HAT DEVELOPER TOOLSET 2.1 BETA AVAILABILITY

Red Hat Developer Toolset 2.1 Beta is available now to customers with an active Red Hat Enterprise Linux developer-related subscription. This includes:

* Red Hat Enterprise Linux Developer Suite[1]
* Red Hat Enterprise Linux Developer Support Subscriptions[2]
* Red Hat Enterprise Linux Developer Workstation[3]
* Most Red Hat Enterprise Linux Academic Subscriptions
* NFR subscriptions for qualifying Partners

To learn more about Red Hat developer offerings, visit the Red Hat Enterprise Linux Developer Program page at https://access.redhat.com/products/Red_Hat_Enterprise_Linux/Developer/ .
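As a quick illustration of that software collections mechanism, the usual workflow on a subscribed Red Hat Enterprise Linux 6 system looks something like the sketch below (it assumes the collection is packaged under the name devtoolset-2, which may differ slightly for the 2.1 Beta):

  # install the toolchain meta package from the Developer Toolset repository
  yum install devtoolset-2-toolchain

  # run one command inside the collection's environment (newer GCC first in PATH)
  scl enable devtoolset-2 'gcc --version'

  # or open a shell with the collection enabled for a whole build session
  scl enable devtoolset-2 bash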
PARTICIPATING IN THE RED HAT DEVELOPER TOOLSET 2.1 BETA To install the Red Hat Developer Toolset 2.1 Beta, please visit: https://access.redhat.com/site/documentation/en-US/Red_Hat_Developer_Toolset/2-Beta/html/User_Guide/ ADDITIONAL RESOURCES To access additional documentation for Red Hat Developer Toolset, visit: * Red Hat Developer Toolset 2.1 Beta documentation: - Red Hat Developer Toolset 2.1 Beta Release notes: https://access.redhat.com/site/documentation/en-US/Red_Hat_Developer_Toolset/2-Beta/html/2.1_Release_Notes/ - Red Hat Developer Toolset 2.1 Beta User Guide: https://access.redhat.com/site/documentation/en-US/Red_Hat_Developer_Toolset/2-Beta/html/User_Guide/ - Red Hat Software Collections Guide: https://access.redhat.com/site/documentation/en-US/Red_Hat_Developer_Toolset/2-Beta/html/Software_Collections_Guide/ * Other Red Hat Enterprise Linux documentation: https://access.redhat.com/knowledge/docs/Red_Hat_Enterprise_Linux/ Sincerely, The Red Hat Enterprise Linux Team [1] - https://www.redhat.com/apps/store/developers/rhel_developer_suite.html [2] - https://www.redhat.com/apps/store/developers/rhel_developer_support_professional.html [3] - https://www.redhat.com/apps/store/developers/rhel_developer_workstation_professional.html From huruomu at gmail.com Thu Feb 13 07:08:27 2014 From: huruomu at gmail.com (Romu) Date: Thu, 13 Feb 2014 15:08:27 +0800 Subject: [rhelv6-list] RHEL7 NIC naming convention Message-ID: RHEL7 uses different names for NICs, in RHEL4/5/6 it's always eth0/1/2/3..., but in RHEL7 I've seen em1/2, ens3... What is the rule here? How can I know whether the NIC name will be em or ens? Thanks Romu -------------- next part -------------- An HTML attachment was scrubbed... URL: From b.j.smith at ieee.org Thu Feb 13 07:28:10 2014 From: b.j.smith at ieee.org (Bryan J Smith) Date: Thu, 13 Feb 2014 02:28:10 -0500 Subject: [rhelv6-list] RHEL7 NIC naming convention In-Reply-To: References: Message-ID: On Thu, Feb 13, 2014 at 2:08 AM, Romu wrote: > RHEL7 uses different names for NICs, > in RHEL4/5/6 it's always eth0/1/2/3..., This is incorrect. RHEL6.1+ introduced [1a] [1b] the biosdevname option which lets the firmware define how devices are named. It can be disabled by not installing the package "biosdevname" and/or by removing entries from, or the file, "/etc/udev/rules.d/70-persistent-net.rules" or any related file "/etc/udev/rules.d/*-biosdevname.rules". In-a-nutshell ... a lot of PC OEMs were asking for this, and Dell (among others) was behind it, to avoid enumeration or confusion between Ethernet-on-Motherboard (em#) and PCI device ports (p#p#). There are two of the most common naming conventions, but they are ultimately left up to the OEMs. -- bjs [1a] https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/6.1_Release_Notes/index.html#idp95564240 [1b] https://access.redhat.com/site/articles/53579 > but in RHEL7 I've seen em1/2, ens3... What is the rule here? How can I > know whether the NIC name will be em or ens? The binary included with the package reads from firmware/memory to set the names. There are several guides out there on finding the appropriate identification for a board. -- Bryan J Smith - UCF '97 Engr - http://www.linkedin.com/in/bjsmith ----------------------------------------------------------------- "In a way, Bortles is the personification of the UCF football program. Each has many of the elements that everyone claims to want, and yet they are nobody's first choice. 
Coming out of high school, Bortles had the size and the arm to play at a more prestigious program. UCF likewise has the market size and the talent base to play in a more prestigious conference than the American Athletic. But timing and circumstances conspired to put both where they are now." -- Andy Staples, CNN-Sports Illustrated From gianluca.cecchi at gmail.com Thu Feb 13 07:32:53 2014 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Thu, 13 Feb 2014 08:32:53 +0100 Subject: [rhelv6-list] RHEL7 NIC naming convention In-Reply-To: References: Message-ID: On Thu, Feb 13, 2014 at 8:08 AM, Romu wrote: > RHEL7 uses different names for NICs, in RHEL4/5/6 it's always eth0/1/2/3..., > but in RHEL7 I've seen em1/2, ens3... What is the rule here? How can I > know whether the NIC name will be em or ens? > It can be useful reading from here from which RHEL 7 inherits: http://docs.fedoraproject.org/en-US/Fedora/18/html/System_Administrators_Guide/appe-Consistent_Network_Device_Naming.html Gianluca From huruomu at gmail.com Thu Feb 13 08:53:04 2014 From: huruomu at gmail.com (Romu) Date: Thu, 13 Feb 2014 16:53:04 +0800 Subject: [rhelv6-list] Kickstart for RHEL7 full install In-Reply-To: <841FE3240F12C441AF08EABCD5B00AFB48686CEF@CHIXMBX04.bskyb.com> References: <841FE3240F12C441AF08EABCD5B00AFB48686CEF@CHIXMBX04.bskyb.com> Message-ID: Not exactly what I want but it solves the problem. Thanks! Romu 2014-02-11 17:43 GMT+08:00 Grierson, David : > You could try performing a full install manually with all of the > packages selected and then having a look at the generated anaconda-ks.cfg > kickstart file saved in /root. > > > > Dg. > ------------------------------ > > -- > *David Grierson* - SDLC > Tools Specialist > > Sky Broadcasting - Customer Business Systems - SDLC Tools > > Tel: +44 1506 325100 / Email: David.Grierson at bskyb.com / Chatter: CBS > SDLC Tools > > Watermark Building, Alba Campus, Livingston, EH54 7HH > > > > *From:* rhelv6-list-bounces at redhat.com [mailto: > rhelv6-list-bounces at redhat.com] *On Behalf Of *Romu > *Sent:* 11 February 2014 09:21 > *To:* rhelv6-list at redhat.com > *Subject:* [rhelv6-list] Kickstart for RHEL7 full install > > > > Sorry I didn't find any mailing list for RHEL7. > > > > Has anyone succeeded in RHEL7 kickstart installation with all packages > selected? I use this for RHEL6 x86_64: > > > > %packages > > * > > *.i686 > > - at Conflicts (Server) > > %end > > > > I tried the same for RHEL7, turns out RHEL7 doesn't recognize "- at Conflicts > (Server)", if I remove the line from kickstart, the installation program > will report many file conflicts and installation can't proceed. > > > > Any idea? > > > > > > Thanks > > Romu > > Information in this email including any attachments may be privileged, > confidential and is intended exclusively for the addressee. The views > expressed may not be official policy, but the personal views of the > originator. If you have received it in error, please notify the sender by > return e-mail and delete it from your system. You should not reproduce, > distribute, store, retransmit, use or disclose its contents to anyone. > Please note we reserve the right to monitor all e-mail communication > through our internal and external networks. SKY and the SKY marks are > trademarks of British Sky Broadcasting Group plc and Sky International AG > and are used under licence. British Sky Broadcasting Limited (Registration > No. 2906991), Sky-In-Home Service Limited (Registration No. 
2067075) and > Sky Subscribers Services Limited (Registration No. 2340150) are direct or > indirect subsidiaries of British Sky Broadcasting Group plc (Registration > No. 2247735). All of the companies mentioned in this paragraph are > incorporated in England and Wales and share the same registered office at > Grant Way, Isleworth, Middlesex TW7 5QD. > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From b.j.smith at ieee.org Thu Feb 13 09:37:25 2014 From: b.j.smith at ieee.org (Bryan J Smith) Date: Thu, 13 Feb 2014 04:37:25 -0500 Subject: [rhelv6-list] Kickstart for RHEL7 full install In-Reply-To: References: Message-ID: On Tue, Feb 11, 2014 at 4:20 AM, Romu wrote: > Sorry I didn't find any mailing list for RHEL7. The Customer Access Portal is being utilized for collaboration on the RHEL 7 Beta. If you don't have an account, you can create one (before adding entitlements), or create one as part of an evaluation request (which will include entitlements). [1] Much of the portal does not require a login, or even entitlements. Other portions of the Red Hat collaboration ecosystem, such as Bugzilla, continue to be available with a separate login. FYI, there was a discussion on the rhel6-beta-list several weeks ago, starting with one post trying to subscript [1a] to the last one made [1z] (by myself no less). In-a-nutshell, people can collaborate where ever they want -- even on this list. There was even a suggestion to create a general discussion list for all RHEL releases, all RHEL Betas, etc... But engineering, Specialty Based Routing (SBR), etc... is most efficient via Bugzilla, the Customer Access Portal, etc... and other avenues Red Hat SMEs frequent. I.e., it's getting to the point there's a lot with the RHEL platform ... I think up to 50 add-ons now? There is also the public Anaconda Development list too [4], where you could make a maintainer aware. > Has anyone succeeded in RHEL7 kickstart installation with all packages > selected? I use this for RHEL6 x86_64: > %packages > * > *.i686 > - at Conflicts (Server) > %end > I tried the same for RHEL7, turns out RHEL7 doesn't recognize "- at Conflicts > (Server)", if I remove the line from kickstart, the installation program > will report many file conflicts and installation can't proceed. > Any idea? It's still documented as a _valid_ option in the RHEL7 Beta Installation Guide, Section 16.5 [3], no less. Sounds like either a documentation bug, or an Anaconda bug. Off-to-Bugzilla ... my search came up empty [5]. I'd file either a documentation or RHEL7 component "Anaconda" bug. 
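In the meantime, one possible workaround sketch for RHEL 7 kickstarts (assuming the @^environment syntax supported by newer anaconda releases; the environment and group IDs below come from the RHEL 7 comps data and should be verified against the media in use) is to select an environment plus add-on groups rather than globbing every package:

  %packages
  # one environment instead of "*", which is what drags in the conflicting packages
  @^graphical-server-environment
  # extra groups layered on top of the environment
  @development
  %end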
-- bjs References: [1] https://access.redhat.com/site/products/Red_Hat_Enterprise_Linux/Get-Beta [2a] https://www.redhat.com/archives/rhelv6-beta-list/2013-December/msg00004.html [2z] https://www.redhat.com/archives/rhelv6-beta-list/2013-December/msg00004.html [3] https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/7-Beta/html-single/Installation_Guide/index.html#s1-kickstart2-packageselection [4] http://www.redhat.com/mailman/listinfo/anaconda-devel-list [5] https://bugzilla.redhat.com/buglist.cgi?bug_status=NEW&bug_status=ASSIGNED&bug_status=POST&bug_status=MODIFIED&bug_status=ON_DEV&bug_status=ON_QA&bug_status=VERIFIED&bug_status=RELEASE_PENDING&bug_status=CLOSED&classification=Red%20Hat&component=anaconda&component=anaconda-help&component=anaconda-images&component=anaconda-product&component=anaconda-yum-plugins&product=Red%20Hat%20Enterprise%20Linux%207&query_format=advanced&short_desc=Conflicts&short_desc_type=allwordssubstr -- Bryan J Smith - UCF '97 Engr - http://www.linkedin.com/in/bjsmith ----------------------------------------------------------------- "In a way, Bortles is the personification of the UCF football program. Each has many of the elements that everyone claims to want, and yet they are nobody's first choice. Coming out of high school, Bortles had the size and the arm to play at a more prestigious program. UCF likewise has the market size and the talent base to play in a more prestigious conference than the American Athletic. But timing and circumstances conspired to put both where they are now." -- Andy Staples, CNN-Sports Illustrated From john.haxby at gmail.com Thu Feb 13 21:51:57 2014 From: john.haxby at gmail.com (John Haxby) Date: Thu, 13 Feb 2014 21:51:57 +0000 Subject: [rhelv6-list] Configuring a loopback adapter In-Reply-To: References: Message-ID: On 12 February 2014 14:14, John Stein wrote: > I am trying to configure some rhel 6.2 servers to work with a WSD load > balancer. > > What I have in each servers ifcfg-lo:1 file: > > Device=lo:1 > Ipaddr=192.168.10.100 > Netmask=255.255.255.255 > Network=192.168.10.0 > Broadcast=192.168.10.255 > Onboot=yes > > I've also tried changing the arp_ignore parameters to 1 for the loopback > adapter like I've found in the net. > It's still not working. > Is that a copy-and-paste problem or is it really Device, Ipaddr, etc instead of DEVICE, IPADDR, etc? jch -------------- next part -------------- An HTML attachment was scrubbed... URL: From huruomu at gmail.com Fri Feb 14 01:56:05 2014 From: huruomu at gmail.com (Romu) Date: Fri, 14 Feb 2014 09:56:05 +0800 Subject: [rhelv6-list] Kickstart for RHEL7 full install In-Reply-To: References: Message-ID: 2014-02-13 17:37 GMT+08:00 Bryan J Smith : > On Tue, Feb 11, 2014 at 4:20 AM, Romu wrote: > > Sorry I didn't find any mailing list for RHEL7. > > The Customer Access Portal is being utilized for collaboration on the > RHEL 7 Beta. > > If you don't have an account, you can create one (before adding > entitlements), or create one as part of an evaluation request (which > will include entitlements). [1] Much of the portal does not require a > login, or even entitlements. Other portions of the Red Hat > collaboration ecosystem, such as Bugzilla, continue to be available > with a separate login. > > FYI, there was a discussion on the rhel6-beta-list several weeks ago, > starting with one post trying to subscript [1a] to the last one made > [1z] (by myself no less). In-a-nutshell, people can collaborate where > ever they want -- even on this list. 
There was even a suggestion to > create a general discussion list for all RHEL releases, all RHEL > Betas, etc... > > But engineering, Specialty Based Routing (SBR), etc... is most > efficient via Bugzilla, the Customer Access Portal, etc... and other > avenues Red Hat SMEs frequent. I.e., it's getting to the point > there's a lot with the RHEL platform ... I think up to 50 add-ons now? > There is also the public Anaconda Development list too [4], where you > could make a maintainer aware. > Thank you very much for the information. I'll try to use Customer Access Portal for RHEL7 related issues. > > Has anyone succeeded in RHEL7 kickstart installation with all packages > > selected? I use this for RHEL6 x86_64: > > %packages > > * > > *.i686 > > - at Conflicts (Server) > > %end > > I tried the same for RHEL7, turns out RHEL7 doesn't recognize > "- at Conflicts > > (Server)", if I remove the line from kickstart, the installation program > > will report many file conflicts and installation can't proceed. > > Any idea? > > It's still documented as a _valid_ option in the RHEL7 Beta > Installation Guide, Section 16.5 [3], no less. Sounds like either a > documentation bug, or an Anaconda bug. > > Off-to-Bugzilla ... my search came up empty [5]. > > I'd file either a documentation or RHEL7 component "Anaconda" bug. > Yes, it should be fixed either in documentation or in anaconda. Thanks Romu > -- bjs > > References: > > [1] > https://access.redhat.com/site/products/Red_Hat_Enterprise_Linux/Get-Beta > > [2a] > https://www.redhat.com/archives/rhelv6-beta-list/2013-December/msg00004.html > [2z] > https://www.redhat.com/archives/rhelv6-beta-list/2013-December/msg00004.html > > [3] > https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/7-Beta/html-single/Installation_Guide/index.html#s1-kickstart2-packageselection > > [4] http://www.redhat.com/mailman/listinfo/anaconda-devel-list > > [5] > https://bugzilla.redhat.com/buglist.cgi?bug_status=NEW&bug_status=ASSIGNED&bug_status=POST&bug_status=MODIFIED&bug_status=ON_DEV&bug_status=ON_QA&bug_status=VERIFIED&bug_status=RELEASE_PENDING&bug_status=CLOSED&classification=Red%20Hat&component=anaconda&component=anaconda-help&component=anaconda-images&component=anaconda-product&component=anaconda-yum-plugins&product=Red%20Hat%20Enterprise%20Linux%207&query_format=advanced&short_desc=Conflicts&short_desc_type=allwordssubstr > > > -- > Bryan J Smith - UCF '97 Engr - http://www.linkedin.com/in/bjsmith > ----------------------------------------------------------------- > "In a way, Bortles is the personification of the UCF football > program. Each has many of the elements that everyone claims to > want, and yet they are nobody's first choice. Coming out of high > school, Bortles had the size and the arm to play at a more > prestigious program. UCF likewise has the market size and the > talent base to play in a more prestigious conference than the > American Athletic. But timing and circumstances conspired to put > both where they are now." -- Andy Staples, CNN-Sports Illustrated > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tde3000 at gmail.com Fri Feb 14 09:21:33 2014 From: tde3000 at gmail.com (John Stein) Date: Fri, 14 Feb 2014 11:21:33 +0200 Subject: [rhelv6-list] Configuring a loopback adapter In-Reply-To: References: Message-ID: Copy and paste problem. Device is actually up with these settings, just can't connect to the WSD. Thanks for replying. john On Feb 13, 2014 11:55 PM, "John Haxby" wrote: > On 12 February 2014 14:14, John Stein wrote: > >> I am trying to configure some rhel 6.2 servers to work with a WSD load >> balancer. >> >> What I have in each servers ifcfg-lo:1 file: >> >> Device=lo:1 >> Ipaddr=192.168.10.100 >> Netmask=255.255.255.255 >> Network=192.168.10.0 >> Broadcast=192.168.10.255 >> Onboot=yes >> >> I've also tried changing the arp_ignore parameters to 1 for the loopback >> adapter like I've found in the net. >> It's still not working. >> > > Is that a copy-and-paste problem or is it really Device, Ipaddr, etc > instead of DEVICE, IPADDR, etc? > > jch > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthias at saou.eu Fri Feb 14 11:55:29 2014 From: matthias at saou.eu (Matthias Saou) Date: Fri, 14 Feb 2014 12:55:29 +0100 Subject: [rhelv6-list] RHEL7 NIC naming convention In-Reply-To: References: Message-ID: <20140214125529.265cdbe0@ultra.marmotte.ici> On Thu, 13 Feb 2014 08:32:53 +0100 Gianluca Cecchi wrote: > On Thu, Feb 13, 2014 at 8:08 AM, Romu wrote: > > RHEL7 uses different names for NICs, in RHEL4/5/6 it's always > > eth0/1/2/3..., but in RHEL7 I've seen em1/2, ens3... What is the > > rule here? How can I know whether the NIC name will be em or ens? > > > > It can be useful reading from here from which RHEL 7 inherits: > http://docs.fedoraproject.org/en-US/Fedora/18/html/System_Administrators_Guide/appe-Consistent_Network_Device_Naming.html ...or just reading about RHEL7 itself :-) https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/7-Beta/html/Networking_Guide/Consistent_Network_Device_Naming.html The "rules" are all there (check the next few pages). Matthias From matthias at saou.eu Fri Feb 14 14:43:43 2014 From: matthias at saou.eu (Matthias Saou) Date: Fri, 14 Feb 2014 15:43:43 +0100 Subject: [rhelv6-list] Configuring a loopback adapter In-Reply-To: References: Message-ID: <20140214154343.37af29a8@r2d2.marmotte.net> On Fri, 14 Feb 2014 11:21:33 +0200 John Stein wrote: > Copy and paste problem. Device is actually up with these settings, > just can't connect to the WSD. To debug, run tcpdump on the interface(s) where the packets are supposed to arrive and also leave. Then start playing with the arp_ignore settings you mentioned, *and* the arp_announce and rp_filter ones as needed. You really need to be careful with all of them. For a typical LVS-DR setup, here's what I have on some RHEL6 web servers where packets arrive on the private eth1 and are expected to go out through eth0 where the default gateway resides (and with the LVS IP addresses aliased on lo like you did) : sysctl { 'net.ipv4.conf.all.arp_announce': value => '2' } sysctl { 'net.ipv4.conf.all.arp_ignore': value => '1' } sysctl { 'net.ipv4.conf.eth1.rp_filter': value => '0' } HTH, Matthias > On Feb 13, 2014 11:55 PM, "John Haxby" wrote: > > > On 12 February 2014 14:14, John Stein wrote: > > > >> I am trying to configure some rhel 6.2 servers to work with a WSD > >> load balancer. 
> >> > >> What I have in each servers ifcfg-lo:1 file: > >> > >> Device=lo:1 > >> Ipaddr=192.168.10.100 > >> Netmask=255.255.255.255 > >> Network=192.168.10.0 > >> Broadcast=192.168.10.255 > >> Onboot=yes > >> > >> I've also tried changing the arp_ignore parameters to 1 for the > >> loopback adapter like I've found in the net. > >> It's still not working. > >> > > > > Is that a copy-and-paste problem or is it really Device, Ipaddr, etc > > instead of DEVICE, IPADDR, etc? -- Matthias Saou ?? ?? ?? ?? Web: http://matthias.saou.eu/ ?????????????? Mail/XMPP: matthias at saou.eu ???? ?????? ???? ?????????????????????? GPG: 4096R/E755CC63 ?? ?????????????? ?? 8D91 7E2E F048 9C9C 46AF ?? ?? ?? ?? 21A9 7A51 7B82 E755 CC63 ???? ???? From tom.cartwright at bbc.co.uk Wed Feb 19 18:32:21 2014 From: tom.cartwright at bbc.co.uk (Tom Cartwright) Date: Wed, 19 Feb 2014 18:32:21 +0000 Subject: [rhelv6-list] FW: Java versions in RHEL/CentOS Message-ID: <3C8AE2220F928C4081EC229F4836BEC51F4E42A0@BGB01XUD1011.national.core.bbc.co.uk> Hi All, This is a follow-up from a mail to the Centos lists: http://lists.centos.org/pipermail/centos/2014-February/140925.html Following the latest security updates from Oracle, the version of OpenJDK package is currently listed as: java-1.7.0-openjdk-1.7.0.51-2.4.4.1.el6_5.x86_64.rpm The Redhat security advisory lists these packages: https://rhn.redhat.com/errata/RHSA-2014-0026.html but it makes no reference to the build number, which it turns out is important. The build on the package in centos 6.5 is currently listed as b02: [........]$ java -version java version "1.7.0_51" OpenJDK Runtime Environment (rhel-2.4.4.1.el6_5-x86_64 u51-b02) OpenJDK 64-Bit Server VM (build 24.45-b08, mixed mode) However changes were being made in at least b10: https://bugs.openjdk.java.net/browse/JDK-8028111 I guess this raises three questions: 1. How is the build of the JDK selected for a security update in RHEL/CentOS? 2. Could the b number be made more clear in the release information given its importance? 3. Is it possible to JDK package be updated to the latest build number, given the current one has missing backports? Thanks, Tom ----------------------------- http://www.bbc.co.uk This e-mail (and any attachments) is confidential and may contain personal views which are not the views of the BBC unless specifically stated. If you have received it in error, please delete it from your system. Do not use, copy or disclose the information in any way nor act in reliance on it and notify the sender immediately. Please note that the BBC monitors e-mails sent or received. Further communication will signify your consent to this. ----------------------------- From linux at cmadams.net Mon Feb 24 15:01:49 2014 From: linux at cmadams.net (Chris Adams) Date: Mon, 24 Feb 2014 09:01:49 -0600 Subject: [rhelv6-list] Home-brew SAN for virtualization Message-ID: <20140224150149.GC7715@cmadams.net> I have taken over a set of Xen servers somebody else built, and am now rebuilding the CentOS-based storage (Dell MD3000 SAS storage shelf connected to a couple of CentOS servers), and could use some advice. The Xen servers are just "plain" Xen, with no clustering, and right now all the VM images are local to each server (a mix of file-based raw images and LVM logical volumes). We'd like to get the storage rebuilt to host the VM images on shared storage (initially to free up local space for only higher-I/O stuff like mail queues, and eventually to convert the whole thing to oVirt or some other clustered VM solution). 
The storage was previously set up running NFS to share out all the space, with some VMs running from raw file images over NFS. That seems somewhat inefficient to me, so I was considering setting up the rebuilt storage with iSCSI for the VM storage, but then how do I manage it? Do people create a new LUN for each VM? We have around 75 VMs right now. Also, I was also considering using LVM thin provisioning, but a quick test (under Fedora 20) seemed to show a high overhead for that; I/O throughput dropped by about 50% compared to a "normal" logical volume (created an LV+ext4, ran bonnie++, created an LV-pool+thin-LV+ext4, ran bonnie++ a couple of times). Is that expected? -- Chris Adams From herrold at owlriver.com Tue Feb 25 22:35:43 2014 From: herrold at owlriver.com (R P Herrold) Date: Tue, 25 Feb 2014 17:35:43 -0500 (EST) Subject: [rhelv6-list] Home-brew SAN for virtualization In-Reply-To: <20140224150149.GC7715@cmadams.net> References: <20140224150149.GC7715@cmadams.net> Message-ID: On Mon, 24 Feb 2014, Chris Adams wrote: > I have taken over a set of Xen servers somebody else built, and am now > rebuilding the CentOS-based storage (Dell MD3000 SAS storage shelf > connected to a couple of CentOS servers), and could use some advice. > The Xen servers are just "plain" Xen, with no clustering, and right now > all the VM images are local to each server ... snip ... > The storage was previously set up running NFS to share out all the > space, with some VMs running from raw file images over NFS. That seems > somewhat inefficient to me, so I was considering setting up the rebuilt > storage with iSCSI for the VM storage, but then how do I manage it? Do > people create a new LUN for each VM? We have around 75 VMs right now. We run a mixture of both local FS , and also NFS mediated VMs. Several hundred live VM machines at any point in time. Heavy use of NFS and LVM2 in the storage fabric. We went to a custom build of xfs support on the NFS servers to get the reliability we needed The 'rap' [but not backed up by publishable testing] is that iScsi has inadequate throughput UNLESS one is using Infiniband, or a fiber link, or possibly 10Ge links. Rough testing with those three options has indicated that kernels build from the Red Hat SRPMs need additional tuning to get adequate throughput to get usable performance, and trellised links remaining usable. Link dropout is a real 'bear' of an issue, under non-static networking / udev 'nailled up' configurations I am working with a HPC client on this in a project atm, to put 'real numbers' on that 'rap' assertion of the prior paragraph With best regards, -- Russ herrold From b.j.smith at ieee.org Tue Feb 25 22:58:30 2014 From: b.j.smith at ieee.org (Bryan J Smith) Date: Tue, 25 Feb 2014 17:58:30 -0500 Subject: [rhelv6-list] Home-brew SAN for virtualization In-Reply-To: References: <20140224150149.GC7715@cmadams.net> Message-ID: The oVirt managed KVM+libgfapi (Gluster API) combo is really the "killer app" here. You just setup your management node (oVirt) for the farm. Then adding computer and/or storage is just a matter of adding another KVM node (with Gluster Server/API and oVirt agents) to the farm. -- bjs -- Bryan J Smith - UCF '97 Engr - http://www.linkedin.com/in/bjsmith ----------------------------------------------------------------- "In a way, Bortles is the personification of the UCF football program. Each has many of the elements that everyone claims to want, and yet they are nobody's first choice. 
Coming out of high school, Bortles had the size and the arm to play at a more prestigious program. UCF likewise has the market size and the talent base to play in a more prestigious conference than the American Athletic. But timing and circumstances conspired to put both where they are now." -- Andy Staples, CNN-Sports Illustrated On Tue, Feb 25, 2014 at 5:35 PM, R P Herrold wrote: > On Mon, 24 Feb 2014, Chris Adams wrote: > >> I have taken over a set of Xen servers somebody else built, and am now >> rebuilding the CentOS-based storage (Dell MD3000 SAS storage shelf >> connected to a couple of CentOS servers), and could use some advice. > >> The Xen servers are just "plain" Xen, with no clustering, and right now >> all the VM images are local to each server > > ... snip ... > >> The storage was previously set up running NFS to share out all the >> space, with some VMs running from raw file images over NFS. That seems >> somewhat inefficient to me, so I was considering setting up the rebuilt >> storage with iSCSI for the VM storage, but then how do I manage it? Do >> people create a new LUN for each VM? We have around 75 VMs right now. > > We run a mixture of both local FS , and also NFS mediated VMs. > Several hundred live VM machines at any point in time. Heavy > use of NFS and LVM2 in the storage fabric. We went to a > custom build of xfs support on the NFS servers to get the > reliability we needed > > The 'rap' [but not backed up by publishable testing] is that > iScsi has inadequate throughput UNLESS one is using > Infiniband, or a fiber link, or possibly 10Ge links. Rough > testing with those three options has indicated that kernels > build from the Red Hat SRPMs need additional tuning to get > adequate throughput to get usable performance, and trellised > links remaining usable. Link dropout is a real 'bear' of an > issue, under non-static networking / udev 'nailled up' > configurations > > I am working with a HPC client on this in a project atm, to > put 'real numbers' on that 'rap' assertion of the prior > paragraph > > With best regards, > > -- Russ herrold > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list From b.j.smith at ieee.org Tue Feb 25 23:01:21 2014 From: b.j.smith at ieee.org (Bryan J Smith) Date: Tue, 25 Feb 2014 18:01:21 -0500 Subject: [rhelv6-list] Home-brew SAN for virtualization In-Reply-To: References: <20140224150149.GC7715@cmadams.net> Message-ID: My apologies, I just re-read the OP. Is that SAS device multi-targetable? I.e., same, shared storage between multiple nodes? If so, then it may be directly usable by multiple KVM nodes simultaneously (managed by oVirt), no Gluster needed. -- Bryan J Smith - UCF '97 Engr - http://www.linkedin.com/in/bjsmith ----------------------------------------------------------------- "In a way, Bortles is the personification of the UCF football program. Each has many of the elements that everyone claims to want, and yet they are nobody's first choice. Coming out of high school, Bortles had the size and the arm to play at a more prestigious program. UCF likewise has the market size and the talent base to play in a more prestigious conference than the American Athletic. But timing and circumstances conspired to put both where they are now." -- Andy Staples, CNN-Sports Illustrated On Tue, Feb 25, 2014 at 5:58 PM, Bryan J Smith wrote: > The oVirt managed KVM+libgfapi (Gluster API) combo is really the > "killer app" here. 
> > You just setup your management node (oVirt) for the farm. > > Then adding computer and/or storage is just a matter of adding another > KVM node (with Gluster Server/API and oVirt agents) to the farm. > > -- bjs > > -- > Bryan J Smith - UCF '97 Engr - http://www.linkedin.com/in/bjsmith > ----------------------------------------------------------------- > "In a way, Bortles is the personification of the UCF football > program. Each has many of the elements that everyone claims to > want, and yet they are nobody's first choice. Coming out of high > school, Bortles had the size and the arm to play at a more > prestigious program. UCF likewise has the market size and the > talent base to play in a more prestigious conference than the > American Athletic. But timing and circumstances conspired to put > both where they are now." -- Andy Staples, CNN-Sports Illustrated > > > On Tue, Feb 25, 2014 at 5:35 PM, R P Herrold wrote: >> On Mon, 24 Feb 2014, Chris Adams wrote: >> >>> I have taken over a set of Xen servers somebody else built, and am now >>> rebuilding the CentOS-based storage (Dell MD3000 SAS storage shelf >>> connected to a couple of CentOS servers), and could use some advice. >> >>> The Xen servers are just "plain" Xen, with no clustering, and right now >>> all the VM images are local to each server >> >> ... snip ... >> >>> The storage was previously set up running NFS to share out all the >>> space, with some VMs running from raw file images over NFS. That seems >>> somewhat inefficient to me, so I was considering setting up the rebuilt >>> storage with iSCSI for the VM storage, but then how do I manage it? Do >>> people create a new LUN for each VM? We have around 75 VMs right now. >> >> We run a mixture of both local FS , and also NFS mediated VMs. >> Several hundred live VM machines at any point in time. Heavy >> use of NFS and LVM2 in the storage fabric. We went to a >> custom build of xfs support on the NFS servers to get the >> reliability we needed >> >> The 'rap' [but not backed up by publishable testing] is that >> iScsi has inadequate throughput UNLESS one is using >> Infiniband, or a fiber link, or possibly 10Ge links. Rough >> testing with those three options has indicated that kernels >> build from the Red Hat SRPMs need additional tuning to get >> adequate throughput to get usable performance, and trellised >> links remaining usable. Link dropout is a real 'bear' of an >> issue, under non-static networking / udev 'nailled up' >> configurations >> >> I am working with a HPC client on this in a project atm, to >> put 'real numbers' on that 'rap' assertion of the prior >> paragraph >> >> With best regards, >> >> -- Russ herrold >> >> _______________________________________________ >> rhelv6-list mailing list >> rhelv6-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhelv6-list From linux at cmadams.net Tue Feb 25 23:41:33 2014 From: linux at cmadams.net (Chris Adams) Date: Tue, 25 Feb 2014 17:41:33 -0600 Subject: [rhelv6-list] Home-brew SAN for virtualization In-Reply-To: References: <20140224150149.GC7715@cmadams.net> Message-ID: <20140225234133.GA16755@cmadams.net> Once upon a time, R P Herrold said: > The 'rap' [but not backed up by publishable testing] is that > iScsi has inadequate throughput UNLESS one is using > Infiniband, or a fiber link, or possibly 10Ge links. Hmm, interesting to hear. 
It would seem that iSCSI would be higher throughput than NFS, but maybe the many extra years of people pounding on Linux NFS (compared to the relatively yound SCSI target code) has made it better. -- Chris Adams From linux at cmadams.net Tue Feb 25 23:42:36 2014 From: linux at cmadams.net (Chris Adams) Date: Tue, 25 Feb 2014 17:42:36 -0600 Subject: [rhelv6-list] Home-brew SAN for virtualization In-Reply-To: References: <20140224150149.GC7715@cmadams.net> Message-ID: <20140225234236.GB16600@cmadams.net> Once upon a time, Bryan J Smith said: > Is that SAS device multi-targetable? I.e., same, shared storage > between multiple nodes? If so, then it may be directly usable by > multiple KVM nodes simultaneously (managed by oVirt), no Gluster > needed. It has two controllers with two SAS ports each; it is set up where each storage server has a SAS connection to each MD3000 controller (multipath for redundancy). While the long-term goal is oVirt (or maybe RHEV, but the budget is tight for this setup), and possibly Gluster, unfortunately that isn't an option at the moment. I have the stack of Xen servers with full hard drives and no centralized management, two storage servers, and the SAS shelf, and I need more disk ASAP (might have a couple of weeks to get it running but that's about it). Now, getting something up and running to handle today's needs, with a path to oVirt/Gluster later, would be ideal. I haven't actually used Gluster myself though (just read some about it in the past). In this kind of setup, what would the storage look like locally (on the directly-attached servers)? Would I be able to create a regular LVM setup, with an FS (ext4/xfs) to share out some space via NFS today, and then later use the rest of the space for Gluster? -- Chris Adams From b.j.smith at ieee.org Wed Feb 26 03:57:28 2014 From: b.j.smith at ieee.org (Bryan J Smith) Date: Tue, 25 Feb 2014 22:57:28 -0500 Subject: [rhelv6-list] Home-brew SAN for virtualization In-Reply-To: <20140225234236.GB16600@cmadams.net> References: <20140224150149.GC7715@cmadams.net> <20140225234236.GB16600@cmadams.net> Message-ID: Chris Adams wrote: > It has two controllers with two SAS ports each; it is set up where each > storage server has a SAS connection to each MD3000 controller (multipath > for redundancy). So it's possibly for the two servers to share the same storage, correct? If so, you don't need Clustering. The oVirt stack handles it for you. LVs are enabled, disabled, snapshot, etc... for VMs, and only enabled on one host -- i.e., where it is running -- at a time. That's how it works, and it works very well -- as long, of course -- you can directly attached every "compute" (HyperVisor) node to it. For those without hardware-based shared storage, Gluster is ideal as well. Gluster does handle file-based locking for true concurrency. But in the case of QEMU-KVM leveraging the libgfapi (Gluster API) for direct block access by the VM, all managed by the oVirt stack's POSIX file system module, locking is really not even a need. Let alone it's cake to manage now that it's been integrated into oVirt's GUI and agents (e.g., VDSM et al.). Fully replicated volumes offer the best, local read-heavy performance, although distributed-replicated also works well, depending on your distribution and usage. It's much more reliable than using a lot of other software-based storage solutions, especially those without something like the QEMU-KVM layer. I've seen a lot of people hack various things, just because they refuse to try Gluster ... 
when it's not just designed _exactly_ for this scenario, but Red Hat has always had this in mind for Gluster to solve a lot of the software-based storage issues over the years. > While the long-term goal is oVirt (or maybe RHEV, but the budget is > tight for this setup), The oVirt stack is the upstream that contains the GUI manager for both Red Hat Enterprise Virtualization Manager (RHEV-M) Red Hat Storage Console (RHS-C). The oVirt agents on each node merely talk to the stack. So there is _nothing_ stopping you from doing this using Upstream code. Although commercial RHEV has the nice option of pushing down pre-built images for the "HyperVisor". We'll see if a downstream project does similar in the future. E.g., what comes of the Red Hat-CentOS partnership will be interesting. I.e., the CentOS team had not been building most of the RHEL add-ons like RHEV, RHSS, et al. in the past. But one doesn't have to use pre-built images, and can manage the components on any RHEL platform, with the oVirt agents configured for the manager. In fact, debugging can sometimes be better if you have a full platform underneath. > and possibly Gluster, Software-defined storage is really for when you have no hardware multi-targeting. I'm biased towards Gluster, but in the oVirt stack, it becomes a no-brainer with heavy Red Hat focus end-to-end -- management to platform to agent. One _can_ run software-defined storage atop of hardware multi-targeting, but it's almost redundant in many ways. You might as well just use the hardware directly, especially with a solution like the oVirt stack. > unfortunately that isn't an option at the moment. I'm just pointing it out. I see people building "houses of cards" with NFS exports, various iSCSI targets, etc... all over the place, along with other, software-defined storage solutions, let alone not using their hardware effectively, when it comes to VM farms. And it's unmanaged, when it doesn't need to be, especially with something like oVirt out there. The oVirt stack itself, beyond just the libVirt and other foundations, was designed to manage not just KVM, but Xen and ESXi from inception. Unfortunately, they became stagnant without support from other vendors for a variety of reasons. I.e., an open source framework with full GUI stack could be seen as a "competitor" to existing, commercial frameworks and GUIs. So oVirt today is heavily KVM-only, but not by design. > I have the stack of Xen servers with full hard drives And that's exactly what I was pointing out, for "future consideration." ;) Storage with full hard drives is an _ideal_ application for directly presented, QEMU-level block storage to VMs. E.g., libgfapi (Gluster API) support on your end "compute" (HyperVisor) nodes so the farm can use _any_ Gluster volumes _anywhere_ in your Global Namespace. I.e., _all_ of your nodes with _any_ storage. ;) > and no centralized management, In this day'n age, a lot of sysadmins almost refuse to work without such. Beyond just libvirt and other foundations, oVirt was Red Hat's project to provide a true, cross-vendor, open source framework and GUI for KVM, Xen and ESXi. It's succeeded brilliantly, and is even used for Gluster management now, even if the other vendors -- at least the HyperVisor ones -- have not supported it. But that aside, you do _not_ need to use oVirt to get the underlying platform "capability." The oVirt agents for HyperVisor and Storage (Gluster) management are separate from the "capability." 
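To make the storage side concrete, a minimal sketch of a two-node replicated Gluster volume used for VM images might look like this (hostnames and brick paths are placeholders; the gluster:// step assumes a qemu build with libgfapi support, which the existing plain-Xen hosts will not have):

  # on stor1: form the trusted pool and create a 2-way replicated volume
  gluster peer probe stor2.example.com
  gluster volume create vmstore replica 2 \
      stor1.example.com:/bricks/vmstore stor2.example.com:/bricks/vmstore
  gluster volume start vmstore

  # on a KVM host: mount it as a normal POSIX filesystem ...
  mount -t glusterfs stor1.example.com:/vmstore /var/lib/libvirt/images

  # ... or, with libgfapi-enabled qemu, create and address images directly
  qemu-img create -f qcow2 gluster://stor1.example.com/vmstore/guest01.qcow2 20G

The brick paths would normally sit on dedicated XFS-formatted logical volumes rather than on the root filesystem.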
> two storage servers, While "storage" nodes can certainly be segmented from "compute" nodes, and that's even the requirement in the commercial Gluster flavor right now (Red Hat Storage Server 2.x -- it's more of a channel-distribution difference than anything), there's nothing stopping anyone in the upstream from using both, together. It's one thing if you can present direct storage to every HyperVisor. In that solution, you don't have to re-export anything. Every single node has direct access to the shared storage blocks. It's an ideal situation for a VM farm. Take advantage of that direct, hardware path. But once you find yourself in the segmented "storage" v. "compute" solution, software-defined storage starts looking better and better. Your VM instances write every single block to every Replica Brick in the Volume across your entire Storage Pool. Given that VMs are usually heavy read, lighter write -- or I/O quickly becomes an issue for the farm in general, regardless of solution -- it's an ideal solution. Again ... Any time you need any more "compute" _or_ "storage," you just "add another node" and get _both_. ;) > and the SAS shelf, and I need more disk ASAP (might have a couple of > weeks to get it running but that's about it). No reason you couldn't do this tomorrow with Gluster. It's designed to solve your problem, almost specifically. You're going to run into the case where you're going to be out of shelf-space, and looking at a lot of costly procurement beyond JBOD in your "compute" (HyperVisor) nodes. Once you go Gluster in applications like this, you never want to go back to NFS or other solutions. Even NFS, iSCSI, etc... exported "storage" nodes lose their benefits at some point. Now I've never tried Xen with the direct presented, QEMU-level block storage to VMs via the libgfapi support. There may be other details at work, although ultimately, it is QEMU in control of the block storage. I'd Google to see if anyone else is doing it. > Now, getting something up and running to handle today's needs, with a > path to oVirt/Gluster later, would be ideal. I haven't actually used > Gluster myself though (just read some about it in the past). Whether it's Gluster or something else, if "cost" is a consideration, you _will_ be looking at software-defined storage at some point. In the 100% open source VM farm world ... there's really only one, end-to-end option that is heavily developed and stablized to Red Hat's standards. And if you're using a Red Hat, or downstream, distribution, that basically means -- "designed for it with 5-10+ years in mind" v. "have to hack other things in, and might not be sustainable." ;) So that brings us back to RHEL KVM + Gluster libgfapi, whether you use oVirt management, agents, etc... or not with it. > In this kind of setup, what would the storage look like locally (on the > directly-attached servers)? Doesn't matter. You can use whatever you want. You can carve up hard drives across any nodes into Bricks and assign them in Replicated or Distributed-Replicated Volumes in the same, Trusted Storage Pool. That's the power of Gluster's Global Namespace. Doesn't matter if Bricks are actually on a system -- let alone whether a system has any storage at all! (could be a USB key) -- it looks like every node in the Trusted Storage Pool has _all_ of the volumes, transparently. Now if you do more careful planning, you can ensure select oVirt Compute Clusters have nodes with local Bricks that are in a Replicated Volume across all nodes. 
I.e., when I/O performance is critical, make a Compute Cluster that contains nodes with local Bricks where every brick is a Replica Brick in a Replicated Volume (or a local Distribute and Replica brick on every node in a Distribute-Replicated Volume), so every node has a copy of every VM. But that's only when performance is critical, and it will allow you to best a lot of iSCSI solutions in performance (let alone NFS). Remember, you could have more than one Compute Cluster -- one of "adhoc" nodes and one for "performance." The VMs that need to be high performing are on the latter. Various, lower priority VMs can be on the former and take advantage of every piece of hardware -- both compute and storage -- you have, with local storage of varying sizes, or not at all. > Would I be able to create a regular LVM setup, Yes. Absolutely. You can carve your storage as you see fit. > with an FS (ext4/xfs) to share out some space via NFS today, > and then later use the rest of the space for Gluster? Yes. Absolutely. You can carve your storage as you see fit. You just format one or more LVs with XFS (512B inode size highly recommended), and it is a Brick ready to be used in a new or existing Volume. Once a Volume is created+started, if not already existing, the storage is available to use. Although in the case of existing, and expanded, a rebalance is highly recommend if Distribute bricks are involved. I.e., Distributed or Distributed-Replicated Volumes, but not pure Replicated Volumes. Once you start using Gluster, it will really surprise you. It is designed to be direct access, no meta-data server, no downtime when you expand and other things. The caveats to Gluster, which are not negatives, but by design -- always up, always fast, always direct -- so there is no meta-data server bottleneck, there is no downtime or "pseudo up, but really degraded" modes, is that it uses a hash algorithm. E.g., if you expand a Distributed or Distributed-Replicated Volume, after adding another Distribute Brick (for more net storage), you need to Rebalance the Bricks. GlusterFS is its own file system. It's file-based storage its its advantage, especially when it comes to locking coherency, especially multi-protocol, multi-node. That's something not easily done with even Red Hat Cluster Suite and hardware-based shared storage. Although if you are using a GlusterFS volume as a QEMU-libgfapi block storage for a VM farm, I highly recommend you do not access that Volume from other nodes with other mounts (Native, NFS, other API calls, etc...). And this is really a case where you want to be using the oVirt stack, its agents (e.g., VDSM), etc... to manage that volume. -- bjs P.S. It's a bit dated, but still covers most of the specifics of the newer capability, and its resulting performance. - http://www.gluster.org/community/documentation/images/9/9d/QEMU_GlusterFS.pdf -- Bryan J Smith - UCF '97 Engr - http://www.linkedin.com/in/bjsmith ----------------------------------------------------------------- "In a way, Bortles is the personification of the UCF football program. Each has many of the elements that everyone claims to want, and yet they are nobody's first choice. Coming out of high school, Bortles had the size and the arm to play at a more prestigious program. UCF likewise has the market size and the talent base to play in a more prestigious conference than the American Athletic. But timing and circumstances conspired to put both where they are now." 
-- Andy Staples, CNN-Sports Illustrated From b.j.smith at ieee.org Wed Feb 26 04:10:39 2014 From: b.j.smith at ieee.org (Bryan J Smith) Date: Tue, 25 Feb 2014 23:10:39 -0500 Subject: [rhelv6-list] Home-brew SAN for virtualization In-Reply-To: <20140225234133.GA16755@cmadams.net> References: <20140224150149.GC7715@cmadams.net> <20140225234133.GA16755@cmadams.net> Message-ID: Chris Adams wrote: > Hmm, interesting to hear. It would seem that iSCSI would be higher > throughput than NFS, but maybe the many extra years of people pounding > on Linux NFS (compared to the relatively yound SCSI target code) has > made it better. The two are completely _non-comparable_. iSCSI, like FC, is a block service, not directly usable by an OS. NFS is a file service, with an optional lock manager, usable by an OS. "Raw" block services are _not_ directly usable as a file system, and have _no_ coherency (locking) capabilities from an OS usable standpoint. I.e., one must put another, often end-device, OS set of layers atop. At a minimum, this is to carve it out with volume management (e.g., LVM). And then, usually one of the follow to make it "usable" ... - File system (Ext4, XFS, etc...) + File service (NFS) for export - Cluster file system (GFS2) + Cluster Services (related) for mount - Etc... However ... since VMs _prefer_ blocks, and most VMs only execute on one node at a time, LVM Volume Groups can be created atop of "raw" block storage. The key is to ensure a "farm" does not have more than one HyperVisor node make any one Logical Volume active at a time. That's oVirt and agents (e.g., VDSM) right there. ;) GlusterFS is different than both approaches. GlusterFS itself is a shared file system service with many protocols built-in -- not just Native, but concurrent NFSv3 (pNFS 4.1 in the future), an API, etc... The API is how Samba, QEMU-libgfapi, etc... work to directly access the At its base form, Gluster is a managed, local file system -- e.g., XFS -- that stores files directly, but presents them either "replicated" or "distributed" across bricks. But instead of using the local file system locking and OS services with their locking, it has its own, powerful locking xlator (translator), and support xlators to emulate everything else. E.g., NLM4 for NFS3 POSIX locking. API exposes the POSIX locking as well. At the heart of Gluster is actually a single service ... glusterfsd, for which one is spawned for every brick created. It is where the xlators not only lie, but talk to each other -- all Bricks in a Volume -- to ensure locking coherency. It actually works extremely well, especially for load-balanced NFSv3 (from multiple servers) of a Gluster Volume, in addition to ensuring coherency with native mounts as well. The big difference between NFS and Native is NFS uses an export server, whereas Native is always direct to where the Brick is. In both cases, you don't even have to know which server(s) physically have the brick(s) in the Volume. -- bjs From mirko.vukovic at gmail.com Fri Feb 28 18:04:35 2014 From: mirko.vukovic at gmail.com (Mirko Vukovic) Date: Fri, 28 Feb 2014 13:04:35 -0500 Subject: [rhelv6-list] cannot update glibc Message-ID: yum update glibc gives me an error (see trace in attached file) My /etc/yum.repos.d: -rw-r--r--. 1 root root 2150 Feb 9 18:27 elrepo.repo -rw-r--r--. 1 root root 957 Nov 4 2012 epel.repo -rw-r--r--. 1 root root 1056 Nov 4 2012 epel-testing.repo -rw-r--r--. 1 root root 78 Aug 8 2012 redhat.repo -rw-r--r--. 
1 root root 529 Nov 4 09:03 rhel-source.repo
I disabled the repos (by setting enabled to 0) but that did not change anything.

Thanks,
Mirko
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: yum-update-glibc.log
Type: text/x-log
Size: 4810 bytes
Desc: not available
URL: 

From hugh-brown at uiowa.edu Fri Feb 28 19:30:26 2014
From: hugh-brown at uiowa.edu (Brown, Hugh M)
Date: Fri, 28 Feb 2014 19:30:26 +0000
Subject: [rhelv6-list] cannot update glibc
In-Reply-To: 
References: 
Message-ID: <4CBFC57AEB90DE4A809D02F51B6B2212022FB4DD@itsnt426.iowa.uiowa.edu>

From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Mirko Vukovic
Sent: Friday, February 28, 2014 12:05 PM
To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list
Subject: [rhelv6-list] cannot update glibc

yum update glibc gives me an error (see trace in attached file)
My /etc/yum.repos.d:
-rw-r--r--. 1 root root 2150 Feb 9 18:27 elrepo.repo
-rw-r--r--. 1 root root 957 Nov 4 2012 epel.repo
-rw-r--r--. 1 root root 1056 Nov 4 2012 epel-testing.repo
-rw-r--r--. 1 root root 78 Aug 8 2012 redhat.repo
-rw-r--r--. 1 root root 529 Nov 4 09:03 rhel-source.repo
I disabled the repos (by setting enabled to 0) but that did not change anything.

Thanks,
Mirko

At first glance, the system probably doesn't have the workstation-optional-6 channel enabled to allow the glibc-static package to update. If it does, then the packages aren't syncing appropriately to your satellite.

Hugh

From mirko.vukovic at gmail.com Fri Feb 28 20:31:04 2014
From: mirko.vukovic at gmail.com (Mirko Vukovic)
Date: Fri, 28 Feb 2014 15:31:04 -0500
Subject: [rhelv6-list] cannot update glibc
In-Reply-To: <4CBFC57AEB90DE4A809D02F51B6B2212022FB4DD@itsnt426.iowa.uiowa.edu>
References: <4CBFC57AEB90DE4A809D02F51B6B2212022FB4DD@itsnt426.iowa.uiowa.edu>
Message-ID: 

On Fri, Feb 28, 2014 at 2:30 PM, Brown, Hugh M wrote:
> From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Mirko Vukovic
> Sent: Friday, February 28, 2014 12:05 PM
> To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list
> Subject: [rhelv6-list] cannot update glibc
>
> yum update glibc gives me an error (see trace in attached file)
> My /etc/yum.repos.d:
> -rw-r--r--. 1 root root 2150 Feb 9 18:27 elrepo.repo
> -rw-r--r--. 1 root root 957 Nov 4 2012 epel.repo
> -rw-r--r--. 1 root root 1056 Nov 4 2012 epel-testing.repo
> -rw-r--r--. 1 root root 78 Aug 8 2012 redhat.repo
> -rw-r--r--. 1 root root 529 Nov 4 09:03 rhel-source.repo
> I disabled the repos (by setting enabled to 0) but that did not change anything.
>
> Thanks,
> Mirko
>
> At first glance, the system probably doesn't have the workstation-optional-6 channel enabled to allow the glibc-static package to update. If it does, then the packages aren't syncing appropriately to your satellite.
>
> Hugh
>
> _______________________________________________
> rhelv6-list mailing list
> rhelv6-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhelv6-list
>

Thanks Hugh, adding this channel did the trick.

Mirko
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
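For anyone landing on this thread with the same glibc/glibc-static dependency error, the channel Hugh mentions is typically added with something like the following (a sketch; the exact channel label or repo ID depends on the product variant and on whether the system is registered to RHN Classic/Satellite or to subscription-manager):

  # RHN Classic / Satellite registered system
  rhn-channel --add --channel=rhel-x86_64-workstation-optional-6

  # certificate-based subscription-manager registration
  subscription-manager repos --enable=rhel-6-workstation-optional-rpms

  # then retry the update that pulled in glibc-static
  yum update glibc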