From linux at alteeve.com  Tue Jan  3 15:09:05 2012
From: linux at alteeve.com (Digimer)
Date: Tue, 03 Jan 2012 10:09:05 -0500
Subject: [rhelv6-list] New Tutorial - RHCS + DRBD + KVM; 2-Node HA on EL6
Message-ID: <4F031A11.606@alteeve.com>

Hi all,

I'm happy to announce a new tutorial!

https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial

This tutorial walks a user through the entire process of building a
2-node cluster for making KVM virtual machines highly available. It uses
Red Hat Cluster services v3 and DRBD 8.3.12. It is written so that you
can use an entirely free or a fully Red Hat supported environment.

Highlights;
* Full network and power redundancy; no single points of failure.
* All off-the-shelf hardware; storage via DRBD.
* Starts with the base OS install; no clustering experience required.
* All software components explained.
* All testing steps covered.
* The configuration is used in production environments!

This tutorial is totally free (no ads, no registration) and released
under the Creative Commons 3.0 Share-Alike Non-Commercial license.
Feedback is always appreciated!

-- 
Digimer
E-Mail:              digimer at alteeve.com
Freenode handle:     digimer
Papers and Projects: http://alteeve.com
Node Assassin:       http://nodeassassin.org
"omg my singularity battery is dead again. stupid hawking radiation." - epitron

From imusayev at webmd.net  Wed Jan  4 13:29:00 2012
From: imusayev at webmd.net (Musayev, Ilya)
Date: Wed, 4 Jan 2012 08:29:00 -0500
Subject: [rhelv6-list] New Tutorial - RHCS + DRBD + KVM; 2-Node HA on EL6
In-Reply-To: <4F031A11.606@alteeve.com>
References: <4F031A11.606@alteeve.com>
Message-ID:

Digimer,

Though I've been using VMware for almost a decade, I've been somewhat
neglecting KVM. I will certainly give your tutorial a try and comment as
needed. I should have some time to try it in two weeks or so.

Thanks for dedicating your time and sharing it with the world.

Regards,
Ilya

From linux at alteeve.com  Wed Jan  4 13:37:23 2012
From: linux at alteeve.com (Digimer)
Date: Wed, 04 Jan 2012 08:37:23 -0500
Subject: [rhelv6-list] New Tutorial - RHCS + DRBD + KVM; 2-Node HA on EL6
In-Reply-To:
References: <4F031A11.606@alteeve.com>
Message-ID: <4F045613.7080108@alteeve.com>

On 01/04/2012 08:29 AM, Musayev, Ilya wrote:
> Digimer,
>
> Though I've been using VMware for almost a decade, I've been somewhat
> neglecting KVM. I will certainly give your tutorial a try and comment as
> needed. I should have some time to try it in two weeks or so.
>
> Thanks for dedicating your time and sharing it with the world.
>
> Regards,
> Ilya

Awesome! Please let me know how you find it compares. :)

-- 
Digimer
E-Mail:              digimer at alteeve.com
Freenode handle:     digimer
Papers and Projects: http://alteeve.com
Node Assassin:       http://nodeassassin.org
"omg my singularity battery is dead again. stupid hawking radiation." - epitron

From imusayev at webmd.net  Wed Jan  4 14:15:00 2012
From: imusayev at webmd.net (Musayev, Ilya)
Date: Wed, 4 Jan 2012 09:15:00 -0500
Subject: [rhelv6-list] New Tutorial - RHCS + DRBD + KVM; 2-Node HA on EL6
In-Reply-To: <4F045613.7080108@alteeve.com>
References: <4F031A11.606@alteeve.com> <4F045613.7080108@alteeve.com>
Message-ID:

One other question: can I try this out in an existing virtual
infrastructure? I'm a bit shorthanded on spare physical hardware at the
moment, but have plenty of resources in VMware vSphere clusters.

I'm not expecting superb performance, only a proof of concept. Any
thoughts?

From linux at alteeve.com  Wed Jan  4 14:44:41 2012
From: linux at alteeve.com (Digimer)
Date: Wed, 04 Jan 2012 09:44:41 -0500
Subject: [rhelv6-list] New Tutorial - RHCS + DRBD + KVM; 2-Node HA on EL6
In-Reply-To:
References: <4F031A11.606@alteeve.com> <4F045613.7080108@alteeve.com>
Message-ID: <4F0465D9.9070608@alteeve.com>

On 01/04/2012 09:15 AM, Musayev, Ilya wrote:
> One other question: can I try this out in an existing virtual
> infrastructure? I'm a bit shorthanded on spare physical hardware at the
> moment, but have plenty of resources in VMware vSphere clusters.
>
> I'm not expecting superb performance, only a proof of concept. Any
> thoughts?

No, I don't believe you can - at least, not the KVM portion. You can
certainly do the rest of the clustering part, but running KVM inside
VMware (or any other VM) doesn't work, from my limited testing. Though to
be fair, my VM-in-VM test was KVM -> VMware, so I don't want to sound like
the expert.
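A quick sanity check from inside a guest is to look at the CPU flags the
hypervisor exposes; this is a generic check, not something from the
tutorial itself:

# count the Intel VT-x (vmx) / AMD-V (svm) flags visible to the guest
egrep -c '(vmx|svm)' /proc/cpuinfo
# 0 means the hypervisor is not passing the virtualization extensions
# through, so KVM guests won't run there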
If it helps at all: I included the hardware I used to write the tutorial
specifically to provide a fairly inexpensive test bed for people who don't
have servers lying around. The parts should be less than $1k (Canadian)
per node. You don't need a layer-2 switch for learning/testing, so two
dumb 8-port switches will suffice. If you use the mainboards I mention,
then you will have IPMI for fencing, so, again for testing, you can forgo
the PDU.

hth :)

-- 
Digimer
E-Mail:              digimer at alteeve.com
Freenode handle:     digimer
Papers and Projects: http://alteeve.com
Node Assassin:       http://nodeassassin.org
"omg my singularity battery is dead again. stupid hawking radiation." - epitron

From rprice at redhat.com  Wed Jan  4 15:20:15 2012
From: rprice at redhat.com (Robin Price II)
Date: Wed, 04 Jan 2012 10:20:15 -0500
Subject: [rhelv6-list] New Tutorial - RHCS + DRBD + KVM; 2-Node HA on EL6
In-Reply-To: <4F031A11.606@alteeve.com>
References: <4F031A11.606@alteeve.com>
Message-ID: <4F046E2F.8030809@redhat.com>

Digimer,

Wow. You put a lot of time into this. Well done!

~rp

On 01/03/2012 10:09 AM, Digimer wrote:
> Hi all,
>
> I'm happy to announce a new tutorial!
>
> https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial
>
> This tutorial walks a user through the entire process of building a
> 2-node cluster for making KVM virtual machines highly available. It uses
> Red Hat Cluster services v3 and DRBD 8.3.12. It is written so that you
> can use an entirely free or a fully Red Hat supported environment.
>
> Highlights;
> * Full network and power redundancy; no single points of failure.
> * All off-the-shelf hardware; storage via DRBD.
> * Starts with the base OS install; no clustering experience required.
> * All software components explained.
> * All testing steps covered.
> * The configuration is used in production environments!
>
> This tutorial is totally free (no ads, no registration) and released
> under the Creative Commons 3.0 Share-Alike Non-Commercial license.
> Feedback is always appreciated!

-- 
+-----------------------------[ robin at redhat.com ]----+
| Robin Price II - RHCE,RHCDS,RHCVA                     |
| Technical Account Manager                             |
| Red Hat, Inc.                                         |
| w: +1 (919) 754 4412                                  |
| c: +1 (252) 474 3525                                  |
+---------[ Dissenters will inevitably abhor. ]---------+

From phil at elrepo.org  Wed Jan  4 15:36:36 2012
From: phil at elrepo.org (Phil Perry)
Date: Wed, 04 Jan 2012 15:36:36 +0000
Subject: [rhelv6-list] New Tutorial - RHCS + DRBD + KVM; 2-Node HA on EL6
In-Reply-To: <4F046E2F.8030809@redhat.com>
References: <4F031A11.606@alteeve.com> <4F046E2F.8030809@redhat.com>
Message-ID: <4F047204.7070502@elrepo.org>

On 04/01/12 15:20, Robin Price II wrote:
> Digimer,
>
> Wow. You put a lot of time into this. Well done!
>
> ~rp

+1

Really nice job :-)

From linux at alteeve.com  Wed Jan  4 15:47:04 2012
From: linux at alteeve.com (Digimer)
Date: Wed, 04 Jan 2012 10:47:04 -0500
Subject: [rhelv6-list] New Tutorial - RHCS + DRBD + KVM; 2-Node HA on EL6
In-Reply-To: <4F047204.7070502@elrepo.org>
References: <4F031A11.606@alteeve.com> <4F046E2F.8030809@redhat.com> <4F047204.7070502@elrepo.org>
Message-ID: <4F047478.807@alteeve.com>

On 01/04/2012 10:36 AM, Phil Perry wrote:
> On 04/01/12 15:20, Robin Price II wrote:
>> Digimer,
>>
>> Wow. You put a lot of time into this. Well done!
>>
>> ~rp
>
> +1
>
> Really nice job :-)

Thanks! I learned a *lot* writing it, so it was a win for me, too. :)

-- 
Digimer
E-Mail:              digimer at alteeve.com
Freenode handle:     digimer
Papers and Projects: http://alteeve.com
Node Assassin:       http://nodeassassin.org
"omg my singularity battery is dead again. stupid hawking radiation." - epitron

From amyagi at gmail.com  Fri Jan  6 16:18:32 2012
From: amyagi at gmail.com (Akemi Yagi)
Date: Fri, 6 Jan 2012 08:18:32 -0800
Subject: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days
Message-ID:

Hi,

There is a kernel bug that causes a system crash when the uptime goes
over 208.5 days. According to the available info, the patch [1] is now in
kernel 3.1.5. RHEL 6 is apparently affected (timer.h has the buggy code).
I was not able to find a bug report at bugzilla.redhat.com that seems
related to this bug, but my search was not extensive.

Is there anyone running a RHEL 6 system long enough to see this bug?

Akemi

[1] http://git.kernel.org/?p=linux/kernel/git/tip/tip.git;a=commitdiff;h=4cecf6d401a01d054afc1e5f605bcbfe553cb9b9

From lfarkas at lfarkas.org  Fri Jan  6 16:42:16 2012
From: lfarkas at lfarkas.org (Farkas Levente)
Date: Fri, 06 Jan 2012 17:42:16 +0100
Subject: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days
In-Reply-To:
References:
Message-ID: <4F072468.3050308@lfarkas.org>

On 01/06/2012 05:18 PM, Akemi Yagi wrote:
> Hi,
>
> There is a kernel bug that causes a system crash when the uptime goes
> over 208.5 days. According to the available info, the patch [1] is now in
> kernel 3.1.5. RHEL 6 is apparently affected (timer.h has the buggy code).
> I was not able to find a bug report at bugzilla.redhat.com that seems
> related to this bug, but my search was not extensive.
>
> Is there anyone running a RHEL 6 system long enough to see this bug?

2.6.32-71.24.1.el6.x86_64
# uptime
 17:40:32 up 219 days, 20:29,  1 user,  load average: 1.36, 0.57, 0.46

what's the problem?

-- 
Levente                             "Si vis pacem para bellum!"

From rprice at redhat.com  Fri Jan  6 16:55:08 2012
From: rprice at redhat.com (Robin Price II)
Date: Fri, 06 Jan 2012 11:55:08 -0500
Subject: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days
In-Reply-To:
References:
Message-ID: <4F07276C.1010501@redhat.com>

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=765720

This is private due to private information from customer use cases. If you
need further details, I would highly encourage you to contact Red Hat
support or your TAM.

Here is the initial information opened in the BZ:

"The following patch is an urgent fix for Linus' branch, which avoids an
unnecessary overflow in sched_clock; otherwise the kernel will crash after
209~250 days.
http://git.kernel.org/?p=linux/kernel/git/tip/tip.git;a=patch;h=4cecf6d401a01d054afc1e5f605bcbfe553cb9b9

In hundreds of days, the __cycles_2_ns calculation in sched_clock has an
overflow. cyc * per_cpu(cyc2ns, cpu) exceeds 64 bits, causing the final
value to become zero. We can solve this without losing any precision:
decompose the TSC into the quotient and remainder of division by the
scale factor, and then use this to convert the TSC into nanoseconds."

~rp

On 01/06/2012 11:18 AM, Akemi Yagi wrote:
> Hi,
>
> There is a kernel bug that causes a system crash when the uptime goes
> over 208.5 days. According to the available info, the patch [1] is now in
> kernel 3.1.5. RHEL 6 is apparently affected (timer.h has the buggy code).
> I was not able to find a bug report at bugzilla.redhat.com that seems
> related to this bug, but my search was not extensive.
>
> Is there anyone running a RHEL 6 system long enough to see this bug?
>
> Akemi
>
> [1] http://git.kernel.org/?p=linux/kernel/git/tip/tip.git;a=commitdiff;h=4cecf6d401a01d054afc1e5f605bcbfe553cb9b9

-- 
+-----------------------------[ robin at redhat.com ]----+
| Robin Price II - RHCE,RHCDS,RHCVA                     |
| Solutions Architect                                   |
| Red Hat, Inc.                                         |
| w: +1 (919) 754 4412                                  |
| c: +1 (252) 474 3525                                  |
+---------[ http://people.redhat.com/rprice ]-----------+

From marco.shaw at gmail.com  Fri Jan  6 16:53:11 2012
From: marco.shaw at gmail.com (Marco Shaw)
Date: Fri, 6 Jan 2012 12:53:11 -0400
Subject: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days
In-Reply-To: <4F072468.3050308@lfarkas.org>
References: <4F072468.3050308@lfarkas.org>
Message-ID:

"...in kernel 3.1.5..."?

> 2.6.32-71.24.1.el6.x86_64
> # uptime
>  17:40:32 up 219 days, 20:29,  1 user,  load average: 1.36, 0.57, 0.46
>
> what's the problem?

From amyagi at gmail.com  Fri Jan  6 16:54:43 2012
From: amyagi at gmail.com (Akemi Yagi)
Date: Fri, 6 Jan 2012 08:54:43 -0800
Subject: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days
In-Reply-To: <4F072468.3050308@lfarkas.org>
References: <4F072468.3050308@lfarkas.org>
Message-ID:

On Fri, Jan 6, 2012 at 8:42 AM, Farkas Levente wrote:
> On 01/06/2012 05:18 PM, Akemi Yagi wrote:
>> Hi,
>>
>> There is a kernel bug that causes a system crash when the uptime goes
>> over 208.5 days. According to the available info, the patch [1] is now in
>> kernel 3.1.5. RHEL 6 is apparently affected (timer.h has the buggy code).
>> I was not able to find a bug report at bugzilla.redhat.com that seems
>> related to this bug, but my search was not extensive.
>>
>> Is there anyone running a RHEL 6 system long enough to see this bug?
>
> 2.6.32-71.24.1.el6.x86_64
> # uptime
>  17:40:32 up 219 days, 20:29,  1 user,  load average: 1.36, 0.57, 0.46
>
> what's the problem?

According to the info I have seen, not all systems will see the bug. For
example, due to the nature of the bug, virtual machines are not affected.
I have also read that only Intel CPUs (Pentium 4 or newer) are relevant.

By the way, why 208.5 days?

2^54 / ( 24 * 60 * 60 * 1000 * 1000 * 1000 ) = 208.499983 days

Akemi
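That arithmetic is easy to check from a shell; a minimal sketch (only the
2^54-nanosecond threshold comes from the messages above; the commands
themselves are illustrative):

# 2^54 ns expressed in whole days -- prints 208
echo $(( (1 << 54) / 1000000000 / 86400 ))
# compare against the running system's uptime, in days
awk '{ printf "up %.1f days\n", $1 / 86400 }' /proc/uptime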
From amyagi at gmail.com  Fri Jan  6 16:55:57 2012
From: amyagi at gmail.com (Akemi Yagi)
Date: Fri, 6 Jan 2012 08:55:57 -0800
Subject: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days
In-Reply-To: <4F07276C.1010501@redhat.com>
References: <4F07276C.1010501@redhat.com>
Message-ID:

On Fri, Jan 6, 2012 at 8:55 AM, Robin Price II wrote:
> Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=765720
>
> This is private due to private information from customer use cases. If you
> need further details, I would highly encourage you to contact Red Hat
> support or your TAM.
>
> Here is the initial information opened in the BZ:
>
> "The following patch is an urgent fix for Linus' branch, which avoids an
> unnecessary overflow in sched_clock; otherwise the kernel will crash after
> 209~250 days.
>
> http://git.kernel.org/?p=linux/kernel/git/tip/tip.git;a=patch;h=4cecf6d401a01d054afc1e5f605bcbfe553cb9b9
>
> In hundreds of days, the __cycles_2_ns calculation in sched_clock has an
> overflow. cyc * per_cpu(cyc2ns, cpu) exceeds 64 bits, causing the final
> value to become zero. We can solve this without losing any precision:
> decompose the TSC into the quotient and remainder of division by the
> scale factor, and then use this to convert the TSC into nanoseconds."
>
> ~rp

Thank you for this post letting us know that Red Hat is now taking care
of this issue.

Akemi

From amyagi at gmail.com  Fri Jan  6 16:59:01 2012
From: amyagi at gmail.com (Akemi Yagi)
Date: Fri, 6 Jan 2012 08:59:01 -0800
Subject: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days
In-Reply-To:
References: <4F072468.3050308@lfarkas.org>
Message-ID:

On Fri, Jan 6, 2012 at 8:53 AM, Marco Shaw wrote:
> "...in kernel 3.1.5..."?
>
>> 2.6.32-71.24.1.el6.x86_64
>> # uptime
>>  17:40:32 up 219 days, 20:29,  1 user,  load average: 1.36, 0.57, 0.46
>>
>> what's the problem?

Kernel 2.6.32 (which RHEL 6 is based upon) is affected. In the upstream
(kernel.org) kernels, it looks like the patch is in:

2.6: 2.6.32.50
3.0: 3.0.13
3.1: 3.1.5

Akemi

From marco.shaw at gmail.com  Fri Jan  6 17:03:56 2012
From: marco.shaw at gmail.com (Marco Shaw)
Date: Fri, 6 Jan 2012 13:03:56 -0400
Subject: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days
In-Reply-To:
References: <4F072468.3050308@lfarkas.org>
Message-ID:

Oh, I misread, and sort of forgot about how RHEL handles updates... ;-)

On Fri, Jan 6, 2012 at 12:53 PM, Marco Shaw wrote:
> "...in kernel 3.1.5..."?
>
>> 2.6.32-71.24.1.el6.x86_64
>> # uptime
>>  17:40:32 up 219 days, 20:29,  1 user,  load average: 1.36, 0.57, 0.46
>>
>> what's the problem?

-- 
*Microsoft MVP - Windows PowerShell
https://mvp.support.microsoft.com/profile/Marco.Shaw
*Co-Author - Sams Windows PowerShell Unleashed 2nd Edition
*Blog - http://marcoshaw.blogspot.com

From rprice at redhat.com  Fri Jan  6 20:29:50 2012
From: rprice at redhat.com (Robin Price II)
Date: Fri, 06 Jan 2012 15:29:50 -0500
Subject: [rhelv6-list] KVM issues post RHEL6-1->6.2 update
In-Reply-To:
References:
Message-ID: <4F0759BE.9080904@redhat.com>

Ben and rhelv6-list,

Updated KVM host from RHEL 6.1 to RHEL 6.2, migrated guests fail to start:
https://access.redhat.com/kb/docs/DOC-68326

~rp

On 12/08/2011 04:38 AM, Ben wrote:
> On Thu, 8 Dec 2011, Ben wrote:
>
>> [...]
>> Once inside the image I looked at the grub.conf files and couldn't see
>> any issues. I umounted the image and tried booting into an older
>> kernel, and the guests booted successfully. "yum update" indicated an
>> incomplete transaction, so I ran "yum-complete-transaction" and then
>> "yum update kernel" and rebooted both guests successfully into the new
>> kernel. All now seems well. Phew.
>
> Note that the second command was in fact "yum reinstall kernel", not
> "yum update kernel". Apologies.
>
> Ben

-- 
+-----------------------------[ robin at redhat.com ]----+
| Robin Price II - RHCE,RHCDS,RHCVA                     |
| Inside Solutions Architect                            |
| Red Hat, Inc.                                         |
| w: +1 (919) 754 4412                                  |
| c: +1 (252) 474 3525                                  |
+---------[ http://people.redhat.com/rprice ]-----------+

From kaushalshriyan at gmail.com  Sat Jan  7 02:53:31 2012
From: kaushalshriyan at gmail.com (Kaushal Shriyan)
Date: Sat, 7 Jan 2012 08:23:31 +0530
Subject: [rhelv6-list] cron anacron fcron
Message-ID:

Hi,

Can someone please help me understand the differences between cron,
anacron, and fcron? And are there any other job schedulers available on
Linux?

Regards,

Kaushal

From o.h.weiergraeber at fz-juelich.de  Sat Jan  7 07:55:56 2012
From: o.h.weiergraeber at fz-juelich.de (Weiergräber, Oliver H.)
Date: Sat, 7 Jan 2012 08:55:56 +0100
Subject: [rhelv6-list] error communicating with RHN
In-Reply-To: <4150590B1B3DC84FA4ABA142CE4BC5DC02252C6D882D@MBX-CLUSTER01.ad.fz-juelich.de>
References: <4150590B1B3DC84FA4ABA142CE4BC5DC02252C6D882D@MBX-CLUSTER01.ad.fz-juelich.de>
Message-ID: <4150590B1B3DC84FA4ABA142CE4BC5DC02252C6D8833@MBX-CLUSTER01.ad.fz-juelich.de>

In the meantime I have got a little closer to the root of the problem.
First of all, the issue is temporary, and package lists on RHN are updated
with the next successful connection. So in this respect, there are no
consequences to deal with.

However, in subsequent attempts to directly install packages available on
the web via rpm, I sometimes saw name-resolution failures. When this
happened, simply repeating the command was always successful! I'm on a
private network using fixed IP addresses, and the DNS server entry is the
address of the router, which is supposed to handle things on its own. I
never observed a problem like this on any of my machines, with RHEL or
CentOS < 6.2, with the very same network setup. So the question arises
whether anything has changed in 6.2 which could make DNS resolution more
"unstable" - maybe just a different (too short) timeout until name
resolution is considered failed...?

Any ideas?

Oliver

________________________________________
From: rhelv6-list-bounces at redhat.com [rhelv6-list-bounces at redhat.com] On Behalf Of Weiergräber, Oliver H. [o.h.weiergraeber at fz-juelich.de]
Sent: Wednesday, December 28, 2011 7:18 PM
To: rhelv6-list at redhat.com
Subject: [rhelv6-list] error communicating with RHN

Hello,

following a fresh default installation of RHEL 6.2 (client) I just ran
"yum update" for the first time. The system successfully downloaded and
installed some 30 updates, but the process finished with this message:

There was an error communicating with RHN.
Package profile information could not be sent.
Error communicating with server. The message was:
Name or service not known
Installed products updated.

Does this require attention, or does it just indicate a temporary problem
on the Red Hat server? Is sending of "package profile information" after
installation of updates an essential operation? If so, should this
information be sent manually in this case? During installation, the system
was registered with RHN without any problem, and profiles were transferred
at that time.

Best regards,
Oliver

From hbrown at divms.uiowa.edu  Mon Jan  9 16:18:09 2012
From: hbrown at divms.uiowa.edu (Hugh Brown)
Date: Mon, 09 Jan 2012 10:18:09 -0600
Subject: [rhelv6-list] cron anacron fcron
In-Reply-To:
References:
Message-ID: <4F0B1341.5020302@divms.uiowa.edu>

On 01/06/2012 08:53 PM, Kaushal Shriyan wrote:
> Hi,
>
> Can someone please help me understand the differences between cron,
> anacron, and fcron? And are there any other job schedulers available on
> Linux?
>
> Regards,
>
> Kaushal

cron is the name of the original job scheduler for Unix/Linux systems.
There have been a variety of implementations over the years; "man cron" on
your system for more information. Use "yum search cron" for a list of the
packages that are available on your RHEL 6 system.

anacron was written for systems that aren't always on during the "off
hours" when normal cron jobs are run. It behaves similarly to cron except
that, when the system boots, it checks when its jobs were last run. If it
has been too long since the daily/weekly/monthly jobs were last done, it
will sleep for a random amount of time and then kick off those jobs. A
comparison of the two formats is sketched below.

I haven't used fcron; a quick web search seems to indicate that it is
trying to solve the same problems that anacron does.

What do you mean by Linux schedulers?

Hugh
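To make the cron/anacron distinction concrete, a minimal illustrative pair
of entries (the job name and script path are invented for the example, not
taken from this thread):

# /etc/crontab format (m h dom mon dow user command): runs at 03:00 sharp,
# and is simply missed if the machine is powered off at that time
0 3 * * * root /usr/local/bin/nightly-report.sh

# /etc/anacrontab format (period-in-days  delay-in-minutes  job-id  command):
# runs once per day, whenever the machine happens to be up
1  15  nightly-report  /usr/local/bin/nightly-report.sh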
From ben at sprymed.com  Tue Jan 10 14:48:50 2012
From: ben at sprymed.com (Adams, Benjamin)
Date: Tue, 10 Jan 2012 09:48:50 -0500
Subject: [rhelv6-list] Hardware Requirements to run in VM
Message-ID:

Hello,

Just wondering: what are the minimum hardware requirements to run RHEL
Server in a virtual environment?

Thanks,
Ben Adams -- http://www.SpryMed.com/

From gianluca.cecchi at gmail.com  Tue Jan 10 15:09:12 2012
From: gianluca.cecchi at gmail.com (Gianluca Cecchi)
Date: Tue, 10 Jan 2012 16:09:12 +0100
Subject: [rhelv6-list] Hardware Requirements to run in VM
In-Reply-To:
References:
Message-ID:

On Tue, Jan 10, 2012 at 3:48 PM, Adams, Benjamin wrote:
> Hello,
>
> Just wondering: what are the minimum hardware requirements to run RHEL
> Server in a virtual environment?

If the hypervisor is intended to be RHEL 6, you ought to start by reading
this:
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/index.html

From kaushalshriyan at gmail.com  Wed Jan 11 23:57:43 2012
From: kaushalshriyan at gmail.com (Kaushal Shriyan)
Date: Thu, 12 Jan 2012 05:27:43 +0530
Subject: [rhelv6-list] sudoers file
Message-ID:

Hi,

Can someone please point me to documentation on setting up sudoers in
RHEL 6?

Regards,

Kaushal

From marco.shaw at gmail.com  Thu Jan 12 01:26:29 2012
From: marco.shaw at gmail.com (Marco Shaw)
Date: Wed, 11 Jan 2012 21:26:29 -0400
Subject: [rhelv6-list] sudoers file
In-Reply-To:
References:
Message-ID:

Hi,

Not sure if you're just looking for the file... It is pretty well
self-documented. The file is /etc/sudoers, but you can simply run this
command to open it directly:

# visudo

Marco

On Wed, Jan 11, 2012 at 7:57 PM, Kaushal Shriyan wrote:
> Hi,
>
> Can someone please point me to documentation on setting up sudoers in
> RHEL 6?
>
> Regards,
>
> Kaushal

-- 
*Microsoft MVP - Windows PowerShell
https://mvp.support.microsoft.com/profile/Marco.Shaw
*Co-Author - Sams Windows PowerShell Unleashed 2nd Edition
*Blog - http://marcoshaw.blogspot.com
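For reference, the sudoers grammar supports entries like the following
(the user name and command are invented for illustration); always edit
through visudo so syntax errors are caught before the file is saved:

## let members of the wheel group run any command as any user
%wheel  ALL=(ALL)       ALL
## let one user restart a single service, with no password prompt
kaushal ALL=(root)      NOPASSWD: /sbin/service httpd restart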
From gianluca.cecchi at gmail.com  Thu Jan 12 15:56:16 2012
From: gianluca.cecchi at gmail.com (Gianluca Cecchi)
Date: Thu, 12 Jan 2012 16:56:16 +0100
Subject: [rhelv6-list] KVM issues post RHEL6-1->6.2 update
In-Reply-To: <4F0759BE.9080904@redhat.com>
References: <4F0759BE.9080904@redhat.com>
Message-ID:

On Fri, Jan 6, 2012 at 9:29 PM, Robin Price II wrote:
> Ben and rhelv6-list,
>
> Updated KVM host from RHEL 6.1 to RHEL 6.2, migrated guests fail to start:
> https://access.redhat.com/kb/docs/DOC-68326

It seems I have a similar problem, but in the opposite direction of
migration. I have 3 nodes: two with the 6.1 version + some updates (but
below 6.2) and one with 6.2. I have a VM on a 6.1 hypervisor and I'm able
to live migrate it to the 6.2 host, but then I am not able to migrate it
back from 6.2 to either one of the 6.1 hosts...

Relevant components:

6.1 hosts:
qemu-kvm-0.12.1.2-2.160.el6_1.8.x86_64
libvirt-0.8.7-18.el6_1.1.x86_64
kernel-2.6.32-131.17.1.el6.x86_64

6.2 host:
qemu-kvm-0.12.1.2-2.209.el6_2.1.x86_64
libvirt-0.9.4-23.el6_2.1.x86_64
kernel-2.6.32-220.2.1.el6.x86_64

The guest is named dacsmaster and it is RHEL 5.4. The migration from 6.2
to 6.1 fails with:

# clusvcadm -M vm:dacsmaster -m intrarhev2
Trying to migrate vm:dacsmaster to intrarhev2...Failed; service running on original owner

/var/log/messages information:

On the source hypervisor:
Jan 12 15:45:17 rhev1 rgmanager[9267]: Migrating vm:dacsmaster to intrarhev2
Jan 12 15:45:18 rhev1 rgmanager[22193]: [vm] Migrate dacsmaster to intrarhev2 failed:
Jan 12 15:45:18 rhev1 rgmanager[22215]: [vm] error: internal error missing hostuuid element in migration data
Jan 12 15:45:18 rhev1 rgmanager[9267]: migrate on vm "dacsmaster" returned 150 (unspecified)
Jan 12 15:45:18 rhev1 rgmanager[9267]: Migration of vm:dacsmaster to intrarhev2 failed; return code 150

On the target hypervisor:
Jan 12 15:45:18 rhev2 kernel: device vnet4 entered promiscuous mode
Jan 12 15:45:18 rhev2 kernel: brvlan65: topology change detected, propagating
Jan 12 15:45:18 rhev2 kernel: brvlan65: port 3(vnet4) entering forwarding state
Jan 12 15:45:18 rhev2 libvirtd: 15:45:18.113: 31182: warning : qemudStartVMDaemon:3336 : Executing /usr/libexec/qemu-kvm
Jan 12 15:45:18 rhev2 libvirtd: 15:45:18.119: 31182: warning : qemudStartVMDaemon:3346 : Executing done /usr/libexec/qemu-kvm
Jan 12 15:45:18 rhev2 qemu-kvm: Could not find keytab file: /etc/qemu/krb5.tab: No such file or directory
Jan 12 15:45:18 rhev2 libvirtd: 15:45:18.332: 8787: error : virCgroupRemoveRecursively:679 : Unable to remove /cgroup/cpu/libvirt/qemu/dacsmaster/ (16)
Jan 12 15:45:18 rhev2 libvirtd: 15:45:18.332: 8787: error : virCgroupRemoveRecursively:679 : Unable to remove /cgroup/cpuacct/libvirt/qemu/dacsmaster/ (16)
Jan 12 15:45:18 rhev2 libvirtd: 15:45:18.332: 8787: error : virCgroupRemoveRecursively:679 : Unable to remove /cgroup/cpuset/libvirt/qemu/dacsmaster/ (16)
Jan 12 15:45:18 rhev2 libvirtd: 15:45:18.332: 8787: error : virCgroupRemoveRecursively:679 : Unable to remove /cgroup/memory/libvirt/qemu/dacsmaster/ (16)
Jan 12 15:45:18 rhev2 libvirtd: 15:45:18.332: 8787: error : virCgroupRemoveRecursively:679 : Unable to remove /cgroup/devices/libvirt/qemu/dacsmaster/ (16)
Jan 12 15:45:18 rhev2 libvirtd: 15:45:18.332: 8787: error : virCgroupRemoveRecursively:679 : Unable to remove /cgroup/freezer/libvirt/qemu/dacsmaster/ (16)
Jan 12 15:45:18 rhev2 libvirtd: 15:45:18.332: 8787: error : virCgroupRemoveRecursively:679 : Unable to remove /cgroup/blkio/libvirt/qemu/dacsmaster/ (16)
Jan 12 15:45:18 rhev2 kernel: brvlan65: port 3(vnet4) entering disabled state
Jan 12 15:45:18 rhev2 kernel: device vnet4 left promiscuous mode
Jan 12 15:45:18 rhev2 kernel: brvlan65: port 3(vnet4) entering disabled state

Powering off the guest (through a "disable" action on the related RHCS
service) and powering it on on a 6.1 host, I'm able to live migrate it to
either the other 6.1 host or the 6.2 one...

/etc/libvirt/libvirtd.conf doesn't contain anything for the host_uuid part
that I see referenced in the messages of the source 6.2 hypervisor, and
the dmidecode command succeeds on all three systems. They are identical
servers; for example, the output on one of them gives:

# dmidecode -s system-uuid
34353439-3036-435A-4A38-303330393332

BTW: the contents of the KB article referred to above seem quite limiting
in flexibility during upgrades between minor versions...

Thanks in advance,
Gianluca

From gianluca.cecchi at gmail.com  Thu Jan 12 16:56:18 2012
From: gianluca.cecchi at gmail.com (Gianluca Cecchi)
Date: Thu, 12 Jan 2012 17:56:18 +0100
Subject: [rhelv6-list] KVM issues post RHEL6-1->6.2 update
In-Reply-To:
References: <4F0759BE.9080904@redhat.com>
Message-ID:

Some additional notes:

If I freeze the VM's cluster service and, on the source 6.2 host, manually
run the virsh command that previously succeeded on 6.1, it fails (as
expected). This lets me rule out the different release versions of the
cluster-related components between 6.1 and 6.2 (I think...):

# virsh migrate --live dacsmaster qemu+ssh://intrarhev2/system tcp:intrarhev2
error: internal error missing hostuuid element in migration data

It seems I'm able to connect through virsh from rhev1 (6.2) to rhev2 (6.1):

rhev1 prompt--
# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # connect qemu+ssh://intrarhev2/system
virsh # list

and this command shows me the domains currently running on rhev2, as
expected. So I presume there is something related to the migration itself,
and possibly different required parameters in the 6.2-shipped version of
libvirt (0.9.4 vs 0.8.7).
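One sketch that may be worth testing against the hostuuid error above (an
assumption, not a verified fix): libvirtd.conf ships with a commented-out
host_uuid option, and setting it explicitly on each host, from that host's
own SMBIOS UUID, is cheap to try:

# /etc/libvirt/libvirtd.conf -- each host gets its *own* distinct UUID,
# e.g. the value that "dmidecode -s system-uuid" prints on that host
host_uuid = "34353439-3036-435A-4A38-303330393332"

# then restart the daemon
service libvirtd restart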
Jan 9 23:38:51 minigiant5 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Jan 9 23:38:51 minigiant5 kernel: jbd2/md0-8 D 0000000000000005 0 16169 2 0x00000000 Jan 9 23:38:51 minigiant5 kernel: ffff88010153bc20 0000000000000046 0000000000000000 ffff88010153bbe4 Jan 9 23:38:51 minigiant5 kernel: 0000000000000000 ffff88012dc24c00 ffff880028215f80 0000000000000400 Jan 9 23:38:51 minigiant5 kernel: ffff880108f230f8 ffff88010153bfd8 000000000000f4e8 ffff880108f230f8 Jan 9 23:38:51 minigiant5 kernel: Call Trace: Jan 9 23:38:51 minigiant5 kernel: [] ? sync_buffer+0x0/0x50 Jan 9 23:38:51 minigiant5 kernel: [] io_schedule+0x73/0xc0 Jan 9 23:38:51 minigiant5 kernel: [] sync_buffer+0x40/0x50 Jan 9 23:38:51 minigiant5 kernel: [] __wait_on_bit+0x5f/0x90 Jan 9 23:38:51 minigiant5 kernel: [] ? sync_buffer+0x0/0x50 Jan 9 23:38:51 minigiant5 kernel: [] out_of_line_wait_on_bit+0x78/0x90 Jan 9 23:38:51 minigiant5 kernel: [] ? wake_bit_function+0x0/0x50 Jan 9 23:38:51 minigiant5 kernel: [] __wait_on_buffer+0x26/0x30 Jan 9 23:38:51 minigiant5 kernel: [] jbd2_journal_commit_transaction+0xa76/0x14b0 [jbd2] Jan 9 23:38:51 minigiant5 kernel: [] ? __switch_to+0xd0/0x320 Jan 9 23:38:51 minigiant5 kernel: [] ? try_to_del_timer_sync+0x7b/0xe0 Jan 9 23:38:51 minigiant5 kernel: [] kjournald2+0xb8/0x220 [jbd2] Jan 9 23:38:51 minigiant5 kernel: [] ? autoremove_wake_function+0x0/0x40 Jan 9 23:38:51 minigiant5 kernel: [] ? kjournald2+0x0/0x220 [jbd2] Jan 9 23:38:51 minigiant5 kernel: [] kthread+0x96/0xa0 Jan 9 23:38:51 minigiant5 kernel: [] child_rip+0xa/0x20 Jan 9 23:38:51 minigiant5 kernel: [] ? kthread+0x0/0xa0 Jan 9 23:38:51 minigiant5 kernel: [] ? child_rip+0x0/0x20 Jan 9 23:38:51 minigiant5 kernel: INFO: task iozone:17280 blocked for more than 120 seconds. Jan 9 23:38:51 minigiant5 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Jan 9 23:38:51 minigiant5 kernel: iozone D 0000000000000000 0 17280 16178 0x00000000 Jan 9 23:38:51 minigiant5 kernel: ffff880127e779a8 0000000000000086 ffff880127e77998 00000000ffffffff Jan 9 23:38:51 minigiant5 kernel: ffffffff81ea1100 0000000000000286 ffff880127e77958 ffffffff8107c99b Jan 9 23:38:51 minigiant5 kernel: ffff8800b9071b38 ffff880127e77fd8 000000000000f4e8 ffff8800b9071b38 Jan 9 23:38:51 minigiant5 kernel: Call Trace: Jan 9 23:38:51 minigiant5 kernel: [] ? try_to_del_timer_sync+0x7b/0xe0 Jan 9 23:38:51 minigiant5 kernel: [] ? prepare_to_wait+0x4e/0x80 Jan 9 23:38:51 minigiant5 kernel: [] do_get_write_access+0x29d/0x520 [jbd2] Jan 9 23:38:51 minigiant5 kernel: [] ? wake_bit_function+0x0/0x50 Jan 9 23:38:51 minigiant5 kernel: [] jbd2_journal_get_write_access+0x31/0x50 [jbd2] Jan 9 23:38:51 minigiant5 kernel: [] __ext4_journal_get_write_access+0x38/0x80 [ext4] Jan 9 23:38:51 minigiant5 kernel: [] ext4_reserve_inode_write+0x73/0xa0 [ext4] Jan 9 23:38:51 minigiant5 kernel: [] ext4_mark_inode_dirty+0x4c/0x1d0 [ext4] Jan 9 23:38:51 minigiant5 kernel: [] ext4_dirty_inode+0x40/0x60 [ext4] Jan 9 23:38:51 minigiant5 kernel: [] __mark_inode_dirty+0x3b/0x160 Jan 9 23:38:51 minigiant5 kernel: [] file_update_time+0xf2/0x170 Jan 9 23:38:51 minigiant5 kernel: [] __generic_file_aio_write+0x220/0x480 Jan 9 23:38:51 minigiant5 kernel: [] ? default_wake_function+0x0/0x20 Jan 9 23:38:51 minigiant5 kernel: [] generic_file_aio_write+0x6f/0xe0 Jan 9 23:38:51 minigiant5 kernel: [] ext4_file_write+0x61/0x1e0 [ext4] Jan 9 23:38:51 minigiant5 kernel: [] ? 
fsnotify_add_notify_event+0x12d/0x280 Jan 9 23:38:51 minigiant5 kernel: [] do_sync_write+0xfa/0x140 Jan 9 23:38:51 minigiant5 kernel: [] ? fsnotify+0x113/0x160 Jan 9 23:38:51 minigiant5 kernel: [] ? autoremove_wake_function+0x0/0x40 Jan 9 23:38:51 minigiant5 kernel: [] ? selinux_file_permission+0xfb/0x150 Jan 9 23:38:51 minigiant5 kernel: [] ? security_file_permission+0x16/0x20 Jan 9 23:38:51 minigiant5 kernel: [] vfs_write+0xb8/0x1a0 Jan 9 23:38:51 minigiant5 kernel: [] sys_write+0x51/0x90 Jan 9 23:38:51 minigiant5 kernel: [] ? sys_lseek+0x53/0x80 Jan 9 23:38:51 minigiant5 kernel: [] system_call_fastpath+0x16/0x1b Jan 9 23:38:51 minigiant5 kernel: INFO: task iozone:17282 blocked for more than 120 seconds. Jan 9 23:38:51 minigiant5 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Jan 9 23:38:51 minigiant5 kernel: iozone D 0000000000000000 0 17282 16178 0x00000000 Jan 9 23:38:51 minigiant5 kernel: ffff880108fe59a8 0000000000000082 0000000000000000 00000000ffffffff Jan 9 23:38:51 minigiant5 kernel: ffffffff81ea1100 0000000000000286 ffff880108fe5958 ffffffff8107c99b Jan 9 23:38:51 minigiant5 kernel: ffff880128e45078 ffff880108fe5fd8 000000000000f4e8 ffff880128e45078 Jan 9 23:38:51 minigiant5 kernel: Call Trace: Jan 9 23:38:51 minigiant5 kernel: [] ? try_to_del_timer_sync+0x7b/0xe0 Jan 9 23:38:51 minigiant5 kernel: [] do_get_write_access+0x29d/0x520 [jbd2] Jan 9 23:38:51 minigiant5 kernel: [] ? wake_bit_function+0x0/0x50 Jan 9 23:38:51 minigiant5 kernel: [] jbd2_journal_get_write_access+0x31/0x50 [jbd2] Jan 9 23:38:51 minigiant5 kernel: [] __ext4_journal_get_write_access+0x38/0x80 [ext4] Jan 9 23:38:51 minigiant5 kernel: [] ext4_reserve_inode_write+0x73/0xa0 [ext4] Jan 9 23:38:51 minigiant5 kernel: [] ext4_mark_inode_dirty+0x4c/0x1d0 [ext4] Jan 9 23:38:51 minigiant5 kernel: [] ext4_dirty_inode+0x40/0x60 [ext4] Jan 9 23:38:51 minigiant5 kernel: [] __mark_inode_dirty+0x3b/0x160 Jan 9 23:38:51 minigiant5 kernel: [] file_update_time+0xf2/0x170 Jan 9 23:38:51 minigiant5 kernel: [] __generic_file_aio_write+0x220/0x480 Jan 9 23:38:51 minigiant5 kernel: [] ? default_wake_function+0x0/0x20 Jan 9 23:38:51 minigiant5 kernel: [] generic_file_aio_write+0x6f/0xe0 Jan 9 23:38:51 minigiant5 kernel: [] ext4_file_write+0x61/0x1e0 [ext4] Jan 9 23:38:51 minigiant5 kernel: [] ? fsnotify_add_notify_event+0x12d/0x280 Jan 9 23:38:51 minigiant5 kernel: [] do_sync_write+0xfa/0x140 Jan 9 23:38:51 minigiant5 kernel: [] ? fsnotify+0x113/0x160 Jan 9 23:38:51 minigiant5 kernel: [] ? autoremove_wake_function+0x0/0x40 Jan 9 23:38:51 minigiant5 kernel: [] ? selinux_file_permission+0xfb/0x150 Jan 9 23:38:51 minigiant5 kernel: [] ? security_file_permission+0x16/0x20 Jan 9 23:38:51 minigiant5 kernel: [] vfs_write+0xb8/0x1a0 Jan 9 23:38:51 minigiant5 kernel: [] sys_write+0x51/0x90 Jan 9 23:38:51 minigiant5 kernel: [] system_call_fastpath+0x16/0x1b Jan 10 00:06:51 minigiant5 kernel: INFO: task jbd2/md0-8:16169 blocked for more than 120 seconds. Jan 10 00:06:51 minigiant5 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
Jan 10 00:06:51 minigiant5 kernel: jbd2/md0-8 D 0000000000000005 0 16169 2 0x00000000 Jan 10 00:06:51 minigiant5 kernel: ffff88010153bc20 0000000000000046 ffff88010153bbe8 ffff88010153bbe4 Jan 10 00:06:51 minigiant5 kernel: 0000000000000001 ffff88012dc24c00 ffff8800282f5f80 0000000000000400 Jan 10 00:06:51 minigiant5 kernel: ffff880108f230f8 ffff88010153bfd8 000000000000f4e8 ffff880108f230f8 Jan 10 00:06:51 minigiant5 kernel: Call Trace: Jan 10 00:06:51 minigiant5 kernel: [] ? sync_buffer+0x0/0x50 Jan 10 00:06:51 minigiant5 kernel: [] io_schedule+0x73/0xc0 Jan 10 00:06:51 minigiant5 kernel: [] sync_buffer+0x40/0x50 Jan 10 00:06:51 minigiant5 kernel: [] __wait_on_bit+0x5f/0x90 Jan 10 00:06:51 minigiant5 kernel: [] ? sync_buffer+0x0/0x50 Jan 10 00:06:51 minigiant5 kernel: [] out_of_line_wait_on_bit+0x78/0x90 Jan 10 00:06:51 minigiant5 kernel: [] ? wake_bit_function+0x0/0x50 Jan 10 00:06:51 minigiant5 kernel: [] __wait_on_buffer+0x26/0x30 Jan 10 00:06:51 minigiant5 kernel: [] jbd2_journal_commit_transaction+0x1131/0x14b0 [jbd2] Jan 10 00:06:51 minigiant5 kernel: [] ? __switch_to+0xd0/0x320 Jan 10 00:06:51 minigiant5 kernel: [] ? try_to_del_timer_sync+0x7b/0xe0 Jan 10 00:06:51 minigiant5 kernel: [] kjournald2+0xb8/0x220 [jbd2] Jan 10 00:06:51 minigiant5 kernel: [] ? autoremove_wake_function+0x0/0x40 Jan 10 00:06:51 minigiant5 kernel: [] ? kjournald2+0x0/0x220 [jbd2] Jan 10 00:06:51 minigiant5 kernel: [] kthread+0x96/0xa0 Jan 10 00:06:51 minigiant5 kernel: [] child_rip+0xa/0x20 Jan 10 00:06:51 minigiant5 kernel: [] ? kthread+0x0/0xa0 Jan 10 00:06:51 minigiant5 kernel: [] ? child_rip+0x0/0x20 Jan 10 00:16:51 minigiant5 kernel: INFO: task jbd2/md0-8:16169 blocked for more than 120 seconds. Jan 10 00:16:51 minigiant5 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Jan 10 00:16:51 minigiant5 kernel: jbd2/md0-8 D 0000000000000005 0 16169 2 0x00000000 Jan 10 00:16:51 minigiant5 kernel: ffff88010153bc20 0000000000000046 ffff88010153bbe8 ffff88010153bbe4 Jan 10 00:16:51 minigiant5 kernel: 0000000000000001 ffff88012dc24c00 ffff880028235f80 0000000000000400 Jan 10 00:16:51 minigiant5 kernel: ffff880108f230f8 ffff88010153bfd8 000000000000f4e8 ffff880108f230f8 Jan 10 00:16:51 minigiant5 kernel: Call Trace: Jan 10 00:16:51 minigiant5 kernel: [] ? sync_buffer+0x0/0x50 Jan 10 00:16:51 minigiant5 kernel: [] io_schedule+0x73/0xc0 Jan 10 00:16:51 minigiant5 kernel: [] sync_buffer+0x40/0x50 Jan 10 00:16:51 minigiant5 kernel: [] __wait_on_bit+0x5f/0x90 Jan 10 00:16:51 minigiant5 kernel: [] ? sync_buffer+0x0/0x50 Jan 10 00:16:51 minigiant5 kernel: [] out_of_line_wait_on_bit+0x78/0x90 Jan 10 00:16:51 minigiant5 kernel: [] ? wake_bit_function+0x0/0x50 Jan 10 00:16:51 minigiant5 kernel: [] __wait_on_buffer+0x26/0x30 Jan 10 00:16:51 minigiant5 kernel: [] jbd2_journal_commit_transaction+0x1131/0x14b0 [jbd2] Jan 10 00:16:51 minigiant5 kernel: [] ? __switch_to+0xd0/0x320 Jan 10 00:16:51 minigiant5 kernel: [] ? try_to_del_timer_sync+0x7b/0xe0 Jan 10 00:16:51 minigiant5 kernel: [] kjournald2+0xb8/0x220 [jbd2] Jan 10 00:16:51 minigiant5 kernel: [] ? autoremove_wake_function+0x0/0x40 Jan 10 00:16:51 minigiant5 kernel: [] ? kjournald2+0x0/0x220 [jbd2] Jan 10 00:16:51 minigiant5 kernel: [] kthread+0x96/0xa0 Jan 10 00:16:51 minigiant5 kernel: [] child_rip+0xa/0x20 Jan 10 00:16:51 minigiant5 kernel: [] ? kthread+0x0/0xa0 Jan 10 00:16:51 minigiant5 kernel: [] ? 
Version-Release number of selected component (if applicable):

[root at minigiant5 ~]# uname -a
Linux minigiant5 2.6.32-220.2.1.el6.x86_64 #1 SMP Tue Dec 13 16:21:34 EST 2011 x86_64 x86_64 x86_64 GNU/Linux
[root at minigiant5 ~]# dmsetup version
Library version:   1.02.66-RHEL6 (2011-10-12)
Driver version:    4.22.6

2 x 1 TB SATA drives with software RAID 1:
scsi-SATA_SAMSUNG_HD103SJS246J9KB510183
scsi-SATA_SAMSUNG_HD103SJS246J9KB510187

System Information:
Manufacturer: Dell Inc.
Product Name: OptiPlex 990
RAM: 4GB
CPU: Quad Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz with HT

How reproducible:
Always

Steps to Reproduce:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0
mount /dev/md0 /storage/ -o noatime
iozone -R -l 5 -u 5 -r 4k -s 5000m -F /storage/f1 /storage/f2 /storage/f3 /storage/f4 /storage/f5

Actual results:
The process takes much longer to complete due to EXT4 FS lockups.

Expected results:
The partition, along with the process, should not lock up due to FS issues.

Additional info:
It takes a while to run, but it eventually locks up the partition.

Ilya Musayev - Systems Architect
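Since the scheduler setting comes up again in the replies that follow
(noop vs. cfq), a quick way to confirm and set what the md member disks
are actually using; the device names follow the reproduction steps above:

# the bracketed entry is the active scheduler for each member disk
cat /sys/block/sda/queue/scheduler /sys/block/sdb/queue/scheduler
# switch both to noop for the duration of the test
echo noop > /sys/block/sda/queue/scheduler
echo noop > /sys/block/sdb/queue/scheduler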
From brilong at cisco.com  Thu Jan 12 18:40:34 2012
From: brilong at cisco.com (Brian Long)
Date: Thu, 12 Jan 2012 13:40:34 -0500
Subject: [rhelv6-list] RHEL6.2 Kernel/EXT4 bug
In-Reply-To:
References:
Message-ID: <4F0F2922.8060303@cisco.com>

On 1/12/12 12:14 PM, Musayev, Ilya wrote:
> Curious if anyone has seen this in their RHEL 6.2 setups. If you have 6.1
> or 6.2, please try this out and see what happens. The list of commands to
> reproduce is below; the latest iozone is required.
>
> https://bugzilla.redhat.com/show_bug.cgi?id=773377

I put the same kernel on my RHEL 6.2 workstation with a single drive and
ran iozone with the same parameters. I don't have the drive mirrored, and
I had to change the scheduler to noop since it was cfq by default.

The only partition I had with enough free space is encrypted, so kcryptd
was taking 100% CPU while running iozone. Have you narrowed it down to
md-only? What happens if you run the same test on just one of your drives?

I got a kernel oops early on, but no ext4 errors:

Jan 12 12:51:26 brilong-lnx2 kernel: ------------[ cut here ]------------
Jan 12 12:51:26 brilong-lnx2 kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted)
Jan 12 12:51:26 brilong-lnx2 kernel: Hardware name: IBM System x3200 -[4362PAY]-
Jan 12 12:51:26 brilong-lnx2 kernel: Modules linked in: autofs4 sunrpc cpufreq_ondemand acpi_cpufreq freq_table mperf ipv6 ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack iptable_filter ip_tables sha256_generic cryptd aes_x86_64 aes_generic cbc dm_crypt uinput sg microcode serio_raw i2c_i801 iTCO_wdt iTCO_vendor_support tg3 i3000_edac edac_core ext4 mbcache jbd2 sd_mod crc_t10dif sr_mod cdrom pata_acpi ata_generic ata_piix radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
Jan 12 12:51:26 brilong-lnx2 kernel: Pid: 23, comm: kblockd/1 Not tainted 2.6.32-220.2.1.el6.x86_64 #1
Jan 12 12:51:26 brilong-lnx2 kernel: Call Trace:
Jan 12 12:51:26 brilong-lnx2 kernel: [] ? warn_slowpath_common+0x87/0xc0
Jan 12 12:51:26 brilong-lnx2 kernel: [] ? warn_slowpath_null+0x1a/0x20
Jan 12 12:51:26 brilong-lnx2 kernel: [] ? thread_return+0x232/0x79d
Jan 12 12:51:26 brilong-lnx2 kernel: [] ? blk_unplug_work+0x0/0x70
Jan 12 12:51:26 brilong-lnx2 kernel: [] ? blk_unplug_work+0x0/0x70
Jan 12 12:51:26 brilong-lnx2 kernel: [] ? worker_thread+0x1fc/0x2a0
Jan 12 12:51:26 brilong-lnx2 kernel: [] ? autoremove_wake_function+0x0/0x40
Jan 12 12:51:26 brilong-lnx2 kernel: [] ? worker_thread+0x0/0x2a0
Jan 12 12:51:26 brilong-lnx2 kernel: [] ? kthread+0x96/0xa0
Jan 12 12:51:26 brilong-lnx2 kernel: [] ? child_rip+0xa/0x20
Jan 12 12:51:26 brilong-lnx2 kernel: [] ? kthread+0x0/0xa0
Jan 12 12:51:26 brilong-lnx2 kernel: [] ? child_rip+0x0/0x20
Jan 12 12:51:26 brilong-lnx2 kernel: ---[ end trace aeef27db2e12775f ]---

/Brian/

-- 
Brian Long
Corporate Security Programs Org
Cisco

From mstevens at imt-systems.com  Thu Jan 12 18:53:16 2012
From: mstevens at imt-systems.com (Morten Stevens)
Date: Thu, 12 Jan 2012 19:53:16 +0100
Subject: [rhelv6-list] Kernel 2.6.32-220.2.1 KVM bug?
Message-ID: <62b330e99254da665af02b69d485b940@imt-systems.com>

Hi,

After updating to 6.2, I see the following errors on two different
systems. Both servers are running as KVM hypervisors with multiple Linux
guests.

System 1)

Jan 12 00:30:35 kvm-001 kernel: ------------[ cut here ]------------
Jan 12 00:30:35 kvm-001 kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted)
Jan 12 00:30:35 kvm-001 kernel: Hardware name: IBM System x3550 -[7978C2G]-
Jan 12 00:30:35 kvm-001 kernel: Modules linked in: ebtable_nat ebtables sunrpc bridge stp llc ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 vhost_net macvtap macvlan tun kvm_intel kvm bnx2 ses enclosure ibmpex ibmaem ipmi_msghandler ics932s401 sg serio_raw i2c_i801 iTCO_wdt iTCO_vendor_support i5000_edac edac_core i5k_amb ioatdma dca shpchp ext4 mbcache jbd2 sd_mod crc_t10dif sr_mod cdrom aacraid pata_acpi ata_generic ata_piix radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core dm_mod [last unloaded: scsi_wait_scan]
Jan 12 00:30:35 kvm-001 kernel: Pid: 1572, comm: qemu-kvm Not tainted 2.6.32-220.2.1.el6.x86_64 #1
Jan 12 00:30:35 kvm-001 kernel: Call Trace:
Jan 12 00:30:35 kvm-001 kernel: [] ? warn_slowpath_common+0x87/0xc0
Jan 12 00:30:35 kvm-001 kernel: [] ? warn_slowpath_null+0x1a/0x20
Jan 12 00:30:35 kvm-001 kernel: [] ? thread_return+0x232/0x79d
Jan 12 00:30:35 kvm-001 kernel: [] ? vmx_vcpu_load+0x89/0x130 [kvm_intel]
Jan 12 00:30:35 kvm-001 kernel: [] ? __vmx_load_host_state+0xe6/0x100 [kvm_intel]
Jan 12 00:30:35 kvm-001 kernel: [] ? kvm_vcpu_block+0x75/0xc0 [kvm]
Jan 12 00:30:35 kvm-001 kernel: [] ? autoremove_wake_function+0x0/0x40
Jan 12 00:30:35 kvm-001 kernel: [] ? kvm_arch_vcpu_ioctl_run+0x49c/0xf10 [kvm]
Jan 12 00:30:35 kvm-001 kernel: [] ? kvm_vcpu_ioctl+0x522/0x670 [kvm]
Jan 12 00:30:35 kvm-001 kernel: [] ? default_wake_function+0x12/0x20
Jan 12 00:30:35 kvm-001 kernel: [] ? pollwake+0x56/0x60
Jan 12 00:30:35 kvm-001 kernel: [] ? default_wake_function+0x0/0x20
Jan 12 00:30:35 kvm-001 kernel: [] ? __wake_up_common+0x59/0x90
Jan 12 00:30:35 kvm-001 kernel: [] ? __dequeue_entity+0x30/0x50
Jan 12 00:30:35 kvm-001 kernel: [] ? vfs_ioctl+0x22/0xa0
Jan 12 00:30:35 kvm-001 kernel: [] ? __wake_up_locked_key+0x18/0x20
Jan 12 00:30:35 kvm-001 kernel: [] ? eventfd_write+0x193/0x1d0
Jan 12 00:30:35 kvm-001 kernel: [] ? do_vfs_ioctl+0x3aa/0x580
Jan 12 00:30:35 kvm-001 kernel: [] ? sys_futex+0x7b/0x170
Jan 12 00:30:35 kvm-001 kernel: [] ? sys_ioctl+0x81/0xa0
Jan 12 00:30:35 kvm-001 kernel: [] ? system_call_fastpath+0x16/0x1b
Jan 12 00:30:35 kvm-001 kernel: ---[ end trace 4b636dcb9348f551 ]---

System 2)

Jan 11 23:36:47 kvm-002 kernel: ------------[ cut here ]------------
Jan 11 23:36:47 kvm-002 kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted)
Jan 11 23:36:47 kvm-002 kernel: Hardware name: IBM System x3550 -[7978C2G]-
Jan 11 23:36:47 kvm-002 kernel: Modules linked in: ipmi_si ipmi_devintf ebtable_nat ebtables sunrpc bridge stp llc ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 vhost_net macvtap macvlan tun kvm_intel kvm bnx2 ses enclosure ibmpex ibmaem ipmi_msghandler ics932s401 i2c_i801 serio_raw sg iTCO_wdt iTCO_vendor_support i5000_edac edac_core i5k_amb ioatdma dca shpchp ext4 mbcache jbd2 sd_mod crc_t10dif sr_mod cdrom aacraid pata_acpi ata_generic ata_piix radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core dm_mod [last unloaded: scsi_wait_scan]
Jan 11 23:36:47 kvm-002 kernel: Pid: 1772, comm: kvm-pit-wq Not tainted 2.6.32-220.2.1.el6.x86_64 #1
Jan 11 23:36:47 kvm-002 kernel: Call Trace:
Jan 11 23:36:47 kvm-002 kernel: [] ? warn_slowpath_common+0x87/0xc0
Jan 11 23:36:47 kvm-002 kernel: [] ? warn_slowpath_null+0x1a/0x20
Jan 11 23:36:47 kvm-002 kernel: [] ? thread_return+0x232/0x79d
Jan 11 23:36:47 kvm-002 kernel: [] ? prepare_to_wait+0x4e/0x80
Jan 11 23:36:47 kvm-002 kernel: [] ? pit_do_work+0x0/0xf0 [kvm]
Jan 11 23:36:47 kvm-002 kernel: [] ? worker_thread+0x1fc/0x2a0
Jan 11 23:36:47 kvm-002 kernel: [] ? autoremove_wake_function+0x0/0x40
Jan 11 23:36:47 kvm-002 kernel: [] ? worker_thread+0x0/0x2a0
Jan 11 23:36:47 kvm-002 kernel: [] ? kthread+0x96/0xa0
Jan 11 23:36:47 kvm-002 kernel: [] ? child_rip+0xa/0x20
Jan 11 23:36:47 kvm-002 kernel: [] ? kthread+0x0/0xa0
Jan 11 23:36:47 kvm-002 kernel: [] ? child_rip+0x0/0x20
Jan 11 23:36:47 kvm-002 kernel: ---[ end trace 1d94fe6e30260f41 ]---

Any ideas?

Thanks.

Best regards,
Morten

From amyagi at gmail.com  Thu Jan 12 19:36:55 2012
From: amyagi at gmail.com (Akemi Yagi)
Date: Thu, 12 Jan 2012 11:36:55 -0800
Subject: [rhelv6-list] Kernel 2.6.32-220.2.1 KVM bug?
In-Reply-To: <62b330e99254da665af02b69d485b940@imt-systems.com>
References: <62b330e99254da665af02b69d485b940@imt-systems.com>
Message-ID:

On Thu, Jan 12, 2012 at 10:53 AM, Morten Stevens wrote:
> Hi,
>
> After updating to 6.2, I see the following errors on two different
> systems. Both servers are running as KVM hypervisors with multiple Linux
> guests.
>
> System 1)
>
> Jan 12 00:30:35 kvm-001 kernel: ------------[ cut here ]------------
> Jan 12 00:30:35 kvm-001 kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted)
>
> System 2)
>
> Jan 11 23:36:47 kvm-002 kernel: ------------[ cut here ]------------
> Jan 11 23:36:47 kvm-002 kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted)
>
> Any ideas?

Maybe related to this:

https://bugzilla.redhat.com/show_bug.cgi?id=770228

Akemi

From imusayev at webmd.net  Thu Jan 12 19:54:09 2012
From: imusayev at webmd.net (Musayev, Ilya)
Date: Thu, 12 Jan 2012 14:54:09 -0500
Subject: [rhelv6-list] RHEL6.2 Kernel/EXT4 bug
In-Reply-To: <4F0F2922.8060303@cisco.com>
References: <4F0F2922.8060303@cisco.com>
Message-ID:

I guess I can break the RAID and try again on a single drive. I will let
you know what happens.

Did you actually do the 5000MB test with iozone?
My 100MB and 1000MB runs are fine; only when I go into the larger 5000MB
range with iozone do I start having issues. I could probably narrow it
down and find the exact break point, but I think it should not matter -
this should not happen at all, and it does not occur with XFS. At this
point I'm leaning more toward XFS, as I get better or on-par metrics
versus EXT4 without any issues.

I'm also curious as to why your I/O scheduler was set to cfq; if I recall
correctly, noop should have been the default.

-----Original Message-----
From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Brian Long
Sent: Thursday, January 12, 2012 1:41 PM
To: rhelv6-list at redhat.com
Subject: Re: [rhelv6-list] RHEL6.2 Kernel/EXT4 bug

> On 1/12/12 12:14 PM, Musayev, Ilya wrote:
>> Curious if anyone has seen this in their RHEL 6.2 setups. If you have
>> 6.1 or 6.2, please try this out and see what happens. The list of
>> commands to reproduce is below; the latest iozone is required.
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=773377
>
> I put the same kernel on my RHEL 6.2 workstation with a single drive and
> ran iozone with the same parameters. I don't have the drive mirrored,
> and I had to change the scheduler to noop since it was cfq by default.
>
> The only partition I had with enough free space is encrypted, so kcryptd
> was taking 100% CPU while running iozone. Have you narrowed it down to
> md-only? What happens if you run the same test on just one of your
> drives?

From imusayev at webmd.net  Thu Jan 12 20:11:07 2012
From: imusayev at webmd.net (Musayev, Ilya)
Date: Thu, 12 Jan 2012 15:11:07 -0500
Subject: [rhelv6-list] redhat bugzilla - marking bugz private
Message-ID:

Red Hat, why mark bugs as private, denying access to 99.5% of the user
base? On several occasions, BZs that I've opened became private - I never
asked for it.

In order for me to get the content of a BZ marked private, I have to open
a support case with Red Hat and waste my time and support's time on
something as stupid as this.

Maybe it's easier to put a note on Bugzilla saying "PLEASE DON'T POST
PRIVATE/CONFIDENTIAL INFORMATION - THIS IS A PUBLICLY VIEWED RESOURCE"? Or
ask the bug reporter to make the bug private from the get-go?

*If you really want to make a bug private - open a Red Hat support case,
not a public BZ.*

If you happen to agree with what is mentioned here, please voice your
opinion - maybe someone from Red Hat reads it and a miracle happens.

PS: Most of the BZs referenced in release notes are marked private, with a
faint description of what could have been fixed; since the description is
very brief, it makes me request BZ details over and over again.

From brilong at cisco.com  Thu Jan 12 20:12:20 2012
From: brilong at cisco.com (Brian Long)
Date: Thu, 12 Jan 2012 15:12:20 -0500
Subject: [rhelv6-list] RHEL6.2 Kernel/EXT4 bug
In-Reply-To:
References:
Message-ID: <4F0F3EA4.2090202@cisco.com>

I responded too quickly. I thought it was finished, but it is still
running. I thought RHEL used cfq by default.

/Brian/

On 1/12/12 2:54 PM, Musayev, Ilya wrote:
> I guess I can break the RAID and try again on a single drive. I will let
> you know what happens.
>
> Did you actually do the 5000MB test with iozone?
>
> My 100MB and 1000MB runs are fine; only when I go into the larger 5000MB
> range with iozone do I start having issues. I could probably narrow it
> down and find the exact break point, but I think it should not matter -
> this should not happen at all, and it does not occur with XFS. At this
> point I'm leaning more toward XFS, as I get better or on-par metrics
> versus EXT4 without any issues.
>
> I'm also curious as to why your I/O scheduler was set to cfq; if I recall
> correctly, noop should have been the default.
>
>> I put the same kernel on my RHEL 6.2 workstation with a single drive and
>> ran iozone with the same parameters. I don't have the drive mirrored,
>> and I had to change the scheduler to noop since it was cfq by default.
>>
>> The only partition I had with enough free space is encrypted, so kcryptd
>> was taking 100% CPU while running iozone. Have you narrowed it down to
>> md-only? What happens if you run the same test on just one of your
>> drives?
> > I got a kernel oops early on, but no ext4 errors: > Jan 12 12:51:26 brilong-lnx2 kernel: ------------[ cut here ]------------ Jan 12 12:51:26 brilong-lnx2 kernel: WARNING: at kernel/sched.c:5914 > thread_return+0x232/0x79d() (Not tainted) Jan 12 12:51:26 brilong-lnx2 kernel: Hardware name: IBM System x3200 > -[4362PAY]- > Jan 12 12:51:26 brilong-lnx2 kernel: Modules linked in: autofs4 sunrpc cpufreq_ondemand acpi_cpufreq freq_table mperf ipv6 ipt_REJECT > nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack iptable_filter ip_tables sha256_generic cryptd aes_x86_64 aes_generic cbc dm_crypt uinput sg microcode serio_raw i2c_i801 iTCO_wdt iTCO_vendor_support tg3 i3000_edac edac_core ext4 mbcache jbd2 sd_mod crc_t10dif sr_mod cdrom pata_acpi ata_generic ata_piix radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core dm_mirror dm_region_hash dm_log dm_mod [last > unloaded: scsi_wait_scan] > Jan 12 12:51:26 brilong-lnx2 kernel: Pid: 23, comm: kblockd/1 Not tainted 2.6.32-220.2.1.el6.x86_64 #1 Jan 12 12:51:26 brilong-lnx2 kernel: Call Trace: > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > warn_slowpath_common+0x87/0xc0 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > warn_slowpath_null+0x1a/0x20 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > thread_return+0x232/0x79d > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > blk_unplug_work+0x0/0x70 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > blk_unplug_work+0x0/0x70 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > worker_thread+0x1fc/0x2a0 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > autoremove_wake_function+0x0/0x40 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > worker_thread+0x0/0x2a0 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > kthread+0x96/0xa0 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > child_rip+0xa/0x20 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? kthread+0x0/0xa0 Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > child_rip+0x0/0x20 > Jan 12 12:51:26 brilong-lnx2 kernel: ---[ end trace aeef27db2e12775f ]--- > > /Brian/ -- Brian Long | | Corporate Security Programs Org . | | | . | | | . ' ' C I S C O From linux at alteeve.com Thu Jan 12 20:25:13 2012 From: linux at alteeve.com (Digimer) Date: Thu, 12 Jan 2012 15:25:13 -0500 Subject: [rhelv6-list] redhat bugzilla - marking bugz private In-Reply-To: References: Message-ID: <4F0F41A9.5080803@alteeve.com> On 01/12/2012 03:11 PM, Musayev, Ilya wrote: > RedHat, why marking bugs as private - and denying access to 99.5% of its > user base. On several occasions, BZ that I?ve opened became private ? I > never asked for it. These are usually bugs with private client data in it. Red Hat has a duty to protect the privacy of those clients, so it's a reasonable default position to make those bugs private. > In order for me to get the content of marked private BZ, i have to open > a support case with RH and waste my time and support time on something > as stupid as this. It's not really stupid when you consider privacy. > Maybe its easier to make a note on BugZilla and just say "PLEASE DONT > POST PRIVATE/CONFIDENTIAL INFORMATION ? THIS IS A PUBLICLY VIEWED > RESOURCE"? or ask a bug reporter to make this bug private from the get go? > > *If you really want to make a bug private ? open a redhat support case > and not public BZ.* > > If you happen to agree with what is mentioned here, please voice your > opinion ? maybe someone from RedHat reads it and miracle happens. > > PS: Most of BZ referenced in release notes are marked private with a > faint description of what could have been fixed ? 
however since > description is very brief, it makes me request BZ details over and over > again. As a user, I admit it can be frustrating to follow a link to a closed bug. However, I understand their position. When they identify a problem, they generally create a public page explaining the fault, it's fix and what versions of RPMs the problem was resolved in. Personally, I think this is reasonable. -- Digimer E-Mail: digimer at alteeve.com Freenode handle: digimer Papers and Projects: http://alteeve.com Node Assassin: http://nodeassassin.org "omg my singularity battery is dead again. stupid hawking radiation." - epitron From mstevens at imt-systems.com Thu Jan 12 20:37:09 2012 From: mstevens at imt-systems.com (Morten Stevens) Date: Thu, 12 Jan 2012 21:37:09 +0100 Subject: [rhelv6-list] =?utf-8?q?Kernel_2=2E6=2E32-220=2E2=2E1_KVM_bug=3F?= In-Reply-To: References: <62b330e99254da665af02b69d485b940@imt-systems.com> Message-ID: On 12.01.2012 20:36, Akemi Yagi wrote: > On Thu, Jan 12, 2012 at 10:53 AM, Morten Stevens > wrote: >> Hi, >> >> After updating to 6.2 I see the following error on two different >> systems: >> >> Both servers are running as KVM hypvervisor with multiple linux >> guests. >> >> System 1) >> >> Jan 12 00:30:35 kvm-001 kernel: ------------[ cut here ]------------ >> Jan 12 00:30:35 kvm-001 kernel: WARNING: at kernel/sched.c:5914 >> thread_return+0x232/0x79d() (Not tainted) > >> System 2) >> >> Jan 11 23:36:47 kvm-002 kernel: ------------[ cut here ]------------ >> Jan 11 23:36:47 kvm-002 kernel: WARNING: at kernel/sched.c:5914 >> thread_return+0x232/0x79d() (Not tainted) > >> Any ideas? > > Maybe related to this: > > https://bugzilla.redhat.com/show_bug.cgi?id=770228 Hi Akemi, Yes, you're right. I see the same bug on kvm guests with kernel 2.6.32-220.2.1. For example: Jan 12 18:10:58 mail kernel: ------------[ cut here ]------------ Jan 12 18:10:58 mail kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted) Jan 12 18:10:58 mail kernel: Hardware name: KVM Jan 12 18:10:58 mail kernel: Modules linked in: xt_multiport ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 virtio_balloon virtio_net i2c_piix4 i2c_core sg ext4 mbcache jbd2 virtio_blk sr_mod cdrom virtio_pci virtio_ring virtio pata_acpi ata_generic ata_piix dm_mod [last unloaded: scsi_wait_scan] Jan 12 18:10:58 mail kernel: Pid: 19535, comm: imap-login Not tainted 2.6.32-220.2.1.el6.x86_64 #1 Jan 12 18:10:58 mail kernel: Call Trace: Jan 12 18:10:58 mail kernel: [] ? warn_slowpath_common+0x87/0xc0 Jan 12 18:10:58 mail kernel: [] ? warn_slowpath_null+0x1a/0x20 Jan 12 18:10:58 mail kernel: [] ? thread_return+0x232/0x79d Jan 12 18:10:58 mail kernel: [] ? schedule_timeout+0x192/0x2e0 Jan 12 18:10:58 mail kernel: [] ? process_timeout+0x0/0x10 Jan 12 18:10:58 mail kernel: [] ? sys_epoll_wait+0x239/0x300 Jan 12 18:10:58 mail kernel: [] ? default_wake_function+0x0/0x20 Jan 12 18:10:58 mail kernel: [] ? 
system_call_fastpath+0x16/0x1b Jan 12 18:10:58 mail kernel: ---[ end trace f10d5daaadbce260 ]--- Jan 12 17:58:07 acsinet20 kernel: ------------[ cut here ]------------ Jan 12 17:58:07 acsinet20 kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted) Jan 12 17:58:07 acsinet20 kernel: Hardware name: KVM Jan 12 17:58:07 acsinet20 kernel: Modules linked in: ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 virtio_balloon virtio_net i2c_piix4 i2c_core sg ext4 mbcache jbd2 virtio_blk sr_mod cdrom virtio_pci virtio_ring virtio pata_acpi ata_generic ata_piix dm_mod [last unloaded: scsi_wait_scan] Jan 12 17:58:07 acsinet20 kernel: Pid: 1268, comm: fail2ban-server Not tainted 2.6.32-220.2.1.el6.x86_64 #1 Jan 12 17:58:07 acsinet20 kernel: Call Trace: Jan 12 17:58:07 acsinet20 kernel: [] ? warn_slowpath_common+0x87/0xc0 Jan 12 17:58:07 acsinet20 kernel: [] ? warn_slowpath_null+0x1a/0x20 Jan 12 17:58:07 acsinet20 kernel: [] ? thread_return+0x232/0x79d Jan 12 17:58:07 acsinet20 kernel: [] ? __hrtimer_start_range_ns+0x1a3/0x460 Jan 12 17:58:07 acsinet20 kernel: [] ? lock_hrtimer_base+0x31/0x60 Jan 12 17:58:07 acsinet20 kernel: [] ? pvclock_clocksource_read+0x5c/0xd0 Jan 12 17:58:07 acsinet20 kernel: [] ? schedule_hrtimeout_range+0xc8/0x160 Jan 12 17:58:07 acsinet20 kernel: [] ? hrtimer_wakeup+0x0/0x30 Jan 12 17:58:07 acsinet20 kernel: [] ? hrtimer_start_range_ns+0x14/0x20 Jan 12 17:58:07 acsinet20 kernel: [] ? poll_schedule_timeout+0x39/0x60 Jan 12 17:58:07 acsinet20 kernel: [] ? do_select+0x578/0x6b0 Jan 12 17:58:07 acsinet20 kernel: [] ? invalidate_interrupt0+0xe/0x20 Jan 12 17:58:07 acsinet20 kernel: [] ? __pollwait+0x0/0xf0 Jan 12 17:58:07 acsinet20 kernel: [] ? flat_send_IPI_mask+0x68/0x90 Jan 12 17:58:07 acsinet20 kernel: [] ? flush_tlb_others_ipi+0x128/0x130 Jan 12 17:58:07 acsinet20 kernel: [] ? native_flush_tlb_others+0x76/0x90 Jan 12 17:58:07 acsinet20 kernel: [] ? cpumask_any_but+0x31/0x50 Jan 12 17:58:07 acsinet20 kernel: [] ? flush_tlb_page+0x48/0xb0 Jan 12 17:58:07 acsinet20 kernel: [] ? ptep_set_access_flags+0x6d/0x70 Jan 12 17:58:07 acsinet20 kernel: [] ? do_wp_page+0x44b/0x8d0 Jan 12 17:58:07 acsinet20 kernel: [] ? handle_pte_fault+0x2cd/0xb50 Jan 12 17:58:07 acsinet20 kernel: [] ? apic_timer_interrupt+0xe/0x20 Jan 12 17:58:07 acsinet20 kernel: [] ? apic_timer_interrupt+0xe/0x20 Jan 12 17:58:07 acsinet20 kernel: [] ? core_sys_select+0x18a/0x2c0 Jan 12 17:58:07 acsinet20 kernel: [] ? handle_mm_fault+0x1e4/0x2b0 Jan 12 17:58:07 acsinet20 kernel: [] ? call_rcu+0xe/0x10 Jan 12 17:58:07 acsinet20 kernel: [] ? pvclock_clocksource_read+0x58/0xd0 Jan 12 17:58:07 acsinet20 kernel: [] ? pvclock_clocksource_read+0x58/0xd0 Jan 12 17:58:07 acsinet20 kernel: [] ? kvm_clock_read+0x1c/0x20 Jan 12 17:58:07 acsinet20 kernel: [] ? kvm_clock_get_cycles+0x9/0x10 Jan 12 17:58:07 acsinet20 kernel: [] ? ktime_get_ts+0xa9/0xe0 Jan 12 17:58:07 acsinet20 kernel: [] ? sys_select+0x47/0x110 Jan 12 17:58:07 acsinet20 kernel: [] ? system_call_fastpath+0x16/0x1b Jan 12 17:58:07 acsinet20 kernel: ---[ end trace 3d099e90a36a28d0 ]--- Best regards, Morten From rprice at redhat.com Thu Jan 12 20:56:51 2012 From: rprice at redhat.com (Robin Price II) Date: Thu, 12 Jan 2012 15:56:51 -0500 Subject: [rhelv6-list] Kernel 2.6.32-220.2.1 KVM bug? 
In-Reply-To: <62b330e99254da665af02b69d485b940@imt-systems.com> References: <62b330e99254da665af02b69d485b940@imt-systems.com> Message-ID: <4F0F4913.8060303@redhat.com> Searching the kbase, it looks like we have something for customer to reference, hope this helps. https://access.redhat.com/kb/docs/DOC-68014 ~rp On 01/12/2012 01:53 PM, Morten Stevens wrote: > Hi, > > After updating to 6.2 I see the following error on two different systems: > > Both servers are running as KVM hypvervisor with multiple linux guests. > > System 1) > > Jan 12 00:30:35 kvm-001 kernel: ------------[ cut here ]------------ > Jan 12 00:30:35 kvm-001 kernel: WARNING: at kernel/sched.c:5914 > thread_return+0x232/0x79d() (Not tainted) > Jan 12 00:30:35 kvm-001 kernel: Hardware name: IBM System x3550 -[7978C2G]- > Jan 12 00:30:35 kvm-001 kernel: Modules linked in: ebtable_nat ebtables > sunrpc bridge stp llc ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 > iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 > xt_state nf_conntrack ip6table_filter ip6_tables ipv6 vhost_net macvtap > macvlan tun kvm_intel kvm bnx2 ses enclosure ibmpex ibmaem > ipmi_msghandler ics932s401 sg serio_raw i2c_i801 iTCO_wdt > iTCO_vendor_support i5000_edac edac_core i5k_amb ioatdma dca shpchp ext4 > mbcache jbd2 sd_mod crc_t10dif sr_mod cdrom aacraid pata_acpi > ata_generic ata_piix radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core > dm_mod [last unloaded: scsi_wait_scan] > Jan 12 00:30:35 kvm-001 kernel: Pid: 1572, comm: qemu-kvm Not tainted > 2.6.32-220.2.1.el6.x86_64 #1 > Jan 12 00:30:35 kvm-001 kernel: Call Trace: > Jan 12 00:30:35 kvm-001 kernel: [] ? > warn_slowpath_common+0x87/0xc0 > Jan 12 00:30:35 kvm-001 kernel: [] ? > warn_slowpath_null+0x1a/0x20 > Jan 12 00:30:35 kvm-001 kernel: [] ? > thread_return+0x232/0x79d > Jan 12 00:30:35 kvm-001 kernel: [] ? > vmx_vcpu_load+0x89/0x130 [kvm_intel] > Jan 12 00:30:35 kvm-001 kernel: [] ? > __vmx_load_host_state+0xe6/0x100 [kvm_intel] > Jan 12 00:30:35 kvm-001 kernel: [] ? > kvm_vcpu_block+0x75/0xc0 [kvm] > Jan 12 00:30:35 kvm-001 kernel: [] ? > autoremove_wake_function+0x0/0x40 > Jan 12 00:30:35 kvm-001 kernel: [] ? > kvm_arch_vcpu_ioctl_run+0x49c/0xf10 [kvm] > Jan 12 00:30:35 kvm-001 kernel: [] ? > kvm_vcpu_ioctl+0x522/0x670 [kvm] > Jan 12 00:30:35 kvm-001 kernel: [] ? > default_wake_function+0x12/0x20 > Jan 12 00:30:35 kvm-001 kernel: [] ? pollwake+0x56/0x60 > Jan 12 00:30:35 kvm-001 kernel: [] ? > default_wake_function+0x0/0x20 > Jan 12 00:30:35 kvm-001 kernel: [] ? > __wake_up_common+0x59/0x90 > Jan 12 00:30:35 kvm-001 kernel: [] ? > __dequeue_entity+0x30/0x50 > Jan 12 00:30:35 kvm-001 kernel: [] ? vfs_ioctl+0x22/0xa0 > Jan 12 00:30:35 kvm-001 kernel: [] ? > __wake_up_locked_key+0x18/0x20 > Jan 12 00:30:35 kvm-001 kernel: [] ? > eventfd_write+0x193/0x1d0 > Jan 12 00:30:35 kvm-001 kernel: [] ? > do_vfs_ioctl+0x3aa/0x580 > Jan 12 00:30:35 kvm-001 kernel: [] ? sys_futex+0x7b/0x170 > Jan 12 00:30:35 kvm-001 kernel: [] ? sys_ioctl+0x81/0xa0 > Jan 12 00:30:35 kvm-001 kernel: [] ? 
> system_call_fastpath+0x16/0x1b > Jan 12 00:30:35 kvm-001 kernel: ---[ end trace 4b636dcb9348f551 ]--- > > System 2) > > Jan 11 23:36:47 kvm-002 kernel: ------------[ cut here ]------------ > Jan 11 23:36:47 kvm-002 kernel: WARNING: at kernel/sched.c:5914 > thread_return+0x232/0x79d() (Not tainted) > Jan 11 23:36:47 kvm-002 kernel: Hardware name: IBM System x3550 -[7978C2G]- > Jan 11 23:36:47 kvm-002 kernel: Modules linked in: ipmi_si ipmi_devintf > ebtable_nat ebtables sunrpc bridge stp llc ipt_REJECT nf_conntrack_ipv4 > nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 > nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 > vhost_net macvtap macvlan tun kvm_intel kvm bnx2 ses enclosure ibmpex > ibmaem ipmi_msghandler ics932s401 i2c_i801 serio_raw sg iTCO_wdt > iTCO_vendor_support i5000_edac edac_core i5k_amb ioatdma dca shpchp ext4 > mbcache jbd2 sd_mod crc_t10dif sr_mod cdrom aacraid pata_acpi > ata_generic ata_piix radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core > dm_mod [last unloaded: scsi_wait_scan] > Jan 11 23:36:47 kvm-002 kernel: Pid: 1772, comm: kvm-pit-wq Not tainted > 2.6.32-220.2.1.el6.x86_64 #1 > Jan 11 23:36:47 kvm-002 kernel: Call Trace: > Jan 11 23:36:47 kvm-002 kernel: [] ? > warn_slowpath_common+0x87/0xc0 > Jan 11 23:36:47 kvm-002 kernel: [] ? > warn_slowpath_null+0x1a/0x20 > Jan 11 23:36:47 kvm-002 kernel: [] ? > thread_return+0x232/0x79d > Jan 11 23:36:47 kvm-002 kernel: [] ? > prepare_to_wait+0x4e/0x80 > Jan 11 23:36:47 kvm-002 kernel: [] ? > pit_do_work+0x0/0xf0 [kvm] > Jan 11 23:36:47 kvm-002 kernel: [] ? > worker_thread+0x1fc/0x2a0 > Jan 11 23:36:47 kvm-002 kernel: [] ? > autoremove_wake_function+0x0/0x40 > Jan 11 23:36:47 kvm-002 kernel: [] ? > worker_thread+0x0/0x2a0 > Jan 11 23:36:47 kvm-002 kernel: [] ? kthread+0x96/0xa0 > Jan 11 23:36:47 kvm-002 kernel: [] ? child_rip+0xa/0x20 > Jan 11 23:36:47 kvm-002 kernel: [] ? kthread+0x0/0xa0 > Jan 11 23:36:47 kvm-002 kernel: [] ? child_rip+0x0/0x20 > Jan 11 23:36:47 kvm-002 kernel: ---[ end trace 1d94fe6e30260f41 ]--- > > Any ideas? > > Thanks. > > Best regards, > > Morten > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list -- +-----------------------------[ robin at redhat.com ]----+ | Robin Price II - RHCE,RHCDS,RHCVA | | Inside Solutions Architect | | Red Hat, Inc. | | w: +1 (919) 754 4412 | | c: +1 (252) 474 3525 | | | +---------[ http://people.redhat.com/rprice ]---------+ From hescominsoon at emmanuelcomputerconsulting.com Thu Jan 12 21:30:30 2012 From: hescominsoon at emmanuelcomputerconsulting.com (William Warren) Date: Thu, 12 Jan 2012 16:30:30 -0500 Subject: [rhelv6-list] Restricting bugzillas Message-ID: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> RedHat used to leave its bugzilla mostly open. I am seeing more and more closed bugzilla entries even to registered accounts. Is this part of trying to hide things from oracle? From imusayev at webmd.net Thu Jan 12 21:56:49 2012 From: imusayev at webmd.net (Musayev, Ilya) Date: Thu, 12 Jan 2012 16:56:49 -0500 Subject: [rhelv6-list] Restricting bugzillas In-Reply-To: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> References: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> Message-ID: I don't know how much they going to hide - there are many ways to get the info if you are oracle - all you need is a 3rd party/proxy. BTW, its not only oracle, its also suse, centos, scientific linux and many others. 
-----Original Message----- From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of William Warren Sent: Thursday, January 12, 2012 4:31 PM To: rhelv6-list at redhat.com Subject: [rhelv6-list] Restricting bugzillas RedHat used to leave its bugzilla mostly open. I am seeing more and more closed bugzilla entries even to registered accounts. Is this part of trying to hide things from oracle? _______________________________________________ rhelv6-list mailing list rhelv6-list at redhat.com https://www.redhat.com/mailman/listinfo/rhelv6-list From geslinux at gmail.com Thu Jan 12 22:19:59 2012 From: geslinux at gmail.com (Gescape) Date: Thu, 12 Jan 2012 22:19:59 +0000 Subject: [rhelv6-list] Restricting bugzillas In-Reply-To: References: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> Message-ID: <1326406799.2407.9.camel@aspire.localdomain> In my opinion, even Red Hat closing the KB - I mean that you need to log in to access the articles - was and is not good. Without open source, Linux and GNU, there would probably be no Red Hat today. Saying that they created Fedora to give something back to the community would not be "politically" correct either, I think; without the community and open source there would be no Fedora either. I think they should try to make it even more open rather than closing it further. The more knowledge you can get about a product, the more likely you are to go for it, I think. If there is a bug, someone will find it sooner or later anyway, so I think it is better to say "yes, we know and we are working on it" - and even "maybe you can help us to fix it quicker" - rather than pretend that "we did not know" when the competition has already fixed it. Ges On Thu, 2012-01-12 at 16:56 -0500, Musayev, Ilya wrote: > I don't know how much they going to hide - there are many ways to get the info if you are oracle - all you need is a 3rd party/proxy.
> > BTW, its not only oracle, its also suse, centos, scientific linux and many others. > > -----Original Message----- > From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of William Warren > Sent: Thursday, January 12, 2012 4:31 PM > To: rhelv6-list at redhat.com > Subject: [rhelv6-list] Restricting bugzillas > > RedHat used to leave its bugzilla mostly open. I am seeing more and more closed bugzilla entries even to registered accounts. Is this part of trying to hide things from oracle? > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list > > > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list From imusayev at webmd.net Thu Jan 12 22:47:08 2012 From: imusayev at webmd.net (Musayev, Ilya) Date: Thu, 12 Jan 2012 17:47:08 -0500 Subject: [rhelv6-list] Restricting bugzillas In-Reply-To: <1326406799.2407.9.camel@aspire.localdomain> References: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> <1326406799.2407.9.camel@aspire.localdomain> Message-ID: Ges, I'm with you. But RedHat pays people to do QA and dev work in order to address the bugs. Once RedHat makes a release, all the freeloaders jump in and copy and paste their code (minus the RedHat trademarks). I do support the recent change to how RHEL6 source code is released - it makes it harder (though not by much) for Oracle to rip and integrate - but I don't see how hiding the bugs in Bugzilla benefits RedHat. -----Original Message----- From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Gescape Sent: Thursday, January 12, 2012 5:20 PM To: rhelv6-list at redhat.com Subject: Re: [rhelv6-list] Restricting bugzillas In my opinion, even Red Hat closing the KB - I mean that you need to log in to access the articles - was and is not good. Without open source, Linux and GNU, there would probably be no Red Hat today. Saying that they created Fedora to give something back to the community would not be "politically" correct either, I think; without the community and open source there would be no Fedora either. I think they should try to make it even more open rather than closing it further. The more knowledge you can get about a product, the more likely you are to go for it, I think. If there is a bug, someone will find it sooner or later anyway, so I think it is better to say "yes, we know and we are working on it" - and even "maybe you can help us to fix it quicker" - rather than pretend that "we did not know" when the competition has already fixed it. Ges On Thu, 2012-01-12 at 16:56 -0500, Musayev, Ilya wrote: > I don't know how much they going to hide - there are many ways to get the info if you are oracle - all you need is a 3rd party/proxy.
> > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list > > > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list From cmadams at hiwaay.net Thu Jan 12 23:12:55 2012 From: cmadams at hiwaay.net (Chris Adams) Date: Thu, 12 Jan 2012 17:12:55 -0600 Subject: [rhelv6-list] Restricting bugzillas In-Reply-To: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> References: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> Message-ID: <20120112231255.GA12083@hiwaay.net> Once upon a time, William Warren said: > RedHat used to leave its bugzilla mostly open. I am seeing more and > more closed bugzilla entries even to registered accounts. Is this part > of trying to hide things from oracle? I think it is more that they actually use BZ more for customer-related stuff than they used to, and if a bug has customer-related information, it will be private. -- Chris Adams Systems and Network Administrator - HiWAAY Internet Services I don't speak for anybody but myself - that's enough trouble. From mlist at lubrical.net Thu Jan 12 23:55:07 2012 From: mlist at lubrical.net (Tim) Date: Fri, 13 Jan 2012 10:55:07 +1100 Subject: [rhelv6-list] Restricting bugzillas In-Reply-To: <20120112231255.GA12083@hiwaay.net> References: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> <20120112231255.GA12083@hiwaay.net> Message-ID: <9bd00ea57ebc4bbaeb666e366ebc19fe.squirrel@lubrical.net> > Once upon a time, William Warren > said: >> RedHat used to leave its bugzilla mostly open. I am seeing more and >> more closed bugzilla entries even to registered accounts. Is this part >> of trying to hide things from oracle? > > I think it is more that they actually use BZ more for customer-related > stuff than they used to, and if a bug has customer-related information, it > will be private. > -- > Chris Adams > Systems and Network Administrator - HiWAAY Internet Services > I don't speak for anybody but myself - that's enough trouble. > I'm with Chris on this one. I don't think RedHat are making BZ entries private to hide bugs from "competitors". I suspect it's because most of the bugs are logged by customers and contain private customer data that the customer would prefer not to be made public. In an ideal world RedHat would make a new public bug with just the description and no customer data, but I suspect that would be impractical. If you really want to know about a bug, call your TAM or log a support case. I'm sure they will provide the details you require. -- Tim From mail-lists at karan.org Fri Jan 13 01:00:09 2012 From: mail-lists at karan.org (Karanbir Singh) Date: Fri, 13 Jan 2012 01:00:09 +0000 Subject: [rhelv6-list] Restricting bugzillas In-Reply-To: <9bd00ea57ebc4bbaeb666e366ebc19fe.squirrel@lubrical.net> References: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> <20120112231255.GA12083@hiwaay.net> <9bd00ea57ebc4bbaeb666e366ebc19fe.squirrel@lubrical.net> Message-ID: <4F0F8219.3060204@karan.org> On 01/12/2012 11:55 PM, Tim wrote: > I'm with Chris on this one. I don't think RedHat are making BZ entries > private to hide bugs from "competitors". I suspect it's because most of > the bugs are logged by customers and contain private customer data that > the customer would prefer not to be made public. ... would it then not also be great if just that specific info were removed/replaced/anonymized rather than having the entire issue locked away? This might not always be possible, but in a large number of cases I'm guessing that it can be done. Would that also somewhat reduce the number of dupes being filed? > In an ideal world RedHat would make a new public bug with just the > description and no customer data, but I suspect that would be > impractical. I agree; the people who maintain the BZs often tend to be the people doing the work behind them as well, and keeping things as simple as possible for them is good! > If you really want to know about a bug, call your TAM or log a support > case. I'm sure they will provide the details you require. Also, once a specific bit of code resolves, works around, or suppresses an issue, the corresponding issue should be public and available. That does not always happen, and it would be nice if it did. -- Karanbir Singh +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh ICQ: 2522219 | Yahoo IM: z00dax | Gtalk: z00dax GnuPG Key : http://www.karan.org/publickey.asc From rda at rincon.com Fri Jan 13 05:27:46 2012 From: rda at rincon.com (Bob Arendt) Date: Thu, 12 Jan 2012 22:27:46 -0700 Subject: [rhelv6-list] Restricting bugzillas In-Reply-To: <9bd00ea57ebc4bbaeb666e366ebc19fe.squirrel@lubrical.net> References: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> <20120112231255.GA12083@hiwaay.net> <9bd00ea57ebc4bbaeb666e366ebc19fe.squirrel@lubrical.net> Message-ID: <4F0FC0D2.1030804@rincon.com> On 01/12/2012 04:55 PM, Tim wrote: >> Once upon a time, William Warren >> said: >>> RedHat used to leave its bugzilla mostly open. I am seeing more and >>> more closed bugzilla entries even to registered accounts. Is this part >>> of trying to hide things from oracle? >> >> I think it is more that they actually use BZ more for customer-related >> stuff than they used to, and if a bug has customer-related information, it >> will be private. >> -- >> Chris Adams >> Systems and Network Administrator - HiWAAY Internet Services >> I don't speak for anybody but myself - that's enough trouble. >> > I'm with Chris on this one. I don't think RedHat are making BZ entries > private to hide bugs from "competitors". I suspect it's because most of > the bugs are logged by customers and contain private customer data that > the customer would prefer not to be made public. > > In an ideal world RedHat would make a new public bug with just the > description and no customer data, but I suspect that would be > impractical. > > If you really want to know about a bug, call your TAM or log a support > case. I'm sure they will provide the details you require. > Whenever I log a support case, the corresponding BZ that RedHat files is marked private. I believe that they are assuming that any customer info submitted is potentially confidential, unless explicitly told otherwise; that's the cautious thing to do. To avoid this, I now always file a BZ first, and refer to it in the support case that I file immediately after. This ensures that the BZ is public, and RedHat can assume that it's not private information, since I published it in an open forum.
From rprice at redhat.com Fri Jan 13 15:06:43 2012 From: rprice at redhat.com (Robin Price II) Date: Fri, 13 Jan 2012 10:06:43 -0500 Subject: [rhelv6-list] Restricting bugzillas In-Reply-To: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> References: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> Message-ID: <4F104883.6030406@redhat.com> List, I don't know how much I can go into the details, but I can assure you, being a part of GSS, that the major concern has been customer privacy. I would take the suggestion from this list of contacting support or your TAM and seeing if they can give you the information in your case. Also, every case we get should be worked and logged. Many of the issues are being captured and have a kbase assigned to them. There is a very high chance that the BZ you are looking for has a kbase made for it as well. If you are looking for details in a BZ, try looking in our knowledge base also. GSS has done an amazing job capturing these cases and putting the work that was done towards the resolution (even if it's being tracked in a BZ) into a Knowledge Base article for all customers to reference. Hope this helps and TGIF. :) ~rp On 01/12/2012 04:30 PM, William Warren wrote: > RedHat used to leave its bugzilla mostly open. I am seeing more and more > closed bugzilla entries even to registered accounts. Is this part of > trying to hide things from oracle? > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list -- +-----------------------------[ robin at redhat.com ]----+ | Robin Price II - RHCE,RHCDS,RHCVA | | Inside Solutions Architect | | Red Hat, Inc. | | w: +1 (919) 754 4412 | | c: +1 (252) 474 3525 | | | +---------[ http://people.redhat.com/rprice ]---------+ From geslinux at gmail.com Fri Jan 13 20:58:38 2012 From: geslinux at gmail.com (Grzegorz Witkowski) Date: Fri, 13 Jan 2012 20:58:38 +0000 Subject: [rhelv6-list] Restricting bugzillas In-Reply-To: <4F0FC0D2.1030804@rincon.com> References: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> <20120112231255.GA12083@hiwaay.net> <9bd00ea57ebc4bbaeb666e366ebc19fe.squirrel@lubrical.net> <4F0FC0D2.1030804@rincon.com> Message-ID: On Fri, Jan 13, 2012 at 5:27 AM, Bob Arendt wrote: > On 01/12/2012 04:55 PM, Tim wrote: >> Once upon a time, William Warren >>> >>> said: >>> >>> RedHat used to leave its bugzilla mostly open. I am seeing more and >>>> more closed bugzilla entries even to registered accounts. Is this part >>>> of trying to hide things from oracle? >>> >>> I think it is more that they actually use BZ more for customer-related >>> stuff than they used to, and if a bug has customer-related information, it >>> will be private. >>> -- >>> Chris Adams >>> Systems and Network Administrator - HiWAAY Internet Services >>> I don't speak for anybody but myself - that's enough trouble. >>> >>> >> I'm with Chris on this one. I don't think RedHat are making BZ entries >> private to hide bugs from "competitors". I suspect it's because most of >> the bugs are logged by customers and contain private customer data that >> the customer would prefer not to be made public. >> >> In an ideal world RedHat would make a new public bug with just the >> description and no customer data, but I suspect that would be >> impractical. >> >> If you really want to know about a bug, call your TAM or log a support >> case. I'm sure they will provide the details you require. >> >> Whenever I log a support case, the corresponding BZ that RedHat files is > marked private. I believe that they are assuming that any customer info > submitted is potentially confidential, unless explicitly told otherwise; > that's the cautious thing to do. > > To avoid this, I now always file a BZ first, and refer to it in the > support case that I file immediately after. This ensures that the BZ > is public, and RedHat can assume that it's not private information, > since I published it in an open forum. > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list > Very good... wise and smart, I'd say :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From hescominsoon at emmanuelcomputerconsulting.com Fri Jan 13 21:22:50 2012 From: hescominsoon at emmanuelcomputerconsulting.com (William Warren) Date: Fri, 13 Jan 2012 16:22:50 -0500 Subject: [rhelv6-list] Restricting bugzillas In-Reply-To: <4F104883.6030406@redhat.com> References: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> <4F104883.6030406@redhat.com> Message-ID: That doesn't explain the sudden shift in marking BZs private by default for an open source product. Ever since the current CEO came in, RH has been getting more and more secretive, and this goes against the spirit of Open Source. On Fri, Jan 13, 2012 at 10:06 AM, Robin Price II wrote: > List, > > I don't know how much I can go into the details, but I can assure you, > being a part of GSS, that the major concern has been customer privacy. I > would take the suggestion from this list of contacting support or your TAM > and seeing if they can give you the information in your case. > > Also, every case we get should be worked and logged. Many of the issues > are being captured and have a kbase assigned to them. There is a very high > chance that the BZ you are looking for has a kbase made for it as well. If > you are looking for details in a BZ, try looking in our knowledge base > also. GSS has done an amazing job capturing these cases and putting the > work that was done towards the resolution (even if it's being tracked in a > BZ) into a Knowledge Base article for all customers to reference. > > Hope this helps and TGIF. :) > > > ~rp > > > > On 01/12/2012 04:30 PM, William Warren wrote: > >> RedHat used to leave its bugzilla mostly open. I am seeing more and more >> closed bugzilla entries even to registered accounts. Is this part of >> trying to hide things from oracle? >> >> _______________________________________________ >> rhelv6-list mailing list >> rhelv6-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhelv6-list >> > > > -- > +-----------------------------[ robin at redhat.com ]----+ > | Robin Price II - RHCE,RHCDS,RHCVA | > | Inside Solutions Architect | > | Red Hat, Inc. | > | w: +1 (919) 754 4412 | > | c: +1 (252) 474 3525 | > | | > +---------[ http://people.redhat.com/rprice ]---------+ > > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From linux at alteeve.com Fri Jan 13 21:28:59 2012 From: linux at alteeve.com (Digimer) Date: Fri, 13 Jan 2012 16:28:59 -0500 Subject: [rhelv6-list] Restricting bugzillas In-Reply-To: References: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> <4F104883.6030406@redhat.com> Message-ID: <4F10A21B.5020301@alteeve.com> On 01/13/2012 04:22 PM, William Warren wrote: > That doesn't explain the sudden shift in marking BZs private by default > for an open source product. Ever since the current CEO came in, RH has > been getting more and more secretive, and this goes against the spirit of > Open Source. There is a very serious responsibility placed on any company when receiving potentially private customer data. There is no conspiracy here; it's simple risk management. If they leaked private data, even by accident, they could be sued and/or lose clients. Consider that Red Hat powers many very large institutions: governments, banks, hospitals... This growing privacy focus is simply good risk management. As Robin mentioned in her email, Red Hat is going above and beyond what almost any other company would do to keep their clients informed. Particularly when they know it's used largely by folks who are *not* customers at all. I've been bitten by following a trail only to hit a wall with a private BZ. It's frustrating, for sure. However, it's also understandable. You can always open a ticket and get any relevant information you need to solve your problem. Alternatively, if you are not a customer, you can wait until the issue is resolved and find the solution in the knowledge base. -- Digimer E-Mail: digimer at alteeve.com Freenode handle: digimer Papers and Projects: http://alteeve.com Node Assassin: http://nodeassassin.org "omg my singularity battery is dead again. stupid hawking radiation." - epitron From Mc_Kiernan at Oeconomist.com Fri Jan 13 21:40:38 2012 From: Mc_Kiernan at Oeconomist.com (Mc Kiernan Daniel Kian) Date: Fri, 13 Jan 2012 13:40:38 -0800 Subject: [rhelv6-list] Restricting bugzillas In-Reply-To: References: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> <4F104883.6030406@redhat.com> Message-ID: <4F10A4D6.5050901@Oeconomist.com> On 01/13/2012 01:22 PM, William Warren wrote: > That doesn't explain the sudden shift in marking BZs private by default > for an open source product. Ever since the current CEO came in, RH has > been getting more and more secretive, and this goes against the spirit > of Open Source. Mr Arendt provided a perfectly sound protocol: ] I now always file a BZ first, and refer to it in the ] support case that I file immediately after. If it is observed that RH makes bug reports filed in this manner private, then you'll have a much better case for seeing malevolence. Until that time, let's assume that this is just an institution becoming more concerned with protecting those who file reports. From mail-lists at karan.org Sat Jan 14 09:27:44 2012 From: mail-lists at karan.org (Karanbir Singh) Date: Sat, 14 Jan 2012 09:27:44 +0000 Subject: [rhelv6-list] Restricting bugzillas In-Reply-To: References: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> <4F104883.6030406@redhat.com> Message-ID: <4F114A90.6010608@karan.org> On 01/13/2012 09:22 PM, William Warren wrote: > That doesn't explain the sudden shift in marking BZs private by default > for an open source product. Ever since the current CEO came in, RH has > been getting more and more secretive, and this goes against the spirit of > Open Source. is that really the case? I see thousands of BZs against RHEL6 publicly visible. Some were filed in the last day or so. What product do you see this against? -- Karanbir Singh +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh ICQ: 2522219 | Yahoo IM: z00dax | Gtalk: z00dax GnuPG Key : http://www.karan.org/publickey.asc From gianluca.cecchi at gmail.com Sat Jan 14 14:34:29 2012 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Sat, 14 Jan 2012 15:34:29 +0100 Subject: [rhelv6-list] KVM issues post RHEL6-1->6.2 update In-Reply-To: References: <4F0759BE.9080904@redhat.com> Message-ID: Possibly the answer is inside this thread; from a look at the source changes, it appears to be this one: http://web.archiveorange.com/archive/v/8XiUW68vUOENyUOzpfSt at the line [OP] Daniel P. Berrange Thu 12 May 2011 06:12:33 PM CEST " The migration protocol has support for a 'cookie' parameter which is an opaque array of bytes as far as libvirt is concerned. Drivers may use this for passing around arbitrary extra data they might need during migration. The QEMU driver needs to do a few things: - Pass hostname/uuid to allow strict protection against localhost migration attempts - Pass SPICE/VNC server port from the target back to the source to allow seamless relocation of client sessions - Pass lock driver state from source to destination This patch introduces the basic glue for handling cookies but only includes the host/guest UUID & name. " Any way/option to bypass the parameter? Or any way to easily understand where in libvirt this was added for RHEL 6? Thanks Gianluca On Thu, Jan 12, 2012 at 5:56 PM, Gianluca Cecchi wrote: > Some additional notes: > If I freeze the vm related cluster service and on source 6.2 host I > manually run the virsh command that took place in 6.1 successfully, it > fails (as expected). This let me keep away considering the different > release versions of cluster related components between 6.1 and 6.2 (I > think...) > > # virsh migrate --live dacsmaster qemu+ssh://intrarhev2/system tcp:intrarhev2 > error: internal error missing hostuuid element in migration data > ... > So I presume there is something related to migration itself and > possibly different needed parameters in 6.2 shipped version of libvirt > (0.9.4 vs 0.8.7) From john.haxby at gmail.com Mon Jan 16 09:32:20 2012 From: john.haxby at gmail.com (John Haxby) Date: Mon, 16 Jan 2012 09:32:20 +0000 Subject: [rhelv6-list] Restricting bugzillas In-Reply-To: <4F114A90.6010608@karan.org> References: <4F0F50F6.5050407@emmanuelcomputerconsulting.com> <4F104883.6030406@redhat.com> <4F114A90.6010608@karan.org> Message-ID: On 14 January 2012 09:27, Karanbir Singh wrote: > On 01/13/2012 09:22 PM, William Warren wrote: > > That doesn't explain the sudden shift in marking BZs private by default > > for an open source product. Ever since the current CEO came in, RH has > > been getting more and more secretive, and this goes against the spirit of > > Open Source. > > is that really the case? I see thousands of BZs against RHEL6 publicly > visible. Some were filed in the last day or so. What product do > you see this against? > > The bugs you can see are, of course, the ones that are publicly viewable. However, pick, say, the most recent ones referred to by the kernel changelog and I'll bet that a large proportion of those aren't viewable. Likewise the bugs explicitly referred to from the corresponding errata. jch -------------- next part -------------- An HTML attachment was scrubbed...
URL: From john.haxby at gmail.com Mon Jan 16 09:36:25 2012 From: john.haxby at gmail.com (John Haxby) Date: Mon, 16 Jan 2012 09:36:25 +0000 Subject: [rhelv6-list] redhat bugzilla - marking bugz private In-Reply-To: <4F0F41A9.5080803@alteeve.com> References: <4F0F41A9.5080803@alteeve.com> Message-ID: On 12 January 2012 20:25, Digimer wrote: > > As a user, I admit it can be frustrating to follow a link to a closed > bug. However, I understand their position. When they identify a problem, > they generally create a public page explaining the fault, it's fix and > what versions of RPMs the problem was resolved in. Personally, I think > this is reasonable. > > It's reasonable from your point of view as the submitter of the bug. Unfortunately you are not the only one interested in the bug. For people in the open source community it's just frustrating. -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.haxby at gmail.com Mon Jan 16 10:09:05 2012 From: john.haxby at gmail.com (John Haxby) Date: Mon, 16 Jan 2012 10:09:05 +0000 Subject: [rhelv6-list] sudoers file In-Reply-To: References: Message-ID: On 11 January 2012 23:57, Kaushal Shriyan wrote: > > Can someone please point me to documentation to setup sudoers in RHEL 6 ? > > It's already been done: $ man -k sudo sudo (8) - execute a command as another user sudo [sudoedit] (8) - execute a command as another user sudoedit (8) - execute a command as another user sudoedit [sudo] (8) - execute a command as another user sudoers (5) - list of which users may execute what sudoers.ldap [sudoers] (5) - sudo LDAP configuration sudoreplay (8) - replay sudo session logs visudo (8) - edit the sudoers file You probably want sudoers(5). jch -------------- next part -------------- An HTML attachment was scrubbed... URL: From linux at alteeve.com Mon Jan 16 14:07:06 2012 From: linux at alteeve.com (Digimer) Date: Mon, 16 Jan 2012 09:07:06 -0500 Subject: [rhelv6-list] redhat bugzilla - marking bugz private In-Reply-To: References: <4F0F41A9.5080803@alteeve.com> Message-ID: <4F142F0A.4040500@alteeve.com> On 01/16/2012 04:36 AM, John Haxby wrote: > > > On 12 January 2012 20:25, Digimer > wrote: > > > As a user, I admit it can be frustrating to follow a link to a closed > bug. However, I understand their position. When they identify a problem, > they generally create a public page explaining the fault, it's fix and > what versions of RPMs the problem was resolved in. Personally, I think > this is reasonable. > > > It's reasonable from your point of view as the submitter of the bug. > Unfortunately you are not the only one interested in the bug. For > people in the open source community it's just frustrating. Excuse me, but I never indicated that I was the submitter of the bug. -- Digimer E-Mail: digimer at alteeve.com Freenode handle: digimer Papers and Projects: http://alteeve.com Node Assassin: http://nodeassassin.org "omg my singularity battery is dead again. stupid hawking radiation." - epitron From gianluca.cecchi at gmail.com Wed Jan 18 10:19:17 2012 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Wed, 18 Jan 2012 11:19:17 +0100 Subject: [rhelv6-list] KVM issues post RHEL6-1->6.2 update In-Reply-To: References: <4F0759BE.9080904@redhat.com> Message-ID: Just to confirm that, after upgrading another node to RHEL 6.2, I'm able to live migrate a VM hosted on 6.2 to another 6.2 host.
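The command exercised in each direction is the same live migration pattern shown earlier in the thread - the domain and host names are from my own setup, so substitute your own:

# virsh migrate --live dacsmaster qemu+ssh://intrarhev2/system tcp:intrarhev2

(On success virsh returns silently and the guest continues running on the target; in the failing direction it aborts with the "internal error missing hostuuid element in migration data" error quoted above.)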
Summarizing, starting from an environment like this:

6.1 hosts:
qemu-kvm-0.12.1.2-2.160.el6_1.8.x86_64
libvirt-0.8.7-18.el6_1.1.x86_64
kernel-2.6.32-131.17.1.el6.x86_64

6.2 hosts:
qemu-kvm-0.12.1.2-2.209.el6_2.1.x86_64
libvirt-0.9.4-23.el6_2.1.x86_64
kernel-2.6.32-220.2.1.el6.x86_64

The symbol "--->" corresponds to live migration:

6.1 ---> 6.1 ok
6.1 ---> 6.2 ok
6.2 ---> 6.2 ok
but
6.2 ---> 6.1 ko

Gianluca From imusayev at webmd.net Fri Jan 20 19:12:03 2012 From: imusayev at webmd.net (Musayev, Ilya) Date: Fri, 20 Jan 2012 14:12:03 -0500 Subject: [rhelv6-list] RHEL6.2 Kernel/EXT4 bug In-Reply-To: <4F0F3EA4.2090202@cisco.com> References: <4F0F2922.8060303@cisco.com> <4F0F3EA4.2090202@cisco.com> Message-ID: So I broke the raid and used a single partition for iozone testing with 5000MB chunks. No errors reported. It's the combination of EXT4 and the mdraid module that causes the soft lockups. :( Benchmarking results to be released at a later time. -----Original Message----- From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Brian Long Sent: Thursday, January 12, 2012 3:12 PM To: rhelv6-list at redhat.com Subject: Re: [rhelv6-list] RHEL6.2 Kernel/EXT4 bug I responded too quickly. I thought it was finished and it is still running. I thought RHEL used cfq by default. /Brian/ On 1/12/12 2:54 PM, Musayev, Ilya wrote: > I guess I can break the raid and try again on a single drive. I will let you know what happens. > > Did you actually do 5000MB test with iozone? > > My 100MB and 1000MB runs are fine; it's only when I go into the larger 5000MB range with iozone that I start having issues. I could probably narrow it down and find the exact break point, but I think it should not matter - this should not happen at all, and it does not occur with XFS. At this point, I'm leaning more toward XFS, as I get better or on-par metrics compared to EXT4, without any issues. > > I'm also curious as to why your IO scheduler was set to cfq; if I recall correctly, noop should have been the default. > > -----Original Message----- > From: rhelv6-list-bounces at redhat.com > [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Brian Long > Sent: Thursday, January 12, 2012 1:41 PM > To: rhelv6-list at redhat.com > Subject: Re: [rhelv6-list] RHEL6.2 Kernel/EXT4 bug > > On 1/12/12 12:14 PM, Musayev, Ilya wrote: >> Curious if anyone has seen this in their RHEL6.2 setups, if you have >> 6.1 or 6.2 please try this out and see what happens. List of commands >> to reproduce is below, latest iozone required. >> >> >> >> https://bugzilla.redhat.com/show_bug.cgi?id=773377 > > I put the same kernel on my RH 6.2 workstation with a single drive and ran iozone with the same parameters. I don't have the drive mirrored and I had to change the scheduler to noop since it was cfq by default. > > The only partition I had with enough free space is encrypted, so kcryptd was taking 100% CPU while running iozone. Have you narrowed it down to md-only? What happens if you run the same test on just one of your drives?
> > I got a kernel oops early on, but no ext4 errors: > Jan 12 12:51:26 brilong-lnx2 kernel: ------------[ cut here > ]------------ Jan 12 12:51:26 brilong-lnx2 kernel: WARNING: at > kernel/sched.c:5914 > thread_return+0x232/0x79d() (Not tainted) Jan 12 12:51:26 brilong-lnx2 > kernel: Hardware name: IBM System x3200 > -[4362PAY]- > Jan 12 12:51:26 brilong-lnx2 kernel: Modules linked in: autofs4 sunrpc > cpufreq_ondemand acpi_cpufreq freq_table mperf ipv6 ipt_REJECT > nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack iptable_filter > ip_tables sha256_generic cryptd aes_x86_64 aes_generic cbc dm_crypt > uinput sg microcode serio_raw i2c_i801 iTCO_wdt iTCO_vendor_support > tg3 i3000_edac edac_core ext4 mbcache jbd2 sd_mod crc_t10dif sr_mod > cdrom pata_acpi ata_generic ata_piix radeon ttm drm_kms_helper drm > i2c_algo_bit i2c_core dm_mirror dm_region_hash dm_log dm_mod [last > unloaded: scsi_wait_scan] > Jan 12 12:51:26 brilong-lnx2 kernel: Pid: 23, comm: kblockd/1 Not tainted 2.6.32-220.2.1.el6.x86_64 #1 Jan 12 12:51:26 brilong-lnx2 kernel: Call Trace: > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > warn_slowpath_common+0x87/0xc0 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > warn_slowpath_null+0x1a/0x20 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > thread_return+0x232/0x79d > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > blk_unplug_work+0x0/0x70 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > blk_unplug_work+0x0/0x70 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > worker_thread+0x1fc/0x2a0 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > autoremove_wake_function+0x0/0x40 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > worker_thread+0x0/0x2a0 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > kthread+0x96/0xa0 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > child_rip+0xa/0x20 > Jan 12 12:51:26 brilong-lnx2 kernel: [] ? kthread+0x0/0xa0 Jan 12 12:51:26 brilong-lnx2 kernel: [] ? > child_rip+0x0/0x20 > Jan 12 12:51:26 brilong-lnx2 kernel: ---[ end trace aeef27db2e12775f > ]--- > > /Brian/ -- Brian Long | | Corporate Security Programs Org . | | | . | | | . ' ' C I S C O _______________________________________________ rhelv6-list mailing list rhelv6-list at redhat.com https://www.redhat.com/mailman/listinfo/rhelv6-list From derek at umiacs.umd.edu Mon Jan 23 20:30:35 2012 From: derek at umiacs.umd.edu (Derek Yarnell) Date: Mon, 23 Jan 2012 15:30:35 -0500 Subject: [rhelv6-list] RHEL6 debuginfo rpms Message-ID: <4F1DC36B.2010908@umiacs.umd.edu> Am I crazy or are these gone? Were they relocated? ftp://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/os/x86_64/Debuginfo Trying to mitigate https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2012-0056 using systemtap. But need the debuginfo RPMs. Thanks, derek -- --- Derek T. Yarnell University of Maryland Institute for Advanced Computer Studies From amyagi at gmail.com Mon Jan 23 20:46:28 2012 From: amyagi at gmail.com (Akemi Yagi) Date: Mon, 23 Jan 2012 12:46:28 -0800 Subject: [rhelv6-list] RHEL6 debuginfo rpms In-Reply-To: <4F1DC36B.2010908@umiacs.umd.edu> References: <4F1DC36B.2010908@umiacs.umd.edu> Message-ID: On Mon, Jan 23, 2012 at 12:30 PM, Derek Yarnell wrote: > Am I crazy or are these gone? ?Were they relocated? > > ftp://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/os/x86_64/Debuginfo > > Trying to mitigate > https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2012-0056 using > systemtap. ?But need the debuginfo RPMs. 
From amyagi at gmail.com Tue Jan 24 09:03:13 2012
From: amyagi at gmail.com (Akemi Yagi)
Date: Tue, 24 Jan 2012 01:03:13 -0800
Subject: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days
In-Reply-To:
References: <4F07276C.1010501@redhat.com>
Message-ID:

On Fri, Jan 6, 2012 at 8:55 AM, Akemi Yagi wrote:
> On Fri, Jan 6, 2012 at 8:55 AM, Robin Price II wrote:
>> Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=765720
>>
>> This is private due to private information from customer use cases.
>> If you need further details, I would highly encourage you to contact
>> Red Hat support or your TAM.
>>
>> Here is the initial information opened in the BZ:
>>
>> "The following patch is an urgent fix in Linus' branch which avoids the
>> unnecessary overflow in sched_clock; without it, the kernel will crash
>> after 209~250 days.
>>
>> http://git.kernel.org/?p=linux/kernel/git/tip/tip.git;a=patch;h=4cecf6d401a01d054afc1e5f605bcbfe553cb9b9
>>
>> After hundreds of days, the __cycles_2_ns calculation in sched_clock
>> overflows: cyc * per_cpu(cyc2ns, cpu) exceeds 64 bits, causing the
>> final value to become zero. We can solve this without losing any
>> precision: we can decompose the TSC into the quotient and remainder of
>> division by the scale factor, and then use these to convert the TSC
>> into nanoseconds."
>>
>> ~rp
>
> Thank you for this post to let us know that Red Hat is now taking care
> of this issue.

Just a note to add that there is a KB article for this issue:

https://access.redhat.com/kb/docs/DOC-69254
"sched_clock() overflow after 208.5 days in Linux Kernel"

Akemi
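[The decomposition described in that quote is easy to see with ordinary 64-bit integer arithmetic. This is not the kernel code, just the same algebra in shell, with made-up numbers: roughly a 3 GHz TSC and a scale factor of 2^10:]

# ~250 days' worth of TSC ticks at 3 GHz, and an illustrative cyc2ns of 341
cyc=$((250 * 86400 * 3000000000)); k=341; S=10

# naive conversion: cyc * k no longer fits in 64 bits, so it wraps and
# the printed value is wrong
echo $(( (cyc * k) >> S ))

# decomposition from the patch: split cyc by 2^S so the partial products
# stay in range; this prints ~2.16e16 ns, i.e. ~250 days, as expected
quot=$(( cyc >> S )); rem=$(( cyc & ((1 << S) - 1) ))
echo $(( quot * k + ((rem * k) >> S) ))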
From imusayev at webmd.net Tue Jan 24 17:40:30 2012
From: imusayev at webmd.net (Musayev, Ilya)
Date: Tue, 24 Jan 2012 12:40:30 -0500
Subject: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days
In-Reply-To:
References: <4F07276C.1010501@redhat.com>
Message-ID:

Akemi,

Which kernels are affected? I'm about to go large on the latest 6.2 kernel and am curious whether I need to wait until this bug is resolved.

I also see that for non-VMware servers I can use "notsc"; can this be done online?

-----Original Message-----
From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Akemi Yagi
Sent: Tuesday, January 24, 2012 4:03 AM
To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list
Subject: Re: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days

On Fri, Jan 6, 2012 at 8:55 AM, Akemi Yagi wrote:
> On Fri, Jan 6, 2012 at 8:55 AM, Robin Price II wrote:
>> Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=765720
>>
>> This is private due to private information from customer use cases.
>> If you need further details, I would highly encourage you to contact
>> Red Hat support or your TAM.
>>
>> Here is the initial information opened in the BZ:
>>
>> "The following patch is an urgent fix in Linus' branch which avoids the
>> unnecessary overflow in sched_clock; without it, the kernel will crash
>> after 209~250 days.
>>
>> http://git.kernel.org/?p=linux/kernel/git/tip/tip.git;a=patch;h=4cecf6d401a01d054afc1e5f605bcbfe553cb9b9
>>
>> After hundreds of days, the __cycles_2_ns calculation in sched_clock
>> overflows: cyc * per_cpu(cyc2ns, cpu) exceeds 64 bits, causing the
>> final value to become zero. We can solve this without losing any
>> precision: we can decompose the TSC into the quotient and remainder of
>> division by the scale factor, and then use these to convert the TSC
>> into nanoseconds."
>>
>> ~rp
>
> Thank you for this post to let us know that Red Hat is now taking care
> of this issue.

Just a note to add that there is a KB article for this issue:

https://access.redhat.com/kb/docs/DOC-69254
"sched_clock() overflow after 208.5 days in Linux Kernel"

Akemi

_______________________________________________
rhelv6-list mailing list
rhelv6-list at redhat.com
https://www.redhat.com/mailman/listinfo/rhelv6-list

From imusayev at webmd.net Tue Jan 24 17:54:13 2012
From: imusayev at webmd.net (Musayev, Ilya)
Date: Tue, 24 Jan 2012 12:54:13 -0500
Subject: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days
In-Reply-To:
References: <4F07276C.1010501@redhat.com>
Message-ID:

I know I can change the clock source by pushing a value into /sys/devices/system/clocksource/clocksource0/current_clocksource, but I don't believe I can enable "notsc" online (I don't see it as an option). I guess grub is my only way to go.

-----Original Message-----
From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Musayev, Ilya
Sent: Tuesday, January 24, 2012 12:41 PM
To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list
Subject: Re: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days
Importance: High

Akemi,

Which kernels are affected? I'm about to go large on the latest 6.2 kernel and am curious whether I need to wait until this bug is resolved.

I also see that for non-VMware servers I can use "notsc"; can this be done online?

-----Original Message-----
From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Akemi Yagi
Sent: Tuesday, January 24, 2012 4:03 AM
To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list
Subject: Re: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days

On Fri, Jan 6, 2012 at 8:55 AM, Akemi Yagi wrote:
> On Fri, Jan 6, 2012 at 8:55 AM, Robin Price II wrote:
>> Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=765720
>>
>> This is private due to private information from customer use cases.
>> If you need further details, I would highly encourage you to contact
>> Red Hat support or your TAM.
>>
>> Here is the initial information opened in the BZ:
>>
>> "The following patch is an urgent fix in Linus' branch which avoids the
>> unnecessary overflow in sched_clock; without it, the kernel will crash
>> after 209~250 days.
>>
>> http://git.kernel.org/?p=linux/kernel/git/tip/tip.git;a=patch;h=4cecf6d401a01d054afc1e5f605bcbfe553cb9b9
>>
>> After hundreds of days, the __cycles_2_ns calculation in sched_clock
>> overflows: cyc * per_cpu(cyc2ns, cpu) exceeds 64 bits, causing the
>> final value to become zero. We can solve this without losing any
>> precision: we can decompose the TSC into the quotient and remainder of
>> division by the scale factor, and then use these to convert the TSC
>> into nanoseconds."
>>
>> ~rp
>
> Thank you for this post to let us know that Red Hat is now taking care
> of this issue.

Just a note to add that there is a KB article for this issue:

https://access.redhat.com/kb/docs/DOC-69254
"sched_clock() overflow after 208.5 days in Linux Kernel"

Akemi

_______________________________________________
rhelv6-list mailing list
rhelv6-list at redhat.com
https://www.redhat.com/mailman/listinfo/rhelv6-list

_______________________________________________
rhelv6-list mailing list
rhelv6-list at redhat.com
https://www.redhat.com/mailman/listinfo/rhelv6-list
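[For reference, the runtime switch Ilya mentions looks like the commands below; "notsc" itself is a boot-time kernel parameter, so grub really is the way for that one. Clocksource names vary by machine, so pick one that the first command actually lists:]

# see which clocksources this machine offers, and which one is active
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

# switch at runtime, e.g. to hpet; takes effect immediately, lost on reboot
echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource

# "notsc" must be appended to the kernel line in /boot/grub/grub.conf
# and only takes effect after a reboot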
From Tim.GOLLSCHEWSKY at suncorp.com.au Tue Jan 24 22:17:41 2012
From: Tim.GOLLSCHEWSKY at suncorp.com.au (GOLLSCHEWSKY, Tim)
Date: Tue, 24 Jan 2012 22:17:41 +0000
Subject: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days
In-Reply-To:
References: <4F07276C.1010501@redhat.com>
Message-ID: <534CE82FFF845C43BE2C788D952CD376124457CC@PBNEMBMSX4112.int.Corp.sun>

I think a lot of people would like to know specifically which kernel version(s) are affected; it would be great if that were included in the KB article. Does this only affect RHEL 6.1? I have RHEL 6.0 machines with uptime > 208.5 days:

$ cat /etc/redhat-release; uname -r; uptime
Red Hat Enterprise Linux Server release 6.0 (Santiago)
2.6.32-71.14.1.el6.x86_64
 08:15:38 up 238 days, 15:50, 1 user, load average: 0.00, 0.00, 0.00

Yet they meet the vulnerability criteria in that KB article (which, so far, includes no kernel version information):

$ dmesg | grep tsc
Switching to clocksource tsc

$ cat /proc/cpuinfo | grep flags
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc up arch_perfmon pebs bts rep_good xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni ssse3 cx16 sse4_1 sse4_2 popcnt hypervisor lahf_lm ida

So does this mean kernel 2.6.32-71.14.1.el6.x86_64 is not affected?

Cheers,
Tim.

-----Original Message-----
From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Musayev, Ilya
Sent: Wednesday, 25 January 2012 3:41 AM
To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list
Subject: Re: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days
Importance: High

Akemi,

Which kernels are affected? I'm about to go large on the latest 6.2 kernel and am curious whether I need to wait until this bug is resolved.

I also see that for non-VMware servers I can use "notsc"; can this be done online?

-----Original Message-----
From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Akemi Yagi
Sent: Tuesday, January 24, 2012 4:03 AM
To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list
Subject: Re: [rhelv6-list] A kernel bug that causes a system crash when the uptime is longer than 208.5 days

On Fri, Jan 6, 2012 at 8:55 AM, Akemi Yagi wrote:
> On Fri, Jan 6, 2012 at 8:55 AM, Robin Price II wrote:
>> Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=765720
>>
>> This is private due to private information from customer use cases.
>> If you need further details, I would highly encourage you to contact
>> Red Hat support or your TAM.
>>
>> Here is the initial information opened in the BZ:
>>
>> "The following patch is an urgent fix in Linus' branch which avoids the
>> unnecessary overflow in sched_clock; without it, the kernel will crash
>> after 209~250 days.
>>
>> http://git.kernel.org/?p=linux/kernel/git/tip/tip.git;a=patch;h=4cecf6d401a01d054afc1e5f605bcbfe553cb9b9
>>
>> After hundreds of days, the __cycles_2_ns calculation in sched_clock
>> overflows: cyc * per_cpu(cyc2ns, cpu) exceeds 64 bits, causing the
>> final value to become zero. We can solve this without losing any
>> precision: we can decompose the TSC into the quotient and remainder of
>> division by the scale factor, and then use these to convert the TSC
>> into nanoseconds."
>>
>> ~rp
>
> Thank you for this post to let us know that Red Hat is now taking care
> of this issue.

Just a note to add that there is a KB article for this issue:

https://access.redhat.com/kb/docs/DOC-69254
"sched_clock() overflow after 208.5 days in Linux Kernel"

Akemi

________________________________

This e-mail is sent by Suncorp Group Limited ABN 66 145 290 124 or one of its related entities "Suncorp". Suncorp may be contacted at Level 18, 36 Wickham Terrace, Brisbane or on 13 11 55 or at suncorp.com.au. The content of this e-mail is the view of the sender or stated author and does not necessarily reflect the view of Suncorp. The content, including attachments, is a confidential communication between Suncorp and the intended recipient. If you are not the intended recipient, any use, interference with, disclosure or copying of this e-mail, including attachments, is unauthorised and expressly prohibited. If you have received this e-mail in error please contact the sender immediately and delete the e-mail and any attachments from your system.
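[A quick way to audit a machine for the combination Tim describes, the tsc clocksource plus an uptime approaching the threshold, as a two-line sketch:]

# worth worrying about when the first line says "tsc" and the
# second line approaches 208.5
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
awk '{ printf "%.1f days\n", $1 / 86400 }' /proc/uptime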
From fgozalo0 at alumno.uned.es Thu Jan 26 09:56:24 2012
From: fgozalo0 at alumno.uned.es (Fernando Gozalo)
Date: Thu, 26 Jan 2012 10:56:24 +0100
Subject: [rhelv6-list] Install an older kernel
Message-ID: <784924b0c69df6080c44e92a5d49e249.squirrel@correo.uned.es>

Hello,

I need to install kernel version 2.6.32-131.21.1.el6.x86_64.
Do you know how I can do this?

Thanks,
Fernando.

From Mc_Kiernan at Oeconomist.com Thu Jan 26 10:05:41 2012
From: Mc_Kiernan at Oeconomist.com (Mc Kiernan Daniel Kian)
Date: Thu, 26 Jan 2012 02:05:41 -0800
Subject: [rhelv6-list] Install an older kernel
In-Reply-To: <784924b0c69df6080c44e92a5d49e249.squirrel@correo.uned.es>
References: <784924b0c69df6080c44e92a5d49e249.squirrel@correo.uned.es>
Message-ID: <4F212575.2060402@Oeconomist.com>

On 01/26/2012 01:56 AM, Fernando Gozalo wrote:
> I need to install kernel version 2.6.32-131.21.1.el6.x86_64.
> Do you know how I can do this?

Is there a version of yum-allowdowngrade for your system?

From Mc_Kiernan at Oeconomist.com Thu Jan 26 10:46:54 2012
From: Mc_Kiernan at Oeconomist.com (Mc Kiernan Daniel Kian)
Date: Thu, 26 Jan 2012 02:46:54 -0800
Subject: [rhelv6-list] Install an older kernel
In-Reply-To: <4F212575.2060402@Oeconomist.com>
References: <784924b0c69df6080c44e92a5d49e249.squirrel@correo.uned.es> <4F212575.2060402@Oeconomist.com>
Message-ID: <4F212F1E.7060507@Oeconomist.com>

On 01/26/2012 02:05 AM, Mc Kiernan Daniel Kian wrote:
> On 01/26/2012 01:56 AM, Fernando Gozalo wrote:
>
>> I need to install kernel version 2.6.32-131.21.1.el6.x86_64.
>> Do you know how I can do this?
>
> Is there a version of yum-allowdowngrade for your system?

(More recent versions of yum have a "downgrade" option.)
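[Before picking a method, it can help to confirm that the wanted build is still published in the configured repos; a quick check, assuming your channel keeps older builds around:]

# list every kernel build the enabled repos offer, not just the newest
yum --showduplicates list kernel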
From john.haxby at gmail.com Thu Jan 26 12:21:47 2012
From: john.haxby at gmail.com (John Haxby)
Date: Thu, 26 Jan 2012 12:21:47 +0000
Subject: [rhelv6-list] Install an older kernel
In-Reply-To: <784924b0c69df6080c44e92a5d49e249.squirrel@correo.uned.es>
References: <784924b0c69df6080c44e92a5d49e249.squirrel@correo.uned.es>
Message-ID:

On 26 January 2012 09:56, Fernando Gozalo wrote:
> I need to install kernel version 2.6.32-131.21.1.el6.x86_64.
> Do you know how I can do this?

This usually works for me:

yum install kernel-2.6.32-131.21.1.el6

jch

From fgozalo0 at alumno.uned.es Thu Jan 26 13:54:00 2012
From: fgozalo0 at alumno.uned.es (Fernando Gozalo)
Date: Thu, 26 Jan 2012 14:54:00 +0100
Subject: [rhelv6-list] Install an older kernel
In-Reply-To: <4F212F1E.7060507@Oeconomist.com>
References: <784924b0c69df6080c44e92a5d49e249.squirrel@correo.uned.es> <4F212575.2060402@Oeconomist.com> <4F212F1E.7060507@Oeconomist.com>
Message-ID:

> On 01/26/2012 02:05 AM, Mc Kiernan Daniel Kian wrote:
>
>> On 01/26/2012 01:56 AM, Fernando Gozalo wrote:
>>
>>> I need to install kernel version 2.6.32-131.21.1.el6.x86_64.
>>> Do you know how I can do this?
>>
>> Is there a version of yum-allowdowngrade for your system?
>
> (More recent versions of yum have a "downgrade" option.)

I already tried this, with this result:

# yum downgrade kernel
Loaded plugins: product-id, rhnplugin, subscription-manager
Updating certificate-based repositories.
Setting up Downgrade Process
Package kernel-2.6.32-71.el6.x86_64 is allowed multiple installs, skipping
Nothing to do
#

From fgozalo0 at alumno.uned.es Thu Jan 26 13:55:28 2012
From: fgozalo0 at alumno.uned.es (Fernando Gozalo)
Date: Thu, 26 Jan 2012 14:55:28 +0100
Subject: [rhelv6-list] Install an older kernel
In-Reply-To:
References: <784924b0c69df6080c44e92a5d49e249.squirrel@correo.uned.es>
Message-ID:

> On 26 January 2012 09:56, Fernando Gozalo wrote:
>
>> I need to install kernel version 2.6.32-131.21.1.el6.x86_64.
>> Do you know how I can do this?
>
> This usually works for me:
>
> yum install kernel-2.6.32-131.21.1.el6

This worked, thanks.

Fernando.
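[The "allowed multiple installs, skipping" line above is why an explicit `yum install` of the wanted version works where `yum downgrade` refuses: the kernel is an install-only package, so old and new versions live side by side in the boot menu. How many are kept is governed by a yum.conf knob; the value shown below is an example, not necessarily your default:]

# how many install-only packages (kernels) yum keeps before pruning old ones
grep installonly_limit /etc/yum.conf
# e.g. installonly_limit=5 to keep five kernels around as fallbacks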
From imusayev at webmd.net Thu Jan 26 17:01:32 2012
From: imusayev at webmd.net (Musayev, Ilya)
Date: Thu, 26 Jan 2012 12:01:32 -0500
Subject: [rhelv6-list] Install an older kernel
In-Reply-To:
References: <784924b0c69df6080c44e92a5d49e249.squirrel@correo.uned.es>
Message-ID:

Speaking of downgrades, can you downgrade the entire system to an older state? For example: I'm running a stock 5.7 ISO install. After some time, I upgraded to the latest 5.7 RPMs using my local yum and, for some reason, something did not go well. Can I downgrade everything back to the stock 5.7 ISO? Can I go lower, from 5.7 to 5.4?

Thanks
ilya

-----Original Message-----
From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Fernando Gozalo
Sent: Thursday, January 26, 2012 8:55 AM
To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list
Subject: Re: [rhelv6-list] Install an older kernel

> On 26 January 2012 09:56, Fernando Gozalo wrote:
>
>> I need to install kernel version 2.6.32-131.21.1.el6.x86_64.
>> Do you know how I can do this?
>
> This usually works for me:
>
> yum install kernel-2.6.32-131.21.1.el6

This worked, thanks.

Fernando.

_______________________________________________
rhelv6-list mailing list
rhelv6-list at redhat.com
https://www.redhat.com/mailman/listinfo/rhelv6-list

From john.haxby at gmail.com Thu Jan 26 17:14:04 2012
From: john.haxby at gmail.com (John Haxby)
Date: Thu, 26 Jan 2012 17:14:04 +0000
Subject: [rhelv6-list] Install an older kernel
In-Reply-To:
References: <784924b0c69df6080c44e92a5d49e249.squirrel@correo.uned.es>
Message-ID:

On 26 January 2012 17:01, Musayev, Ilya wrote:
> Speaking of downgrades, can you downgrade the entire system to an older
> state? For example: I'm running a stock 5.7 ISO install. After some time, I
> upgraded to the latest 5.7 RPMs using my local yum and, for some reason,
> something did not go well. Can I downgrade everything back to the stock
> 5.7 ISO? Can I go lower, from 5.7 to 5.4?

Unless there's something I've missed completely: no. Individual RPMs can often have their older version installed, but that's not true in general. You're more likely to be able to downgrade within a release; across releases, that's less likely.

jch

From imusayev at webmd.net Thu Jan 26 22:05:13 2012
From: imusayev at webmd.net (Musayev, Ilya)
Date: Thu, 26 Jan 2012 17:05:13 -0500
Subject: [rhelv6-list] Install an older kernel
In-Reply-To:
References: <784924b0c69df6080c44e92a5d49e249.squirrel@correo.uned.es>
Message-ID:

John,

I was under the impression that major releases are a problem and minor ones should be OK. I think I need to test it.

Thanks for the response,

Regards
ilya

From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of John Haxby
Sent: Thursday, January 26, 2012 12:14 PM
To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list
Subject: Re: [rhelv6-list] Install an older kernel

On 26 January 2012 17:01, Musayev, Ilya wrote:
> Speaking of downgrades, can you downgrade the entire system to an older
> state? For example: I'm running a stock 5.7 ISO install. After some time, I
> upgraded to the latest 5.7 RPMs using my local yum and, for some reason,
> something did not go well. Can I downgrade everything back to the stock
> 5.7 ISO? Can I go lower, from 5.7 to 5.4?

Unless there's something I've missed completely: no. Individual RPMs can often have their older version installed, but that's not true in general. You're more likely to be able to downgrade within a release; across releases, that's less likely.

jch
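[Within a release there is one more tool worth knowing for the "an update went bad" case: yum's transaction history, which can undo a specific update as long as the older packages are still available in your repos. A sketch, where the transaction id 42 is hypothetical:]

yum history list        # find the id of the offending update transaction
yum history info 42     # review exactly what that transaction changed
yum history undo 42     # restore the prior versions, where still available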
From cfairchild at brocku.ca Fri Jan 27 15:29:48 2012
From: cfairchild at brocku.ca (Cale Fairchild)
Date: Fri, 27 Jan 2012 10:29:48 -0500
Subject: [rhelv6-list] Stale NFS file handle
Message-ID: <4F22C2EC.50909@brocku.ca>

I have been having issues with NFS file handles since upgrading to 6.2 and was wondering if anyone else has noticed strange behaviour? I have searched Bugzilla and nothing came up, so I turn to the list. It seems to have started after installing nfs-utils-1.2.3-15.el6 (from the 6.2 upgrade), running on both kernel-2.6.32-220.2.1.el6 and kernel-2.6.32-220.4.1.el6. When the server reboots, all the connected NFS clients seem to have stale NFS handles afterwards. Some clients even report a stale NFS handle after they themselves have been restarted. I do not believe that any of the NFS or firewall configurations have changed since I last restarted with the 6.1 kernel with no issues.

Since this is not a test server (and I do not have one running 6.2 yet), I wanted to throw the query at the list to see if others have seen this behaviour before opening a Bugzilla report, as I will not be able to provide much testing or debugging on it right now. Thanks for any feedback that anyone can give (even feedback saying that your NFS is working just fine since the upgrade; then I can start looking deeper into my configurations).

--
Cale Fairchild
Systems Administrator
Computer Science
Brock University
cfairchild at brocku.ca

Confidentiality Notice: This e-mail, including any attachments, may contain confidential or privileged information. If you are not the intended recipient, please notify the sender by e-mail and immediately delete this message and its contents. Thank you.

From kmazurek at neotek.waw.pl Mon Jan 30 22:59:39 2012
From: kmazurek at neotek.waw.pl (Krzysztof Mazurek)
Date: Mon, 30 Jan 2012 23:59:39 +0100
Subject: [rhelv6-list] YUM/RPM rollback installation.
Message-ID: <4F2720DB.8070702@neotek.waw.pl>

Hi there!!

Is anyone using the rpm rollback feature on RHEL 6.x? For example, as described in:

http://blog.chris.tylers.info/index.php?/archives/17-How-to-Rollback-Package-UpdatesInstallation-on-Fedora.html

Does it work?

Krzysztof.
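[For context, the mechanism that post describes is rpm's old repackage-on-erase feature, in outline below. Whether the rpm shipped with RHEL 6 still carries it is exactly the open question here (the feature was dropped from newer upstream rpm releases), so treat this purely as an illustration of the idea, not a tested recipe:]

# tell rpm to stash a repackaged copy of anything it erases or replaces
echo '%_repackage_all_erasures 1' >> /etc/rpm/macros

# ...later, attempt to roll the system back to its pre-update state
rpm -Uvh --rollback '2 hours ago'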
From rhelv6-list at redhat.com Tue Jan 31 17:20:13 2012
From: rhelv6-list at redhat.com (Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list)
Date: Tue, 31 Jan 2012 12:20:13 -0500
Subject: [rhelv6-list] Announcement: Red Hat Enterprise Linux Life Cycle Extended to Ten Years
Message-ID: <4F2822CD.6000709@redhat.com>

Today Red Hat is pleased to announce that it has extended the life cycle of Red Hat Enterprise Linux 5, 6 and future releases from seven to ten years, effective immediately. This announcement is in response to the widespread adoption of Red Hat Enterprise Linux 5 since its introduction in 2007, and the increasing rate of adoption of Red Hat Enterprise Linux 6 since its launch in 2010.

During the life cycle of Red Hat Enterprise Linux, customers take advantage of a multitude of benefits, including feature enhancements, critical bug and security fixes, as well as award-winning support from Red Hat's Global Support Services team. Customers also enjoy stability from Red Hat Enterprise Linux resulting from Red Hat's commitment to ABI and API compatibility during the life cycle. Finally, Red Hat delivers a steady stream of supported hardware platforms, aligning to the new introduction cycles of hardware OEM partners.

The result of the extended life cycle is that customers will enjoy all of the benefits of their subscription over a longer period of time. Specifically, this means additional time to take advantage of the significant investments that customers make in Red Hat Enterprise Linux related to their business-critical applications. Enterprise customers will now have more options for their Red Hat Enterprise Linux implementations, and can use the longer life cycle to plan for migrations as well as new deployments.

We are excited about this announcement, and in particular about the additional value that it provides to customers, in response to their reliance on Red Hat to help run and grow their business.

For additional details, please refer to the following resources:

- The press release associated with this announcement:
  http://www.redhat.com/about/news/press-archive/2012/1/red-hat-enterprise-linux-stability-drives-demand-for-more-flexibility-in-long-term-operating-system-deployments
- The Red Hat Enterprise Linux life cycle web page:
  https://access.redhat.com/support/policy/updates/errata/
- An FAQ on the Red Hat Customer Portal:
  https://access.redhat.com/kb/docs/DOC-69647

Please consult your primary Red Hat contact for more detail regarding this exciting announcement.