From marco.shaw at gmail.com Sat Sep 1 14:22:30 2012 From: marco.shaw at gmail.com (Marco Shaw) Date: Sat, 1 Sep 2012 11:22:30 -0300 Subject: [rhelv6-list] Going from "good" to "expert" In-Reply-To: References: Message-ID: (Also sent to redhat-list at redhat.com, but it might not be so active with "current experts" anymore...) I've been playing with Linux for roughly 12 years. I feel I'm "good" at it, but I lack "real advanced skills". I've completely skipped the whole SELinux-wave. I would have been forced to learn it, but my co-workers and employers were too intimidated by it, so I never bothered. I also skipped really trying to understand SystemTap, and that's another (perhaps) advanced skill, that would make me truly a "one of a kind" based on what I know my immediate co-workers know about Linux. Anyone have any good tips/references to how to take my skills to the next level? I've seen a few half-decent mini-books like "Vmware interview questions" that may seem stupid, but actually do provide a few things that I think I need to brush up on. Marco From gianluca.cecchi at gmail.com Sat Sep 1 14:52:56 2012 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Sat, 1 Sep 2012 16:52:56 +0200 Subject: [rhelv6-list] Going from "good" to "expert" In-Reply-To: References: Message-ID: On Sat, Sep 1, 2012 at 4:22 PM, Marco Shaw wrote: > (Also sent to redhat-list at redhat.com, but it might not be so active > with "current experts" anymore...) It;s better not to cross post whenever possible... > I've completely skipped the whole SELinux-wave. I would have been > forced to learn it, but my co-workers and employers were too > intimidated by it, so I never bothered. I also skipped really trying > to understand SystemTap, and that's another (perhaps) advanced skill, > that would make me truly a "one of a kind" based on what I know my > immediate co-workers know about Linux. > > Anyone have any good tips/references to how to take my skills to the > next level? Not sure to ask your question, but a good starting point would be Red Hat Documentation main page: https://access.redhat.com/knowledge/docs/Red_Hat_Enterprise_Linux/ Some of RHEL 6 guides: Security-Enhanced Linux Managing Confined Services (with SELinux) SystemTap Beginners Guide SystemTap Tapset Reference and there are many others related to Virtualization, Clustering, Resource Mgmt (Control groups), Identity Mgmt... Tipically they are a very good starting point and are well written. Don't forget that you can also submit a Documentation Bug, so that you can participate in improving documentation itself for others Then you can buy for example JBoss Developer subscription for less than 100$ / year and experiment (it includes the OS and RHN access): https://www.redhat.com/apps/store/developers/jboss_developer_studio.html Or go through CentOS if the budget is a constraint for you or you want to try all the possible technological solutions (Cluster, Cluster Storage, XFS, ecc..). HIH, Gianluca From akrherz at iastate.edu Mon Sep 3 14:28:38 2012 From: akrherz at iastate.edu (Daryl Herzmann) Date: Mon, 3 Sep 2012 09:28:38 -0500 Subject: [rhelv6-list] Nasty bug with writing to resyncing RAID-5 Array In-Reply-To: <1481619669.26160.1345140500532.JavaMail.root@email.gat.com> References: <1481619669.26160.1345140500532.JavaMail.root@email.gat.com> Message-ID: On Thu, Aug 16, 2012 at 1:08 PM, David C. 
Miller wrote: > > > ----- Original Message ----- >> From: "Daryl Herzmann" >> To: "Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list" >> Sent: Wednesday, August 15, 2012 7:32:15 AM >> Subject: Re: [rhelv6-list] Nasty bug with writing to resyncing RAID-5 Array >> >> On Sun, Jun 24, 2012 at 12:48 PM, Stephen John Smoogen >> wrote: >> > On 23 June 2012 11:04, Daryl Herzmann wrote: >> >> On Fri, Jun 22, 2012 at 4:03 PM, Stephen John Smoogen >> >> wrote: >> >>> On 22 June 2012 14:10, daryl herzmann >> >>> wrote: >> >>>> Howdy, >> >>>> >> >>>> The RHEL6.3 release notes have a curious entry: >> >>>> >> >>>> http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/6.3_Technical_Notes/kernel_issues.html >> >>>> >> >>>> kernel component >> >>>> >> >>>> Due to a race condition, in certain cases, writes to RAID4/5/6 >> >>>> while the >> >>>> array is reconstructing could hang the system >> >>>> >> >>>> Wow, I am reproducing it frequently here. Simply have a RAID-5 >> >>>> software >> >>>> array and do some write IO to it, eventually things start >> >>>> hanging and the >> >>>> power button needs to be pressed. >> >>>> >> >>>> Oh man. >> >>> >> >>> Well the race condition they are mentioning should only happen >> >>> when >> >>> the RAID array is reconstructing. This sounds like a different >> >>> bug/problem. What kind of disks, type of RAID etc. >> >> >> >> Thanks for the response. I am not sure of the difference between >> >> 'reconstructing' and 'resyncing' and/or 'syncing'. The >> >> reproducing >> >> case was quite easy for me. >> >> >> >> 1. Create a software raid5 >> >> 2. Immediately then create a filesystem on this raid5, while init >> >> sync underway >> >> 3. IO to the RAID device eventually stops, even for the software >> >> raid5 sync >> > >> > Ok reconstructing is where the initial RAID drives pair up with >> > each >> > other. Resyncing I believe is where a RAID which has been created >> > is >> > putting the data across its raid. Basic cat /proc/mdstat.. if there >> > is >> > a line ====> then you are reconstructing the disk array. In the >> > example you give above, the disks would be reconstructing >> > >> > So the next thing to do is why you are able to trigger it >> > constantly. >> > That may be due to >> > CPU Type: >> > RAM Amount: >> > Disk controllers: >> > DIsk types (SATA, SAS, SCSI, PATA): >> > RAID type: >> > RAID layout (same controller, different controller, etc): >> >> I don't seem to have much issue reproducing, I just had another >> machine do it this morning. Nehalem processor, 12 GB ram, Dell >> PowerEdge T400, Perc 6i controller, software raid 5, Seagate 2 TB >> Barracuda drives... >> >> Does anybody have the bugzilla ticket associated with this or perhaps >> a knowledge base article on it? >> >> daryl >> > > I would like to know too. I have not seen this issue yet but I do have some large RAID6 arrays. The private bugzilla tracking this is: https://bugzilla.redhat.com/show_bug.cgi?id=828065 It appears the hope is to resolve this for the RHEL6.4 release. daryl From pbonzini at redhat.com Mon Sep 3 12:22:30 2012 From: pbonzini at redhat.com (Paolo Bonzini) Date: Mon, 03 Sep 2012 14:22:30 +0200 Subject: [rhelv6-list] How do I install glibc_2.14 on rhel6? 
In-Reply-To: References: <502AE6BF.6080302@nobugconsulting.ro> Message-ID: <5044A106.6040909@redhat.com> Il 15/08/2012 14:40, Mirko Vukovic ha scritto: >> > There are at least 2 options here >> > - persuade the maintainer of the EPEL package to update it > But the maintainer of EPEL sbcl would still need a new version of glibc, right? No, it won't. If that is the case, the build would fail, but it's unlikely. It is possible that some functionality will get disabled in the EPEL build, corresponding to whatever functionality of glibc 2.14 is used in the Fedora build. For example, some system call that is first exported in glibc 2.14 may not be usable from Lisp programs. Paolo From mirko.vukovic at gmail.com Thu Sep 6 13:08:46 2012 From: mirko.vukovic at gmail.com (Mirko Vukovic) Date: Thu, 6 Sep 2012 09:08:46 -0400 Subject: [rhelv6-list] How to add a new browser bookmark to be visible to new accounts Message-ID: Hello, When I start Firefox, it comes up with several (RedHat related) bookmarks on the bookmarks bar. I would like to add another one that will be automatically visible to all new accounts. I've read that Firefox stores bookmarks in ~/.mozilla/.../bookmarks.html, but have not found that file in /etc/skel. And I could not figure out how to ask Google the right question. Thanks, Mirko -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbrown at divms.uiowa.edu Thu Sep 6 13:53:12 2012 From: hbrown at divms.uiowa.edu (Hugh Brown) Date: Thu, 06 Sep 2012 08:53:12 -0500 Subject: [rhelv6-list] How to add a new browser bookmark to be visible to new accounts In-Reply-To: References: Message-ID: <5048AAC8.9030203@divms.uiowa.edu> On 09/06/2012 08:08 AM, Mirko Vukovic wrote: > Hello, > > When I start Firefox, it comes up with several (RedHat related) bookmarks > on the bookmarks bar. I would like to add another one that will be > automatically visible to all new accounts. > > I've read that Firefox stores bookmarks in ~/.mozilla/.../bookmarks.html, > but have not found that file in /etc/skel. And I could not figure out how > to ask Google the right question. > > Thanks, > > Mirko > The default bookmarks are in /usr/lib64/firefox/omni.ja (which is really just a specially optimized jar file). If you jar xf omni.ja, it expands to include defaults/profile/bookmarks.html (among lots of other things) that includes all of Redhat's default bookmarks. You could try creating a /usr/lib64/firefox/defaults/profile/bookmarks.html and see if that works (though I'm guessing it will be ignored). Otherwise, you get to figure out how to generate omni.ja file so that firefox likes it. Memory says that it is not as simple as expand and re-create, but it's been a while. Hugh From Jeffrey.Melin at nasa.gov Thu Sep 6 13:56:49 2012 From: Jeffrey.Melin at nasa.gov (Jeff Melin) Date: Thu, 6 Sep 2012 06:56:49 -0700 Subject: [rhelv6-list] How to add a new browser bookmark to be visible to new accounts In-Reply-To: References: Message-ID: <5048ABA1.1070704@nasa.gov> Hi, Mirko, This link is helpful: http://support.mozilla.org/en-US/questions/855435#answer-222255 On RHEL 6 the directory to place the bookmarks.html file in would be /usr/lib[64]/firefox/defaults/profile. Cheers, Jeff On 09/06/2012 06:08 AM, Mirko Vukovic wrote: > Hello, > > When I start Firefox, it comes up with several (RedHat related) bookmarks on the bookmarks bar. I would like to add another one that will be automatically visible to all new accounts. 
> > I've read that Firefox stores bookmarks in ~/.mozilla/.../bookmarks.html, but have not found that file in /etc/skel. And I could not figure out how to ask Google the right question. > > Thanks, > > Mirko From janne.blomqvist at aalto.fi Wed Sep 12 08:02:14 2012 From: janne.blomqvist at aalto.fi (Blomqvist Janne) Date: Wed, 12 Sep 2012 08:02:14 +0000 Subject: [rhelv6-list] Partition disappeared? Message-ID: <8B33A3EEE6C26242830E332A4A7A532E6ACE6405@EXMDB05.org.aalto.fi> Hi, I'm seeing a strange issue where, per dmesg, a partition is found when the kernel is booted, but later it has disappeared and by the time /etc/fstab is parsed and filesystems mounted it cannot be found and bootup thus fails. >From dmesg: sd 0:0:0:1: [sda] 143305920 512-byte logical blocks: (73.3 GB/68.3 GiB) sd 0:0:0:1: [sda] Write Protect is off sd 0:0:0:1: [sda] Mode Sense: 6b 00 00 08 sd 0:0:0:1: [sda] Write cache: disabled, read cache: disabled, doesn't support DPO or FUA sda: sda1 sda2 Later on sda1 has disappeared (e.g., /dev/sda1 doesn't exist), although with parted I can still find it: GNU Parted 2.1 Using /dev/sda Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p Model: HP LOGICAL VOLUME (scsi) Disk /dev/sda: 73.4GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 525MB 524MB primary ext4 boot 2 525MB 73.4GB 72.8GB primary lvm I haven't found any messages anywhere wrt. sda1 disappearing. As you can deduce from the above, the partition in question is /boot. So the partition itself seems Ok, considering grub can read and boot the kernel there. I just cannot access it from the booted system (commenting out the /boot entry in fstab allows the system to boot successfully). The system in question is currently running RHEL 6.3, originally it was installed as 6.x and upgraded over time. Has anyone seen something similar, or better yet, seen and fixed it? -- Janne Blomqvist From KCollins at chevron.com Fri Sep 14 20:47:03 2012 From: KCollins at chevron.com (Collins, Kevin [BEELINE]) Date: Fri, 14 Sep 2012 20:47:03 +0000 Subject: [rhelv6-list] Partition disappeared? In-Reply-To: <8B33A3EEE6C26242830E332A4A7A532E6ACE6405@EXMDB05.org.aalto.fi> References: <8B33A3EEE6C26242830E332A4A7A532E6ACE6405@EXMDB05.org.aalto.fi> Message-ID: <6F56410FBED1FC41BCA804E16F594B0B1D75A795@chvpkw8xmbx05.chvpk.chevrontexaco.net> So, are you saying that the device file /dev/sda1 doesn't exist or that /dev/sda1 is not getting mounted? If it is the latter case, what happens when you try to mount it? Thanks, Kevin -----Original Message----- From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Blomqvist Janne Sent: Wednesday, September 12, 2012 1:02 AM To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list Subject: [rhelv6-list] Partition disappeared? Hi, I'm seeing a strange issue where, per dmesg, a partition is found when the kernel is booted, but later it has disappeared and by the time /etc/fstab is parsed and filesystems mounted it cannot be found and bootup thus fails. >From dmesg: sd 0:0:0:1: [sda] 143305920 512-byte logical blocks: (73.3 GB/68.3 GiB) sd 0:0:0:1: [sda] Write Protect is off sd 0:0:0:1: [sda] Mode Sense: 6b 00 00 08 sd 0:0:0:1: [sda] Write cache: disabled, read cache: disabled, doesn't support DPO or FUA sda: sda1 sda2 Later on sda1 has disappeared (e.g., /dev/sda1 doesn't exist), although with parted I can still find it: GNU Parted 2.1 Using /dev/sda Welcome to GNU Parted! 
Type 'help' to view a list of commands. (parted) p Model: HP LOGICAL VOLUME (scsi) Disk /dev/sda: 73.4GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 525MB 524MB primary ext4 boot 2 525MB 73.4GB 72.8GB primary lvm I haven't found any messages anywhere wrt. sda1 disappearing. As you can deduce from the above, the partition in question is /boot. So the partition itself seems Ok, considering grub can read and boot the kernel there. I just cannot access it from the booted system (commenting out the /boot entry in fstab allows the system to boot successfully). The system in question is currently running RHEL 6.3, originally it was installed as 6.x and upgraded over time. Has anyone seen something similar, or better yet, seen and fixed it? -- Janne Blomqvist _______________________________________________ rhelv6-list mailing list rhelv6-list at redhat.com https://www.redhat.com/mailman/listinfo/rhelv6-list From Janne.Blomqvist at aalto.fi Mon Sep 17 08:12:32 2012 From: Janne.Blomqvist at aalto.fi (Janne Blomqvist) Date: Mon, 17 Sep 2012 11:12:32 +0300 Subject: [rhelv6-list] Partition disappeared? In-Reply-To: <6F56410FBED1FC41BCA804E16F594B0B1D75A795@chvpkw8xmbx05.chvpk.chevrontexaco.net> References: <8B33A3EEE6C26242830E332A4A7A532E6ACE6405@EXMDB05.org.aalto.fi> <6F56410FBED1FC41BCA804E16F594B0B1D75A795@chvpkw8xmbx05.chvpk.chevrontexaco.net> Message-ID: <5056DB70.9090400@aalto.fi> On 2012-09-14T23:47:03 EEST, Collins, Kevin [BEELINE] wrote: > So, are you saying that the device file /dev/sda1 doesn't exist or that /dev/sda1 is not getting mounted? If it is the latter case, what happens when you try to mount it? I'm saying that /dev/sda1 doesn't exist. (In fstab it's mounted by UUID as RHEL6 sets it up out-of-the-box, but it doesn't find the partition that way either). > > Thanks, > > Kevin > > -----Original Message----- > From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Blomqvist Janne > Sent: Wednesday, September 12, 2012 1:02 AM > To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list > Subject: [rhelv6-list] Partition disappeared? > > Hi, > > I'm seeing a strange issue where, per dmesg, a partition is found when the kernel is booted, but later it has disappeared and by the time /etc/fstab is parsed and filesystems mounted it cannot be found and bootup thus fails. > > >From dmesg: > > sd 0:0:0:1: [sda] 143305920 512-byte logical blocks: (73.3 GB/68.3 GiB) > sd 0:0:0:1: [sda] Write Protect is off > sd 0:0:0:1: [sda] Mode Sense: 6b 00 00 08 > sd 0:0:0:1: [sda] Write cache: disabled, read cache: disabled, doesn't support DPO or FUA > sda: sda1 sda2 > > Later on sda1 has disappeared (e.g., /dev/sda1 doesn't exist), although with parted I can still find it: > > GNU Parted 2.1 > Using /dev/sda > Welcome to GNU Parted! Type 'help' to view a list of commands. > (parted) p > Model: HP LOGICAL VOLUME (scsi) > Disk /dev/sda: 73.4GB > Sector size (logical/physical): 512B/512B > Partition Table: msdos > > Number Start End Size Type File system Flags > 1 1049kB 525MB 524MB primary ext4 boot > 2 525MB 73.4GB 72.8GB primary lvm > > > I haven't found any messages anywhere wrt. sda1 disappearing. As you can deduce from the above, the partition in question is /boot. So the partition itself seems Ok, considering grub can read and boot the kernel there. I just cannot access it from the booted system (commenting out the /boot entry in fstab allows the system to boot successfully). 
> > The system in question is currently running RHEL 6.3, originally it was installed as 6.x and upgraded over time. > > Has anyone seen something similar, or better yet, seen and fixed it? > > > -- > Janne Blomqvist > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list -- Janne Blomqvist From b.j.smith at ieee.org Mon Sep 17 14:56:44 2012 From: b.j.smith at ieee.org (Bryan J Smith) Date: Mon, 17 Sep 2012 10:56:44 -0400 Subject: [rhelv6-list] Partition disappeared? In-Reply-To: <5056DB70.9090400@aalto.fi> References: <8B33A3EEE6C26242830E332A4A7A532E6ACE6405@EXMDB05.org.aalto.fi> <6F56410FBED1FC41BCA804E16F594B0B1D75A795@chvpkw8xmbx05.chvpk.chevrontexaco.net> <5056DB70.9090400@aalto.fi> Message-ID: On Mon, Sep 17, 2012 at 4:12 AM, Janne Blomqvist wrote: > I'm saying that /dev/sda1 doesn't exist. (In fstab it's mounted by UUID as > RHEL6 sets it up out-of-the-box, but it doesn't find the partition that way > either). Are there any references to the partition under the tree /dev/disk? I'm curious here why the "Model" is listed as ... "HP LOGICAL VOLUME" I'm curious if there is some sort of vendor driver/modification in the initrd, or some other udev, dm-mpio or other filtering going on (also possibly in the initrd). -- Bryan J Smith - Professional, Technical Annoyance http://www.linkedin.com/in/bjsmith From Janne.Blomqvist at aalto.fi Tue Sep 18 07:12:06 2012 From: Janne.Blomqvist at aalto.fi (Janne Blomqvist) Date: Tue, 18 Sep 2012 10:12:06 +0300 Subject: [rhelv6-list] Partition disappeared? In-Reply-To: References: <8B33A3EEE6C26242830E332A4A7A532E6ACE6405@EXMDB05.org.aalto.fi> <6F56410FBED1FC41BCA804E16F594B0B1D75A795@chvpkw8xmbx05.chvpk.chevrontexaco.net> <5056DB70.9090400@aalto.fi> Message-ID: <50581EC6.8090806@aalto.fi> On 2012-09-17T17:56:44 EEST, Bryan J Smith wrote: > On Mon, Sep 17, 2012 at 4:12 AM, Janne Blomqvist > wrote: >> I'm saying that /dev/sda1 doesn't exist. (In fstab it's mounted by UUID as >> RHEL6 sets it up out-of-the-box, but it doesn't find the partition that way >> either). > > Are there any references to the partition under the tree /dev/disk? Under /dev/disk/by-uuid there is f430afb7-6da6-45c6-a15f-95d010fed1f9 -> ../../sda1 but that is a dangling symlink. For comparison, links to /dev/sda2 are found at dev/disk/by-id/scsi-3600508b1001ca26eb450917ecf4f6567-part2 -> ../../sda2 /dev/disk/by-id/wwn-0x600508b1001ca26eb450917ecf4f6567-part2 -> ../../sda2 /dev/disk/by-path/pci-0000:0c:00.0-scsi-0:0:0:1-part2 -> ../../sda2 (There is no link to sda2 under by-uuid/, maybe because sda2 holds an LVM volume.) > I'm curious here why the "Model" is listed as ... > "HP LOGICAL VOLUME" Hmm, what can I say.. The controller is a HP Smart Array P410i, a "standard" raid controller that HP ships on a lot of their servers. It's controlled by the "hpsa" driver, in older kernel versions the "cciss" driver. The controller itself has some rudimentary volume management functionality, maybe that's why it calls itself "HP LOGICAL VOLUME". > I'm curious if there is some sort of vendor driver/modification in the > initrd, The HP Proliant Support Pack (PSP) is installed, which contains some updated drivers (yes, including hpsa), anything particular to look out for? 
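A minimal set of checks that might narrow this down (a sketch only, assuming the disk really is /dev/sda and that multipath is active on this host; none of this is a confirmed fix): multipath -ll and dmsetup ls --tree show whether device-mapper has claimed the whole disk, which is one common reason for a partition node to vanish after the initial kernel scan, and partprobe asks the kernel to re-read the partition table so udev can recreate the node if it was merely dropped:

# multipath -ll
# dmsetup ls --tree
# partprobe /dev/sda
# ls -l /dev/sda* /dev/disk/by-uuid/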
(That being said, since I cannot access /boot it's difficult to say whether initramfs has been modified..) > or some other udev, dm-mpio or other filtering going on (also > possibly in the initrd). Yes, we use dm-multipath for some FC luns, but the blacklisting there should only affect the creation of the dm-mp devices, no? -- Janne Blomqvist From Werner.Maes at icts.kuleuven.be Tue Sep 18 07:32:18 2012 From: Werner.Maes at icts.kuleuven.be (Werner Maes) Date: Tue, 18 Sep 2012 07:32:18 +0000 Subject: [rhelv6-list] Apache bug PDF's not reverse proxied / disk cached properly, and not accessible Message-ID: <563A79C225D450499476DB8FBAE41680109D11AE@ICTS-S-MBX3.luna.kuleuven.be> Hello Will this fix (available in httpd-2.2.23, release a week ago) be backported in a later version of httpd voor RHEL6.3? https://issues.apache.org/bugzilla/show_bug.cgi?id=49113 Kind regards Werner Maes From riehecky at fnal.gov Tue Sep 18 13:10:47 2012 From: riehecky at fnal.gov (Pat Riehecky) Date: Tue, 18 Sep 2012 08:10:47 -0500 Subject: [rhelv6-list] Apache bug PDF's not reverse proxied / disk cached properly, and not accessible In-Reply-To: <563A79C225D450499476DB8FBAE41680109D11AE@ICTS-S-MBX3.luna.kuleuven.be> References: <563A79C225D450499476DB8FBAE41680109D11AE@ICTS-S-MBX3.luna.kuleuven.be> Message-ID: <505872D7.8010306@fnal.gov> On 09/18/2012 02:32 AM, Werner Maes wrote: > Hello > > Will this fix (available in httpd-2.2.23, release a week ago) be backported in a later version of httpd voor RHEL6.3? > > https://issues.apache.org/bugzilla/show_bug.cgi?id=49113 > > Kind regards > > Werner Maes > > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list Is there a bugzilla for this in https://bugzilla.redhat.com/ ? Once that exists, if you've got a support contract, I'd recommend contacting support to have them try and get the bugfix moving. Pat -- Pat Riehecky From clarkpa at ornl.gov Tue Sep 18 18:32:47 2012 From: clarkpa at ornl.gov (Clark, Patricia A.) Date: Tue, 18 Sep 2012 14:32:47 -0400 Subject: [rhelv6-list] Partition disappeared? In-Reply-To: Message-ID: >Date: Tue, 18 Sep 2012 10:12:06 +0300 >From: Janne Blomqvist >To: "Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list" > >Subject: Re: [rhelv6-list] Partition disappeared? >Message-ID: <50581EC6.8090806 at aalto.fi> >Content-Type: text/plain; charset="UTF-8"; format=flowed > >On 2012-09-17T17:56:44 EEST, Bryan J Smith wrote: >> On Mon, Sep 17, 2012 at 4:12 AM, Janne Blomqvist >> wrote: >>> I'm saying that /dev/sda1 doesn't exist. (In fstab it's mounted by >>>UUID as >>> RHEL6 sets it up out-of-the-box, but it doesn't find the partition >>>that way >>> either). >> >> Are there any references to the partition under the tree /dev/disk? > >Under /dev/disk/by-uuid there is > >f430afb7-6da6-45c6-a15f-95d010fed1f9 -> ../../sda1 > >but that is a dangling symlink. For comparison, links to /dev/sda2 are >found at > >dev/disk/by-id/scsi-3600508b1001ca26eb450917ecf4f6567-part2 -> >../../sda2 >/dev/disk/by-id/wwn-0x600508b1001ca26eb450917ecf4f6567-part2 -> >../../sda2 >/dev/disk/by-path/pci-0000:0c:00.0-scsi-0:0:0:1-part2 -> ../../sda2 > >(There is no link to sda2 under by-uuid/, maybe because sda2 holds an >LVM volume.) > >> I'm curious here why the "Model" is listed as ... >> "HP LOGICAL VOLUME" > >Hmm, what can I say.. The controller is a HP Smart Array P410i, a >"standard" raid controller that HP ships on a lot of their servers. 
>It's controlled by the "hpsa" driver, in older kernel versions the >"cciss" driver. The controller itself has some rudimentary volume >management functionality, maybe that's why it calls itself "HP LOGICAL >VOLUME". > >> I'm curious if there is some sort of vendor driver/modification in the >> initrd, > >The HP Proliant Support Pack (PSP) is installed, which contains some >updated drivers (yes, including hpsa), anything particular to look out >for? > >(That being said, since I cannot access /boot it's difficult to say >whether initramfs has been modified..) > >> or some other udev, dm-mpio or other filtering going on (also >> possibly in the initrd). > >Yes, we use dm-multipath for some FC luns, but the blacklisting there >should only affect the creation of the dm-mp devices, no? > > >-- >Janne Blomqvist >>>>>>>>>>>> I have an HP server with the same controller and parted produces the same information, so I don't believe there is an issue there. I've not lost my partition setup and I have 2 different raid devices presented by the HP controller. You snipped your dmesg output and did not include the remaining lines beginning with sd and indicating the attaching of the SCSI disk and the partitions that it identified. Both sda1 and sda2 should be there. Beyond that, dracut should be scanning and identifying the LVMs. If you don't have /dev/sda /dev/sda1 and /dev/sda2, there may be something from hpsa that you need to take a look at. It's possible that something is overwriting the partition table or there is a bad spot on the disk where the partition table resides. Also, /var/log/messages may have something that dmesg doesn't. Patti Clark Sr Linux System Administrator Research and Development Systems Support Oak Ridge National Laboratory From Werner.Maes at icts.kuleuven.be Wed Sep 19 07:32:24 2012 From: Werner.Maes at icts.kuleuven.be (Werner Maes) Date: Wed, 19 Sep 2012 07:32:24 +0000 Subject: [rhelv6-list] Apache bug PDF's not reverse proxied / disk cached properly, and not accessible In-Reply-To: <505872D7.8010306@fnal.gov> References: <563A79C225D450499476DB8FBAE41680109D11AE@ICTS-S-MBX3.luna.kuleuven.be> <505872D7.8010306@fnal.gov> Message-ID: <563A79C225D450499476DB8FBAE41680109D29A8@ICTS-S-MBX3.luna.kuleuven.be> -----Oorspronkelijk bericht----- Van: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] Namens Pat Riehecky Verzonden: dinsdag 18 september 2012 3:11 Aan: rhelv6-list at redhat.com Onderwerp: Re: [rhelv6-list] Apache bug PDF's not reverse proxied / disk cached properly, and not accessible On 09/18/2012 02:32 AM, Werner Maes wrote: > Hello > > Will this fix (available in httpd-2.2.23, release a week ago) be backported in a later version of httpd voor RHEL6.3? > > https://issues.apache.org/bugzilla/show_bug.cgi?id=49113 > > Kind regards > > Werner Maes > > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list Is there a bugzilla for this in https://bugzilla.redhat.com/ ? Once that exists, if you've got a support contract, I'd recommend contacting support to have them try and get the bugfix moving. Pat I've opened a support case. Do I still need to open a bug report? 
Werner From Werner.Maes at icts.kuleuven.be Wed Sep 19 07:37:55 2012 From: Werner.Maes at icts.kuleuven.be (Werner Maes) Date: Wed, 19 Sep 2012 07:37:55 +0000 Subject: [rhelv6-list] Apache bug PDF's not reverse proxied / disk cached properly, and not accessible In-Reply-To: <505872D7.8010306@fnal.gov> References: <563A79C225D450499476DB8FBAE41680109D11AE@ICTS-S-MBX3.luna.kuleuven.be> <505872D7.8010306@fnal.gov> Message-ID: <563A79C225D450499476DB8FBAE41680109D29CA@ICTS-S-MBX3.luna.kuleuven.be> Redhat support answers: "The bugzilla BZ#822587 has been raised already against this issue. Since this is internal you cant view this bugzilla BZ#822587 . Currently our engineering team is working on this and will let you know for any major updates on this bugzilla." -----Oorspronkelijk bericht----- Van: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] Namens Pat Riehecky Verzonden: dinsdag 18 september 2012 3:11 Aan: rhelv6-list at redhat.com Onderwerp: Re: [rhelv6-list] Apache bug PDF's not reverse proxied / disk cached properly, and not accessible On 09/18/2012 02:32 AM, Werner Maes wrote: > Hello > > Will this fix (available in httpd-2.2.23, release a week ago) be backported in a later version of httpd voor RHEL6.3? > > https://issues.apache.org/bugzilla/show_bug.cgi?id=49113 > > Kind regards > > Werner Maes > > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list Is there a bugzilla for this in https://bugzilla.redhat.com/ ? Once that exists, if you've got a support contract, I'd recommend contacting support to have them try and get the bugfix moving. Pat -- Pat Riehecky _______________________________________________ rhelv6-list mailing list rhelv6-list at redhat.com https://www.redhat.com/mailman/listinfo/rhelv6-list From rhelv6-list at redhat.com Fri Sep 21 18:58:56 2012 From: rhelv6-list at redhat.com (Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list) Date: Fri, 21 Sep 2012 14:58:56 -0400 Subject: [rhelv6-list] Red Hat Announces Beta Availability of the Next Minor Release of Red Hat Enterprise Linux 5 Message-ID: <505CB8F0.8020509@redhat.com> Red Hat is pleased to announce the availability of the beta of the next minor release of Red Hat Enterprise Linux 5, Red Hat Enterprise Linux 5.9. The general availability of Red Hat Enterprise Linux 5.9 will mark the end of Production Phase 1 of the Red Hat Enterprise Linux 5 product Life Cycle. The Red Hat Enterprise Linux 5 platform's 10-year Life Cycle enables customers with existing hardware and software investments to confidently and securely remain on Red Hat Enterprise Linux 5 until they are ready to migrate to a newer version of Red Hat Enterprise Linux -- or until the conclusion of the product's Extended Life Phase[1]. Included in this minor release are a broad set of updates to existing features, including new functionality in the areas of virtualization and networking; new support for standards, certifications and security; and enhancements to certificate-based Red Hat Subscription Management. Also included are additions to capabilities for developers and support for some of the latest platforms from our hardware partners. With new drivers for Microsoft Hyper-V that have recently been accepted by the upstream Linux community, customers can now run Red Hat Enterprise Linux 5.9 as a virtual guest on Hyper-V with improved performance. 
Key functionality in the beta for Red Hat Enterprise Linux 5.9 includes: Hardware Enablement Red Hat Enterprise Linux 5.9 showcases Red Hat's strong relationships with industry-leading hardware vendors. This minor release contains support for the latest CPU, chip set, and device driver enhancements from leading hardware vendors. Security, Standards, and Certifications Security enhancements are fundamental to any Red Hat Enterprise Linux release. Red Hat Enterprise Linux 5.9 includes support for the latest U.S. government password policy requirements. This capability provides IT managers with tighter security controls and the ability to verify and check the robustness of any new password entered by a user. Red Hat Enterprise Linux 5.9 adds support for using FIPS (Federal Information Processing Standard) mode with dmraid root devices. FIPS mode now supports RAID device discovery, RAID set activation, and the creation, removal, rebuilding and displaying of properties. Developer Tools With the introduction of OpenJDK 7, customers can develop and test applications using the latest version of open source Java. Many new SystemTap improvements first introduced in Red Hat Enterprise Linux 6.3 have been added to Red Hat Enterprise Linux 5.9, including compile-server and client support for IPv6 networks, smaller SystemTap files, faster compiles, and compile server support for multiple concurrent connections. Applications Red Hat Enterprise Linux 5.9 includes a new rsyslog5 package that upgrades rsyslog to major version 5. The rsyslog5 package runs faster and is more reliable than existing rsyslog packages. Samba has been updated to version 3.6. New features include fully featured SMB2 support, a reworked print server, and security default improvements for all versions of Samba. Virtualization Customers can easily run Red Hat Enterprise Linux 5.9 as a guest on top of Microsoft Hyper-V, providing enhanced interoperability in a Windows environment. This enhances the usability of Red Hat Enterprise Linux 5 for guests in a heterogeneous, multi-vendor virtualized environment and provides improved flexibility and interoperability for our customers. Red Hat Enterprise Linux 5 now includes the Microsoft Hyper-V Linux drivers, which were recently accepted by the upstream Linux community, improving the overall performance of Red Hat Enterprise Linux 5 as a guest on Microsoft Hyper-V. Red Hat Subscription Management With Red Hat Enterprise Linux 5.9, customers by default will use Red Hat Subscription Management as an enhanced subscription management capability using X.509 certificates. This will allow customers to effectively manage Red Hat Enterprise Linux subscriptions locally and report on subscription distribution and utilization. A number of Red Hat Subscription Manager improvements make migration from Red Hat Network (RHN) Classic to certificate-based Subscription Management easier. In addition, the Subscription Manager user interface is now easier to use and navigate. To take full advantage of the latest features and hardware support available from the leading enterprise class platform, we encourage Red Hat Enterprise Linux 5 customers to consider upgrading to Red Hat Enterprise Linux 6. We greatly appreciate the support we receive from our partners and customers who work closely with us to develop and deliver the highest quality open source enterprise platform available today. 
Sincerely, The Red Hat Enterprise Linux Team [1] Details of the Red Hat Enterprise Linux Life Cycle are available here: https://access.redhat.com/support/policy/updates/errata/ To read the Red Hat news blog, please visit: http://www.redhat.com/about/news/archive/2012/9/red-hat-announces-beta-availability-for-next-minor-release-red-hat-enterprise-linux-5 To access and download an evaluation copy for Red Hat Enterprise Linux 5.9, please visit: https://access.redhat.com/downloads/ For access to the documentation for Red Hat Enterprise Linux 5.9 including the release notes, please visit: https://access.redhat.com/knowledge/docs/Red_Hat_Enterprise_Linux/ From akrherz at iastate.edu Tue Sep 25 19:27:54 2012 From: akrherz at iastate.edu (Daryl Herzmann) Date: Tue, 25 Sep 2012 14:27:54 -0500 Subject: [rhelv6-list] Nasty bug with writing to resyncing RAID-5 Array In-Reply-To: References: <1481619669.26160.1345140500532.JavaMail.root@email.gat.com> Message-ID: Well howdy, Lo and behold, today's kernel errata has this in the changelog: - [md] raid1, raid10: avoid deadlock during resync/recovery. (Dave Wysochanski) [845464 835613] - [md] raid5: Reintroduce locking in handle_stripe() to avoid racing (Jes Sorensen) [846836 828065] Looks like I can finally update! :) daryl On Mon, Sep 3, 2012 at 9:28 AM, Daryl Herzmann wrote: > On Thu, Aug 16, 2012 at 1:08 PM, David C. Miller > wrote: >> >> >> ----- Original Message ----- >>> From: "Daryl Herzmann" >>> To: "Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list" >>> Sent: Wednesday, August 15, 2012 7:32:15 AM >>> Subject: Re: [rhelv6-list] Nasty bug with writing to resyncing RAID-5 Array >>> >>> On Sun, Jun 24, 2012 at 12:48 PM, Stephen John Smoogen >>> wrote: >>> > On 23 June 2012 11:04, Daryl Herzmann wrote: >>> >> On Fri, Jun 22, 2012 at 4:03 PM, Stephen John Smoogen >>> >> wrote: >>> >>> On 22 June 2012 14:10, daryl herzmann >>> >>> wrote: >>> >>>> Howdy, >>> >>>> >>> >>>> The RHEL6.3 release notes have a curious entry: >>> >>>> >>> >>>> http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/6.3_Technical_Notes/kernel_issues.html >>> >>>> >>> >>>> kernel component >>> >>>> >>> >>>> Due to a race condition, in certain cases, writes to RAID4/5/6 >>> >>>> while the >>> >>>> array is reconstructing could hang the system >>> >>>> >>> >>>> Wow, I am reproducing it frequently here. Simply have a RAID-5 >>> >>>> software >>> >>>> array and do some write IO to it, eventually things start >>> >>>> hanging and the >>> >>>> power button needs to be pressed. >>> >>>> >>> >>>> Oh man. >>> >>> >>> >>> Well the race condition they are mentioning should only happen >>> >>> when >>> >>> the RAID array is reconstructing. This sounds like a different >>> >>> bug/problem. What kind of disks, type of RAID etc. >>> >> >>> >> Thanks for the response. I am not sure of the difference between >>> >> 'reconstructing' and 'resyncing' and/or 'syncing'. The >>> >> reproducing >>> >> case was quite easy for me. >>> >> >>> >> 1. Create a software raid5 >>> >> 2. Immediately then create a filesystem on this raid5, while init >>> >> sync underway >>> >> 3. IO to the RAID device eventually stops, even for the software >>> >> raid5 sync >>> > >>> > Ok reconstructing is where the initial RAID drives pair up with >>> > each >>> > other. Resyncing I believe is where a RAID which has been created >>> > is >>> > putting the data across its raid. Basic cat /proc/mdstat.. if there >>> > is >>> > a line ====> then you are reconstructing the disk array. 
In the >>> > example you give above, the disks would be reconstructing >>> > >>> > So the next thing to do is why you are able to trigger it >>> > constantly. >>> > That may be due to >>> > CPU Type: >>> > RAM Amount: >>> > Disk controllers: >>> > DIsk types (SATA, SAS, SCSI, PATA): >>> > RAID type: >>> > RAID layout (same controller, different controller, etc): >>> >>> I don't seem to have much issue reproducing, I just had another >>> machine do it this morning. Nehalem processor, 12 GB ram, Dell >>> PowerEdge T400, Perc 6i controller, software raid 5, Seagate 2 TB >>> Barracuda drives... >>> >>> Does anybody have the bugzilla ticket associated with this or perhaps >>> a knowledge base article on it? >>> >>> daryl >>> >> >> I would like to know too. I have not seen this issue yet but I do have some large RAID6 arrays. > > The private bugzilla tracking this is: > > https://bugzilla.redhat.com/show_bug.cgi?id=828065 > > It appears the hope is to resolve this for the RHEL6.4 release. > > daryl From gianluca.cecchi at gmail.com Wed Sep 26 10:25:40 2012 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Wed, 26 Sep 2012 12:25:40 +0200 Subject: [rhelv6-list] RH EL 5.8 and multipath with Netapp Message-ID: Hello, I'm going to configure multipath on a server (Dell M610) with RH EL 5.8 (kernel 2.6.18-308.8.2.el5). It will be connected to a Netapp storage array (FAS3240 with fw ONTAP 8.1P3). I found this closed bug (resoled in 2.6.18-238.el5) regarding ALUA problems in 5.5: 619361 In particular there is a reference to disabling ALUA at Linux side putting prio_callout to mpath_prio_ontap instead of mpath_prio_alua in multipath.conf. My /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.defaults (from device-mapper-multipath-0.4.7-48.el5_8.1) still contains this for NETAPP vendor: # device { # vendor "NETAPP" # product "LUN" # getuid_callout "/sbin/scsi_id -g -u -s /block/%n" # prio_callout "/sbin/mpath_prio_ontap /dev/%n" # features "1 queue_if_no_path" # hardware_handler "0" # path_grouping_policy group_by_prio # failback immediate # rr_weight uniform # rr_min_io 128 # path_checker directio # } Does this mean that keeping the defaults, even if ALUA is configured (and default in this version) on Netapp side, my server will not use it? Is only the doc file wrong or a too conservative default or anything else? Three months ago I ran some preliminary tests without particular problems and without putting anything in multipath.conf, but now I have some doubts about the final config... Anyone already configuring multipath on Netapp? Thanks in advance, Gianluca From gianluca.cecchi at gmail.com Wed Sep 26 13:10:46 2012 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Wed, 26 Sep 2012 15:10:46 +0200 Subject: [rhelv6-list] RH EL 5.8 and multipath with Netapp In-Reply-To: References: Message-ID: On Wed, Sep 26, 2012 at 12:25 PM, Gianluca Cecchi wrote: > Hello, > I'm going to configure multipath on a server (Dell M610) with RH EL > 5.8 (kernel 2.6.18-308.8.2.el5). > It will be connected to a Netapp storage array (FAS3240 with fw ONTAP 8.1P3). [snip] > > Anyone already configuring multipath on Netapp? > Thanks in advance, > > Gianluca Actually I then found the "Host Utilities Installation and setup 6.0" guide from NetApp with reference to multipath config for rh el 5.6 and up. 
So my starting config will be this one: device { vendor "NETAPP" product "LUN" getuid_callout "/sbin/scsi_id -g -u -s /block/%n" prio_callout "/sbin/mpath_prio_alua /dev/%n" features "1 queue_if_no_path" hardware_handler "1 alua" path_grouping_policy group_by_prio failback immediate rr_weight uniform rr_min_io 128 path_checker directio } } It could be interesting to propose a different default, perhaps? Gianluca From ad+lists at uni-x.org Wed Sep 26 13:17:56 2012 From: ad+lists at uni-x.org (Alexander Dalloz) Date: Wed, 26 Sep 2012 15:17:56 +0200 Subject: [rhelv6-list] RH EL 5.8 and multipath with Netapp In-Reply-To: References: Message-ID: <50630084.8030404@uni-x.org> Am 26.09.2012 15:10, schrieb Gianluca Cecchi: > On Wed, Sep 26, 2012 at 12:25 PM, Gianluca Cecchi wrote: >> Hello, >> I'm going to configure multipath on a server (Dell M610) with RH EL >> 5.8 (kernel 2.6.18-308.8.2.el5). >> It will be connected to a Netapp storage array (FAS3240 with fw ONTAP 8.1P3). > [snip] >> >> Anyone already configuring multipath on Netapp? >> Thanks in advance, >> >> Gianluca > > Actually I then found the "Host Utilities Installation and setup 6.0" > guide from NetApp with reference to multipath config for rh el 5.6 and > up. So my starting config will be this one: > > device { > vendor "NETAPP" > product "LUN" > getuid_callout "/sbin/scsi_id -g -u -s /block/%n" > prio_callout "/sbin/mpath_prio_alua /dev/%n" > features "1 queue_if_no_path" > hardware_handler "1 alua" > path_grouping_policy group_by_prio > failback immediate > rr_weight uniform > rr_min_io 128 > path_checker directio > } > } > > It could be interesting to propose a different default, perhaps? > > Gianluca I can tell you that ALUA works with NetApp on current RHEL 5 / 6. But you should check the NetApp Compatibility Matrix before using a specific setup. Else you may run in an unsupported mode or even into problems. Out of my head I am not sure whether the Host Utilities 6.0 are valid for RHEL 5. Regards Alexander From dsavage at peaknet.net Thu Sep 27 00:25:34 2012 From: dsavage at peaknet.net (Robert G. (Doc) Savage) Date: Wed, 26 Sep 2012 19:25:34 -0500 Subject: [rhelv6-list] /dev/md0 mismatch_cnt not 0 Message-ID: <1348705534.28597.231.camel@lion.protogeek.org> >From time to time I get this Cron Daemon warning message in root's mail fussing about a local 2.2TB RAID5 array: WARNING: mismatch_cnt is not 0 on /dev/md0 Whenever this appears I run: # echo check > /sys/block/md0/md/sync_action Followed about 24 hours later by: # cat /sys/block/md0/md/mismatch_cnt This is usually 0, so I'm happy -- until the next time the warning message appears. Two questions: 1. How can I prevent this condition from reappearing? 2. Is there a faster way to fix it? Disk I/O really bogs down while sync_action=check is running. --Doc Savage Fairview Heights, IL -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sandro.Roth at zurich-airport.com Thu Sep 27 07:05:35 2012 From: Sandro.Roth at zurich-airport.com (Roth, Sandro) Date: Thu, 27 Sep 2012 07:05:35 +0000 Subject: [rhelv6-list] MD devices lost after boot Message-ID: Hi experts I wasn't sure where to post this so I'm sending it to this list. We have a setup which uses lvm over md over multipath devices. 
(at least that's the plan) According to this article it is supported in RHEL6 (it wasn't in RHEL5) https://access.redhat.com/knowledge/solutions/48634 I created my md device as follows # mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/mapper/mpatha /dev/mapper/mpathb /etc/mdadm.conf has the following content DEVICE /dev/mapper/mpath* ARRAY /dev/md/0 metadata=1.2 UUID=52bd4011:f61badd6:0e63e7bb:2448596b name=spch9003.zrh.local:0 Then some lvm stuff on /dev/md0. Everything works fine. But after a reboot the md device won't get assembled automatically. /proc/mdstat shows Personalities : unused devices: And therefore nothing gets mounted either, obviously. I have to # mdadm -As mdadm: /dev/md/0 has been started with 2 drives. # vgchange -ay datavg1 1 logical volume(s) in volume group "datavg1" now active # mount /data I've done my part of googleing and a lot of people suggest creating a new initrd when everything is running. So I went... # dracut -f # init 6 No change, still won't get assembled. Any ideas? I'm sure I'm just missing a configuration step.. # uname -r; rpm -q mdadm 2.6.32-279.5.1.el6.x86_64 mdadm-3.2.3-9.el6.x86_64 Any help would be appreciated. Regards Sandro Roth Systems Engineer IT-Operations Flughafen Z?rich AG Postfach CH-8058 Z?rich-Flughafen www.flughafen-zuerich.ch Tel. +41 (0)43 816 10 58 Mobile +41 (0)76 356 71 19 Fax +41 (0)43 816 76 90 This email message and any attachments are confidential and may be privileged. If you are not the intended recipient, please notify us immediately and destroy the original transmittal. You are hereby notified that any review, copying or distribution of it is strictly prohibited. Thank you for your cooperation. Header information contained in E-mails to and from the company are monitored for operational reasons in accordance with swiss data protection act. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Andreas.Reschke at behrgroup.com Thu Sep 27 07:21:24 2012 From: Andreas.Reschke at behrgroup.com (Andreas Reschke) Date: Thu, 27 Sep 2012 09:21:24 +0200 Subject: [rhelv6-list] Antwort: MD devices lost after boot In-Reply-To: References: Message-ID: rhelv6-list-bounces at redhat.com schrieb am 27.09.2012 09:05:35: > "Roth, Sandro" > Gesendet von: rhelv6-list-bounces at redhat.com > > 27.09.2012 09:09 > > Bitte antworten an > "Red Hat Enterprise Linux 6 \(Santiago\) discussion mailing-list" > > > An > > "rhelv6-list at redhat.com" > > Kopie > > Thema > > [rhelv6-list] MD devices lost after boot > > Hi experts > > I wasn?t sure where to post this so I?m sending it to this list. > > We have a setup which uses lvm over md over multipath devices. (at > least that?s the plan) > According to this article it is supported in RHEL6 (it wasn?t in RHEL5) > https://access.redhat.com/knowledge/solutions/48634 > > I created my md device as follows > # mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/ > mapper/mpatha /dev/mapper/mpathb > > /etc/mdadm.conf has the following content > DEVICE /dev/mapper/mpath* > ARRAY /dev/md/0 metadata=1.2 UUID=52bd4011:f61badd6:0e63e7bb: > 2448596b name=spch9003.zrh.local:0 > > Then some lvm stuff on /dev/md0. Everything works fine. > But after a reboot the md device won?t get assembled automatically. > > /proc/mdstat shows > Personalities : > unused devices: > > And therefore nothing gets mounted either, obviously. > I have to > > # mdadm -As > mdadm: /dev/md/0 has been started with 2 drives. 
> # vgchange -ay datavg1 > 1 logical volume(s) in volume group "datavg1" now active > # mount /data > > I?ve done my part of googleing and a lot of people suggest creating > a new initrd when everything is running. > So I went? > > # dracut ?f > # init 6 > > No change, still won?t get assembled. > Any ideas? I?m sure I?m just missing a configuration step.. > > # uname -r; rpm -q mdadm > 2.6.32-279.5.1.el6.x86_64 > mdadm-3.2.3-9.el6.x86_64 > > > Any help would be appreciated. > > > Regards > > > Sandro Roth > > Systems Engineer > > IT-Operations > > Flughafen Z?rich AG > Postfach > CH-8058 Z?rich-Flughafen > > www.flughafen-zuerich.ch > > > > Tel. > > +41 (0)43 816 10 58 > > Mobile > > +41 (0)76 356 71 19 > > Fax > > +41 (0)43 816 76 90 > > > > > This email message and any attachments are confidential and may be > privileged. If you are not the intended recipient, please notify us > immediately and destroy the original transmittal. You are hereby > notified that any review, copying or distribution of it is strictly > prohibited. Thank you for your cooperation. > > Header information contained in E-mails to and from the company are > monitored for operational reasons in accordance with swiss data > protection act. > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list Hi Sandro, I was in the same situation, so I create this script: [root at bgstorals01 ~]# cat /etc/init.d/raid-setup #!/bin/bash # # chkconfig: 2345 07 86 # description: Start the raid0 with mdadm # processname: - # pidfile: - # config: /etc/mdadm.conf # source function library . /etc/init.d/functions RETVAL=0 start() { action $"Starting raid arrays via mdadm: " /sbin/mdadm.static --assemble --scan action $"Setting up Logical Volume Management:" /sbin/lvm.static vgchange -a y --ignorelockingfailure for fs in `grep noauto /etc/fstab | awk '{ print $2; }'` #for fs in `grep noauto /etc/fstab | cut -d\ -f1` do action $"Mount filesystem $fs:" /bin/mount $fs done touch /var/lock/subsys/raid-start } stop() { rm -f /var/lock/subsys/raid-start } case "$1" in start) start ;; stop) stop ;; restart|reload) stop start ;; condrestart) if [ -f /var/lock/subsys/raid-start ]; then stop start fi ;; status) status raid-start RETVAL=$? ;; *) echo $"Usage: $0 {start|stop|restart|condrestart|status}" exit 1 esac exit $RETVAL [root at bgstorals01 ~]# and take this script in the adequate runlevel BEFORE S15mdmonitor and S15mdmpd is starting. [root at bgstorals01 ~]# ls -l /etc/rc3.d/ | grep raid lrwxrwxrwx 1 root root 20 May 30 07:39 S07raid-setup -> ../init.d/raid-setup [root at bgstorals01 ~]# ls -l /etc/rc5.d/ | grep raid lrwxrwxrwx 1 root root 20 May 30 07:39 S07raid-setup -> ../init.d/raid-setup [root at bgstorals01 ~]# Mit freundlichen Gr??en Andreas Reschke ________________________________________________________________ Unix/Linux-Administration Andreas.Reschke at behrgroup.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sandro.Roth at zurich-airport.com Thu Sep 27 07:55:35 2012 From: Sandro.Roth at zurich-airport.com (Roth, Sandro) Date: Thu, 27 Sep 2012 07:55:35 +0000 Subject: [rhelv6-list] Antwort: MD devices lost after boot In-Reply-To: References: Message-ID: Hi Andreas Thank you for your input. I?ve thought about this before, creating a simple init script would definitely solve this problem. My opinion is that this should be taken care of ?out of the box?, especially for paying customers. 
Or at least it should be mentioned in the documentation! I have a support case open for this and will be waiting for an official answer. Just thought I?d ask the lists maybe someone has already gone through this.. Anyway, I?ll use your script in the meantime, so thanks! Cheers Sandro From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Andreas Reschke Sent: Donnerstag, 27. September 2012 09:21 To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list Subject: [rhelv6-list] Antwort: MD devices lost after boot rhelv6-list-bounces at redhat.com schrieb am 27.09.2012 09:05:35: > "Roth, Sandro" > > Gesendet von: rhelv6-list-bounces at redhat.com > > 27.09.2012 09:09 > > Bitte antworten an > "Red Hat Enterprise Linux 6 \(Santiago\) discussion mailing-list" > > > > An > > "rhelv6-list at redhat.com" > > > Kopie > > Thema > > [rhelv6-list] MD devices lost after boot > > Hi experts > > I wasn?t sure where to post this so I?m sending it to this list. > > We have a setup which uses lvm over md over multipath devices. (at > least that?s the plan) > According to this article it is supported in RHEL6 (it wasn?t in RHEL5) > https://access.redhat.com/knowledge/solutions/48634 > > I created my md device as follows > # mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/ > mapper/mpatha /dev/mapper/mpathb > > /etc/mdadm.conf has the following content > DEVICE /dev/mapper/mpath* > ARRAY /dev/md/0 metadata=1.2 UUID=52bd4011:f61badd6:0e63e7bb: > 2448596b name=spch9003.zrh.local:0 > > Then some lvm stuff on /dev/md0. Everything works fine. > But after a reboot the md device won?t get assembled automatically. > > /proc/mdstat shows > Personalities : > unused devices: > > And therefore nothing gets mounted either, obviously. > I have to > > # mdadm -As > mdadm: /dev/md/0 has been started with 2 drives. > # vgchange -ay datavg1 > 1 logical volume(s) in volume group "datavg1" now active > # mount /data > > I?ve done my part of googleing and a lot of people suggest creating > a new initrd when everything is running. > So I went? > > # dracut ?f > # init 6 > > No change, still won?t get assembled. > Any ideas? I?m sure I?m just missing a configuration step.. > > # uname -r; rpm -q mdadm > 2.6.32-279.5.1.el6.x86_64 > mdadm-3.2.3-9.el6.x86_64 > > > Any help would be appreciated. > > > Regards > > > Sandro Roth > > Systems Engineer > > IT-Operations > > Flughafen Z?rich AG > Postfach > CH-8058 Z?rich-Flughafen > > www.flughafen-zuerich.ch > > > > Tel. > > +41 (0)43 816 10 58 > > Mobile > > +41 (0)76 356 71 19 > > Fax > > +41 (0)43 816 76 90 > > > > > This email message and any attachments are confidential and may be > privileged. If you are not the intended recipient, please notify us > immediately and destroy the original transmittal. You are hereby > notified that any review, copying or distribution of it is strictly > prohibited. Thank you for your cooperation. > > Header information contained in E-mails to and from the company are > monitored for operational reasons in accordance with swiss data > protection act. > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list Hi Sandro, I was in the same situation, so I create this script: [root at bgstorals01 ~]# cat /etc/init.d/raid-setup #!/bin/bash # # chkconfig: 2345 07 86 # description: Start the raid0 with mdadm # processname: - # pidfile: - # config: /etc/mdadm.conf # source function library . 
/etc/init.d/functions RETVAL=0 start() { action $"Starting raid arrays via mdadm: " /sbin/mdadm.static --assemble --scan action $"Setting up Logical Volume Management:" /sbin/lvm.static vgchange -a y --ignorelockingfailure for fs in `grep noauto /etc/fstab | awk '{ print $2; }'` #for fs in `grep noauto /etc/fstab | cut -d\ -f1` do action $"Mount filesystem $fs:" /bin/mount $fs done touch /var/lock/subsys/raid-start } stop() { rm -f /var/lock/subsys/raid-start } case "$1" in start) start ;; stop) stop ;; restart|reload) stop start ;; condrestart) if [ -f /var/lock/subsys/raid-start ]; then stop start fi ;; status) status raid-start RETVAL=$? ;; *) echo $"Usage: $0 {start|stop|restart|condrestart|status}" exit 1 esac exit $RETVAL [root at bgstorals01 ~]# and take this script in the adequate runlevel BEFORE S15mdmonitor and S15mdmpd is starting. [root at bgstorals01 ~]# ls -l /etc/rc3.d/ | grep raid lrwxrwxrwx 1 root root 20 May 30 07:39 S07raid-setup -> ../init.d/raid-setup [root at bgstorals01 ~]# ls -l /etc/rc5.d/ | grep raid lrwxrwxrwx 1 root root 20 May 30 07:39 S07raid-setup -> ../init.d/raid-setup [root at bgstorals01 ~]# Mit freundlichen Gr??en Andreas Reschke ________________________________________________________________ Unix/Linux-Administration Andreas.Reschke at behrgroup.com This email message and any attachments are confidential and may be privileged. If you are not the intended recipient, please notify us immediately and destroy the original transmittal. You are hereby notified that any review, copying or distribution of it is strictly prohibited. Thank you for your cooperation. Header information contained in E-mails to and from the company are monitored for operational reasons in accordance with swiss data protection act. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Andreas.Reschke at behrgroup.com Thu Sep 27 08:03:49 2012 From: Andreas.Reschke at behrgroup.com (Andreas Reschke) Date: Thu, 27 Sep 2012 10:03:49 +0200 Subject: [rhelv6-list] Antwort: Re: Antwort: MD devices lost after boot In-Reply-To: References: Message-ID: rhelv6-list-bounces at redhat.com schrieb am 27.09.2012 09:55:35: > "Roth, Sandro" > Gesendet von: rhelv6-list-bounces at redhat.com > > 27.09.2012 09:57 > > Bitte antworten an > "Red Hat Enterprise Linux 6 \(Santiago\) discussion mailing-list" > > > An > > "Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list" > > > Kopie > > Thema > > Re: [rhelv6-list] Antwort: MD devices lost after boot > > Hi Andreas > > Thank you for your input. > I?ve thought about this before, creating a simple init script would > definitely solve this problem. > > My opinion is that this should be taken care of ?out of the box?, > especially for paying customers. > Or at least it should be mentioned in the documentation! > I have a support case open for this and will be waiting for an > official answer. > Just thought I?d ask the lists maybe someone has already gone through this.. > > Anyway, I?ll use your script in the meantime, so thanks! > > > Cheers > Sandro > > From: rhelv6-list-bounces at redhat.com [ mailto:rhelv6-list-bounces at redhat.com] > On Behalf Of Andreas Reschke > Sent: Donnerstag, 27. 
September 2012 09:21 > To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list > Subject: [rhelv6-list] Antwort: MD devices lost after boot > > rhelv6-list-bounces at redhat.com schrieb am 27.09.2012 09:05:35: > > > "Roth, Sandro" > > Gesendet von: rhelv6-list-bounces at redhat.com > > > > 27.09.2012 09:09 > > > > Bitte antworten an > > "Red Hat Enterprise Linux 6 \(Santiago\) discussion mailing-list" > > > > > > An > > > > "rhelv6-list at redhat.com" > > > > Kopie > > > > Thema > > > > [rhelv6-list] MD devices lost after boot > > > > Hi experts > > > > I wasn?t sure where to post this so I?m sending it to this list. > > > > We have a setup which uses lvm over md over multipath devices. (at > > least that?s the plan) > > According to this article it is supported in RHEL6 (it wasn?t in RHEL5) > > https://access.redhat.com/knowledge/solutions/48634 > > > > I created my md device as follows > > # mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/ > > mapper/mpatha /dev/mapper/mpathb > > > > /etc/mdadm.conf has the following content > > DEVICE /dev/mapper/mpath* > > ARRAY /dev/md/0 metadata=1.2 UUID=52bd4011:f61badd6:0e63e7bb: > > 2448596b name=spch9003.zrh.local:0 > > > > Then some lvm stuff on /dev/md0. Everything works fine. > > But after a reboot the md device won?t get assembled automatically. > > > > /proc/mdstat shows > > Personalities : > > unused devices: > > > > And therefore nothing gets mounted either, obviously. > > I have to > > > > # mdadm -As > > mdadm: /dev/md/0 has been started with 2 drives. > > # vgchange -ay datavg1 > > 1 logical volume(s) in volume group "datavg1" now active > > # mount /data > > > > I?ve done my part of googleing and a lot of people suggest creating > > a new initrd when everything is running. > > So I went? > > > > # dracut ?f > > # init 6 > > > > No change, still won?t get assembled. > > Any ideas? I?m sure I?m just missing a configuration step.. > > > > # uname -r; rpm -q mdadm > > 2.6.32-279.5.1.el6.x86_64 > > mdadm-3.2.3-9.el6.x86_64 > > > > > > Any help would be appreciated. > > > > > > Regards > > > > > > Sandro Roth > > > > Systems Engineer > > > > IT-Operations > > > > Flughafen Z?rich AG > > Postfach > > CH-8058 Z?rich-Flughafen > > > > www.flughafen-zuerich.ch > > > > > > > > Tel. > > > > +41 (0)43 816 10 58 > > > > Mobile > > > > +41 (0)76 356 71 19 > > > > Fax > > > > +41 (0)43 816 76 90 > > > > > > > > > > This email message and any attachments are confidential and may be > > privileged. If you are not the intended recipient, please notify us > > immediately and destroy the original transmittal. You are hereby > > notified that any review, copying or distribution of it is strictly > > prohibited. Thank you for your cooperation. > > > > Header information contained in E-mails to and from the company are > > monitored for operational reasons in accordance with swiss data > > protection act. > > > > _______________________________________________ > > rhelv6-list mailing list > > rhelv6-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rhelv6-list > > Hi Sandro, > I was in the same situation, so I create this script: > > [root at bgstorals01 ~]# cat /etc/init.d/raid-setup > #!/bin/bash > # > # chkconfig: 2345 07 86 > # description: Start the raid0 with mdadm > # processname: - > # pidfile: - > # config: /etc/mdadm.conf > > # source function library > . 
/etc/init.d/functions > > RETVAL=0 > > start() { > action $"Starting raid arrays via mdadm: " /sbin/ > mdadm.static --assemble --scan > action $"Setting up Logical Volume Management:" /sbin/ > lvm.static vgchange -a y --ignorelockingfailure > for fs in `grep noauto /etc/fstab | awk '{ print $2; }'` > #for fs in `grep noauto /etc/fstab | cut -d\ -f1` > do > action $"Mount filesystem $fs:" /bin/mount $fs > done > touch /var/lock/subsys/raid-start > } > > stop() { > rm -f /var/lock/subsys/raid-start > } > > case "$1" in > start) > start > ;; > stop) > stop > ;; > restart|reload) > stop > start > ;; > condrestart) > if [ -f /var/lock/subsys/raid-start ]; then > stop > start > fi > ;; > status) > status raid-start > RETVAL=$? > ;; > *) > echo $"Usage: $0 {start|stop|restart|condrestart|status}" > exit 1 > esac > > exit $RETVAL > > [root at bgstorals01 ~]# > > and take this script in the adequate runlevel BEFORE S15mdmonitor > and S15mdmpd is starting. > > [root at bgstorals01 ~]# ls -l /etc/rc3.d/ | grep raid > lrwxrwxrwx 1 root root 20 May 30 07:39 S07raid-setup -> ../init.d/raid-setup > [root at bgstorals01 ~]# ls -l /etc/rc5.d/ | grep raid > lrwxrwxrwx 1 root root 20 May 30 07:39 S07raid-setup -> ../init.d/raid-setup > [root at bgstorals01 ~]# > > > Mit freundlichen Grüßen > Andreas Reschke > ________________________________________________________________ > > Unix/Linux-Administration > Andreas.Reschke at behrgroup.com > > This email message and any attachments are confidential and may be > privileged. If you are not the intended recipient, please notify us > immediately and destroy the original transmittal. You are hereby > notified that any review, copying or distribution of it is strictly > prohibited. Thank you for your cooperation. > > Header information contained in E-mails to and from the company are > monitored for operational reasons in accordance with swiss data > protection act. > > _______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list Hi Sandro, I wish there were an "out of the box" resolution. I don't have this problem on my SuSE-system. Please send me a notice when you have an answer from your support case. Mit freundlichen Grüßen Andreas Reschke ________________________________________________________________ Unix/Linux-Administration Andreas.Reschke at behrgroup.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Thu Sep 27 08:12:04 2012 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Thu, 27 Sep 2012 10:12:04 +0200 Subject: [rhelv6-list] Antwort: Re: Antwort: MD devices lost after boot In-Reply-To: References: Message-ID: On Thu, Sep 27, 2012 at 10:03 AM, Andreas Reschke wrote: >> > We have a setup which uses lvm over md over multipath devices. (at >> > least that's the plan) Just to understand, apart from /boot (referred in tech note), what would be the advantage of using this kind of config instead of configuring mpath0 as a PV1 mpath1 as PV2 and then create VG with PV1 and PV2 and all LVs as mirrored? Gianluca
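[Editor's note: for readers weighing Gianluca's question, here is a minimal sketch of the LVM-only layout he describes. It reuses the multipath device and VG names from this thread; the LV name and size are placeholders, and none of this was tested by the list participants.

# both multipath maps become PVs in a single volume group
pvcreate /dev/mapper/mpatha /dev/mapper/mpathb
vgcreate datavg1 /dev/mapper/mpatha /dev/mapper/mpathb

# create one or the other:
# classic LVM mirror segment type; needs a mirror log, which is Sandro's objection in the next message
lvcreate -m 1 --mirrorlog disk -L 100G -n datalv datavg1
# MD-backed "raid1" segment type, available from RHEL 6.3 and discussed a few messages below; no separate log LV
lvcreate --type raid1 -m 1 -L 100G -n datalv datavg1

# show which segment type and sub-volumes the LV actually got
lvs -a -o lv_name,segtype,devices datavg1
]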
From Sandro.Roth at zurich-airport.com Thu Sep 27 09:11:54 2012 From: Sandro.Roth at zurich-airport.com (Roth, Sandro) Date: Thu, 27 Sep 2012 09:11:54 +0000 Subject: [rhelv6-list] Antwort: Re: Antwort: MD devices lost after boot In-Reply-To: References: Message-ID: IMHO mdadm is much more sophisticated than lvm mirroring. In my case here, we have two storage subsystems, one in each data center. An LVM mirror needs to put a log somewhere, usually on a third disk, we just don't have that! And placing the log in memory makes you rebuild it after every reboot. -----Original Message----- From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Gianluca Cecchi Sent: Donnerstag, 27. September 2012 10:12 To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list Subject: Re: [rhelv6-list] Antwort: Re: Antwort: MD devices lost after boot On Thu, Sep 27, 2012 at 10:03 AM, Andreas Reschke wrote: >> > We have a setup which uses lvm over md over multipath devices. (at >> > least that's the plan) Just to understand, apart from /boot (referred in tech note), what would be the advantage of using this kind of config instead of configuring mpath0 as a PV1 mpath1 as PV2 and then create VG with PV1 and PV2 and all LVs as mirrored? Gianluca _______________________________________________ rhelv6-list mailing list rhelv6-list at redhat.com https://www.redhat.com/mailman/listinfo/rhelv6-list This email message and any attachments are confidential and may be privileged. If you are not the intended recipient, please notify us immediately and destroy the original transmittal. You are hereby notified that any review, copying or distribution of it is strictly prohibited. Thank you for your cooperation. Header information contained in E-mails to and from the company are monitored for operational reasons in accordance with swiss data protection act. From jean-yves at lenhof.eu.org Thu Sep 27 09:31:46 2012 From: jean-yves at lenhof.eu.org (jean-yves at lenhof.eu.org) Date: Thu, 27 Sep 2012 11:31:46 +0200 Subject: [rhelv6-list] Antwort: Re: Antwort: MD devices lost after boot In-Reply-To: References: Message-ID: <4c228e656f963bd4eb07dfb03a3ce0fd@lenhof.eu.org> Le 27/09/2012 11:11, Roth, Sandro a écrit : > IMHO mdadm is much more sophisticated than lvm mirroring. > > In my case here, we have two storage subsystems, one in each data > center. > An LVM mirror needs to put a log somewhere, usually on a third disk, > we just don't have that! > And placing the log in memory makes you rebuild it after every > reboot. Not so true anymore... If you create this mirror with "mirror" I agree with you. But it seems there's a new option "raid1" available since 6.3 Look here : https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/raid_volumes.html Has anybody used this in production? Regards, JYL From Sandro.Roth at zurich-airport.com Thu Sep 27 10:02:39 2012 From: Sandro.Roth at zurich-airport.com (Roth, Sandro) Date: Thu, 27 Sep 2012 10:02:39 +0000 Subject: [rhelv6-list] Antwort: Re: Antwort: MD devices lost after boot In-Reply-To: <4c228e656f963bd4eb07dfb03a3ce0fd@lenhof.eu.org> References: <4c228e656f963bd4eb07dfb03a3ce0fd@lenhof.eu.org> Message-ID: Well that is interesting! > The new implementation of mirroring leverages MD software RAID, just as for the RAID 4/5/6 implementations. What does that mean exactly? Is LVM actually using the md driver underneath now? Doing some testing now.. -----Original Message----- From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of jean-yves at lenhof.eu.org Sent: Donnerstag, 27.
September 2012 11:32 To: rhelv6-list at redhat.com Subject: Re: [rhelv6-list] Antwort: Re: Antwort: MD devices lost after boot Le 27/09/2012 11:11, Roth, Sandro a ?crit?: > IMHO mdadm is much more sophisticated than lvm mirroring. > > In my case here, we have two storage subsystems, one in each data > center. > An LVM mirror needs to put a log somewhere, usually on a third disk, > we just don't have that! > And placing the log in memory makes you rebuild it after every reboot. Not so true anymore... If you create this mirror with "mirror" I agree with you. But it seems there's a new option "raid1" available since 6.3 Look here : https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/raid_volumes.html Is somebody used this in production ? Regards, JYL _______________________________________________ rhelv6-list mailing list rhelv6-list at redhat.com https://www.redhat.com/mailman/listinfo/rhelv6-list This email message and any attachments are confidential and may be privileged. If you are not the intended recipient, please notify us immediately and destroy the original transmittal. You are hereby notified that any review, copying or distribution of it is strictly prohibited. Thank you for your cooperation. Header information contained in E-mails to and from the company are monitored for operational reasons in accordance with swiss data protection act. From bryan.hepworth at newcastle.ac.uk Thu Sep 27 11:31:26 2012 From: bryan.hepworth at newcastle.ac.uk (Bryan Hepworth) Date: Thu, 27 Sep 2012 11:31:26 +0000 Subject: [rhelv6-list] md raid arrays Message-ID: I've run into a problem with two raid arrays. I'm following various instructions, but would be grateful of another pair of eyes to see whether I'm flogging a dead horse. Any advice will be gratefully received. Bryan /dev/sdc1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 592a6d72:81ad8db8:54c15df3:af15dc7c Name : intermediate0 Creation Time : Thu Sep 13 13:27:56 2012 Raid Level : raid5 Raid Devices : 6 Avail Dev Size : 3907021954 (1863.01 GiB 2000.40 GB) Array Size : 19535104000 (9315.06 GiB 10001.97 GB) Used Dev Size : 3907020800 (1863.01 GiB 2000.39 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : ac837fe7:ee6c3ac0:876d9a4e:2eb13c78 Internal Bitmap : 8 sectors from superblock Update Time : Wed Sep 26 08:38:12 2012 Checksum : 53fac407 - correct Events : 5336 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 5 Array State : .A.AAA ('A' == active, '.' == missing) /dev/sdd1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 592a6d72:81ad8db8:54c15df3:af15dc7c Name : intermediate0 Creation Time : Thu Sep 13 13:27:56 2012 Raid Level : raid5 Raid Devices : 6 Avail Dev Size : 3907021954 (1863.01 GiB 2000.40 GB) Array Size : 19535104000 (9315.06 GiB 10001.97 GB) Used Dev Size : 3907020800 (1863.01 GiB 2000.39 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : 0668f572:e5c8a024:f6fc9c13:f8719ae0 Internal Bitmap : 8 sectors from superblock Update Time : Wed Sep 26 08:38:12 2012 Checksum : 713753a6 - correct Events : 5336 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 4 Array State : .A.AAA ('A' == active, '.' 
== missing) /dev/sde1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 592a6d72:81ad8db8:54c15df3:af15dc7c Name : intermediate0 Creation Time : Thu Sep 13 13:27:56 2012 Raid Level : raid5 Raid Devices : 6 Avail Dev Size : 3907021954 (1863.01 GiB 2000.40 GB) Array Size : 19535104000 (9315.06 GiB 10001.97 GB) Used Dev Size : 3907020800 (1863.01 GiB 2000.39 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : 744cefa4:a9146029:534df276:f4c1be2b Internal Bitmap : 8 sectors from superblock Update Time : Wed Sep 26 08:38:12 2012 Checksum : 566a2430 - correct Events : 5336 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 3 Array State : .A.AAA ('A' == active, '.' == missing) /dev/sdf1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 592a6d72:81ad8db8:54c15df3:af15dc7c Name : intermediate0 Creation Time : Thu Sep 13 13:27:56 2012 Raid Level : raid5 Raid Devices : 6 Avail Dev Size : 3907021954 (1863.01 GiB 2000.40 GB) Array Size : 19535104000 (9315.06 GiB 10001.97 GB) Used Dev Size : 3907020800 (1863.01 GiB 2000.39 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : active Device UUID : 066c08f4:8c2f8a43:1f53a5ca:c0ab8562 Internal Bitmap : 8 sectors from superblock Update Time : Tue Sep 25 10:49:55 2012 Checksum : 4a241b9a - correct Events : 5328 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 2 Array State : AAAAAA ('A' == active, '.' == missing) /dev/sdg1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 592a6d72:81ad8db8:54c15df3:af15dc7c Name : intermediate0 Creation Time : Thu Sep 13 13:27:56 2012 Raid Level : raid5 Raid Devices : 6 Avail Dev Size : 3907021954 (1863.01 GiB 2000.40 GB) Array Size : 19535104000 (9315.06 GiB 10001.97 GB) Used Dev Size : 3907020800 (1863.01 GiB 2000.39 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : 1919567e:50f3676b:fdcd7a07:87d452ee Internal Bitmap : 8 sectors from superblock Update Time : Wed Sep 26 08:38:12 2012 Checksum : c4f562b7 - correct Events : 5336 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 1 Array State : .A.AAA ('A' == active, '.' == missing) /dev/sdh1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 592a6d72:81ad8db8:54c15df3:af15dc7c Name : intermediate0 Creation Time : Thu Sep 13 13:27:56 2012 Raid Level : raid5 Raid Devices : 6 Avail Dev Size : 3907021954 (1863.01 GiB 2000.40 GB) Array Size : 19535104000 (9315.06 GiB 10001.97 GB) Used Dev Size : 3907020800 (1863.01 GiB 2000.39 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : active Device UUID : 7e07bd11:132b47d4:d6a490df:623ae1a0 Internal Bitmap : 8 sectors from superblock Update Time : Tue Sep 25 11:59:16 2012 Checksum : 4bdda32e - correct Events : 5329 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 0 Array State : AA.AAA ('A' == active, '.' 
== missing) /dev/sdi1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 30583f17:0b808626:2ae8fa11:38d56935 Name : intermediate1 Creation Time : Thu Sep 13 13:30:09 2012 Raid Level : raid5 Raid Devices : 6 Avail Dev Size : 3907021954 (1863.01 GiB 2000.40 GB) Array Size : 19535104000 (9315.06 GiB 10001.97 GB) Used Dev Size : 3907020800 (1863.01 GiB 2000.39 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : 006858c0:9412e0d0:186d7ebf:6676fd24 Internal Bitmap : 8 sectors from superblock Update Time : Wed Sep 26 10:05:24 2012 Checksum : 45120d41 - correct Events : 5245 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 5 Array State : AAAAAA ('A' == active, '.' == missing) /dev/sdj1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 30583f17:0b808626:2ae8fa11:38d56935 Name : intermediate1 Creation Time : Thu Sep 13 13:30:09 2012 Raid Level : raid5 Raid Devices : 6 Avail Dev Size : 3907021954 (1863.01 GiB 2000.40 GB) Array Size : 19535104000 (9315.06 GiB 10001.97 GB) Used Dev Size : 3907020800 (1863.01 GiB 2000.39 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : e0971fd8:34165875:917bf2ec:1fafd0ef Internal Bitmap : 8 sectors from superblock Update Time : Wed Sep 26 10:05:24 2012 Checksum : f99887f1 - correct Events : 5245 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 4 Array State : AAAAAA ('A' == active, '.' == missing) /dev/sdk1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 30583f17:0b808626:2ae8fa11:38d56935 Name : intermediate1 Creation Time : Thu Sep 13 13:30:09 2012 Raid Level : raid5 Raid Devices : 6 Avail Dev Size : 3907021954 (1863.01 GiB 2000.40 GB) Array Size : 19535104000 (9315.06 GiB 10001.97 GB) Used Dev Size : 3907020800 (1863.01 GiB 2000.39 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : cc984414:3bb3a705:9451a5d3:f7f7563b Internal Bitmap : 8 sectors from superblock Update Time : Wed Sep 26 10:05:24 2012 Checksum : f84644bc - correct Events : 5245 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 3 Array State : AAAAAA ('A' == active, '.' == missing) /dev/sdl1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 30583f17:0b808626:2ae8fa11:38d56935 Name : intermediate1 Creation Time : Thu Sep 13 13:30:09 2012 Raid Level : raid5 Raid Devices : 6 Avail Dev Size : 3907021954 (1863.01 GiB 2000.40 GB) Array Size : 19535104000 (9315.06 GiB 10001.97 GB) Used Dev Size : 3907020800 (1863.01 GiB 2000.39 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : ca30da9a:d0b035e8:a1650362:4aad3d98 Internal Bitmap : 8 sectors from superblock Update Time : Wed Sep 26 10:05:24 2012 Checksum : 4caea3b0 - correct Events : 5245 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 2 Array State : AAAAAA ('A' == active, '.' 
== missing) /dev/sdm1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 30583f17:0b808626:2ae8fa11:38d56935 Name : intermediate1 Creation Time : Thu Sep 13 13:30:09 2012 Raid Level : raid5 Raid Devices : 6 Avail Dev Size : 3907021954 (1863.01 GiB 2000.40 GB) Array Size : 19535104000 (9315.06 GiB 10001.97 GB) Used Dev Size : 3907020800 (1863.01 GiB 2000.39 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : f3ba5ddb:663347d9:bd542dee:eb9d913e Internal Bitmap : 8 sectors from superblock Update Time : Wed Sep 26 19:29:32 2012 Checksum : b0c01463 - correct Events : 5261 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 1 Array State : AA.... ('A' == active, '.' == missing) /dev/sdn1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 30583f17:0b808626:2ae8fa11:38d56935 Name : intermediate1 Creation Time : Thu Sep 13 13:30:09 2012 Raid Level : raid5 Raid Devices : 6 Avail Dev Size : 3907021954 (1863.01 GiB 2000.40 GB) Array Size : 19535104000 (9315.06 GiB 10001.97 GB) Used Dev Size : 3907020800 (1863.01 GiB 2000.39 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : 1290d414:b3658e16:ce014b80:c2e84083 Internal Bitmap : 8 sectors from superblock Update Time : Wed Sep 26 19:29:32 2012 Checksum : fe4b13b4 - correct Events : 5261 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 0 Array State : AA.... ('A' == active, '.' == missing) Bryan Hepworth Computing Officer Institute of Genetic Medicine Newcastle University International Centre for Life Newcastle NE1 3BZ
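[Editor's note on Bryan's arrays above: the --examine output shows event counts of 5336/5329/5328 across the members of intermediate0 and 5245/5261 across the members of intermediate1, i.e. in each array some drives dropped out at different times, and the freshest superblocks of intermediate1 only consider two of six devices active. Assuming the disks themselves are healthy, the usual next step is a forced assembly; this is a generic mdadm procedure, not advice verified against Bryan's hardware, the md device name below is a placeholder, and forcing stale members back in can still lose recent writes:

# stop any partially assembled remnants first
mdadm --stop /dev/md0

# force assembly from the existing superblocks; mdadm reconciles the
# differing event counts and resyncs the stale members
mdadm --assemble --force --verbose /dev/md0 /dev/sd[c-h]1

# repeat analogously for intermediate1 with /dev/sd[i-n]1, then watch the
# rebuild and fsck the filesystem before trusting the data
cat /proc/mdstat
]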
From brilong at cisco.com Thu Sep 27 11:37:29 2012 From: brilong at cisco.com (Brian Long) Date: Thu, 27 Sep 2012 07:37:29 -0400 Subject: [rhelv6-list] MD devices lost after boot In-Reply-To: References: Message-ID: <03E3E9B6-C7BF-4179-B325-C2C07EE50820@cisco.com> On Sep 27, 2012, at 3:05 AM, Roth, Sandro wrote: > Hi experts > > I wasn't sure where to post this so I'm sending it to this list. > > We have a setup which uses lvm over md over multipath devices. (at > least that's the plan) > According to this article it is supported in RHEL6 (it wasn't in RHEL5) > https://access.redhat.com/knowledge/solutions/48634 > > I created my md device as follows > # mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/ > mapper/mpatha /dev/mapper/mpathb > > Do /dev/mapper/mpatha and mpathb have partition type of "fd - Linux > RAID auto"? If not, they won't be set up properly when booting the server. > > /Brian/_______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list Hi Brian, sorry you're not right. There is no need for partitions with lvm. You can use the whole disk without partitions. Mit freundlichen Grüßen Andreas Reschke ________________________________________________________________ Unix/Linux-Administration Andreas.Reschke at behrgroup.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From brilong at cisco.com Thu Sep 27 12:05:16 2012 From: brilong at cisco.com (Brian Long) Date: Thu, 27 Sep 2012 08:05:16 -0400 Subject: [rhelv6-list] Antwort: Re: MD devices lost after boot In-Reply-To: References: <03E3E9B6-C7BF-4179-B325-C2C07EE50820@cisco.com> Message-ID: <81756250-6464-4275-909C-D63D1B6B67FE@cisco.com> On Sep 27, 2012, at 7:49 AM, Andreas Reschke wrote: > rhelv6-list-bounces at redhat.com schrieb am 27.09.2012 13:37:29: > > > Brian Long > > Gesendet von: rhelv6-list-bounces at redhat.com > > > > 27.09.2012 13:39 > > > > Bitte antworten an > > "Red Hat Enterprise Linux 6 \(Santiago\) discussion mailing-list" > > > > > > An > > > > "Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list" > > > > > > Kopie > > > > Thema > > > > Re: [rhelv6-list] MD devices lost after boot > > > > On Sep 27, 2012, at 3:05 AM, Roth, Sandro wrote: > > > > Hi experts > > > > I wasn't sure where to post this so I'm sending it to this list. > > > > We have a setup which uses lvm over md over multipath devices. (at > > least that's the plan) > > According to this article it is supported in RHEL6 (it wasn't in RHEL5) > > https://access.redhat.com/knowledge/solutions/48634 > > > > I created my md device as follows > > # mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/ > > mapper/mpatha /dev/mapper/mpathb > > > > Do /dev/mapper/mpatha and mpathb have partition type of "fd - Linux > > RAID auto"? If not, they won't be set up properly when booting the server. > > > > /Brian/_______________________________________________ > > rhelv6-list mailing list > > rhelv6-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rhelv6-list > > Hi Brian, > > sorry you're not right. There is no need for partitions with lvm. You can use the whole disk without partitions. Something is broken with your email client since it's not maintaining threads properly (and it's prepending "Antwort:"). Last I knew, if you didn't use partitions with MD and/or LVM, certain things wouldn't work properly. I always create a single whole-disk partition and give it a "fd" type. I think this is still the best practice although it's not a requirement. /Brian/ -- Brian Long | | Corporate Security Programs Org . | | | . | | | . ' ' C I S C O -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Sandro.Roth at zurich-airport.com Thu Sep 27 13:58:43 2012 From: Sandro.Roth at zurich-airport.com (Roth, Sandro) Date: Thu, 27 Sep 2012 13:58:43 +0000 Subject: [rhelv6-list] Antwort: Re: MD devices lost after boot In-Reply-To: <81756250-6464-4275-909C-D63D1B6B67FE@cisco.com> References: <03E3E9B6-C7BF-4179-B325-C2C07EE50820@cisco.com> <81756250-6464-4275-909C-D63D1B6B67FE@cisco.com> Message-ID: I tried that as well, but still not assembling after a reboot. And I think you'll run into problems when you want to resize the LUN or partition in your case. The kernel won't update it's partition table, you'd have to reboot. At least that's my experience. I might be wrong.. From: rhelv6-list-bounces at redhat.com [mailto:rhelv6-list-bounces at redhat.com] On Behalf Of Brian Long Sent: Donnerstag, 27. September 2012 02:05 To: Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list Subject: Re: [rhelv6-list] Antwort: Re: MD devices lost after boot On Sep 27, 2012, at 7:49 AM, Andreas Reschke wrote: rhelv6-list-bounces at redhat.com schrieb am 27.09.2012 13:37:29: > Brian Long > > Gesendet von: rhelv6-list-bounces at redhat.com > > 27.09.2012 13:39 > > Bitte antworten an > "Red Hat Enterprise Linux 6 \(Santiago\) discussion mailing-list" > > > > An > > "Red Hat Enterprise Linux 6 (Santiago) discussion mailing-list" > > > > Kopie > > Thema > > Re: [rhelv6-list] MD devices lost after boot > > On Sep 27, 2012, at 3:05 AM, Roth, Sandro wrote: > > Hi experts > > I wasn't sure where to post this so I'm sending it to this list. > > We have a setup which uses lvm over md over multipath devices. (at > least that's the plan) > According to this article it is supported in RHEL6 (it wasn't in RHEL5) > https://access.redhat.com/knowledge/solutions/48634 > > I created my md device as follows > # mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/ > mapper/mpatha /dev/mapper/mpathb > > Do /dev/mapper/mpatha and mpathb have partition type of "fd - Linux > RAID auto"? If not, they won't be set up properly when booting the server. > > /Brian/_______________________________________________ > rhelv6-list mailing list > rhelv6-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhelv6-list Hi Brian, sorry you're not right. There is no need for partitions with lvm. You can use the whole disk without partitions. Something is broken with your email client since it's not maintaining threads properly (and it's prepending "Antwort:"). Last I knew, if you didn't use partitions with MD and/or LVM, certain things wouldn't work properly. I always create a single whole-disk partition and give it a "fd" type. I think this is still the best practice although it's not a requirement. /Brian/ -- Brian Long | | Corporate Security Programs Org . | | | . | | | . ' ' C I S C O This email message and any attachments are confidential and may be privileged. If you are not the intended recipient, please notify us immediately and destroy the original transmittal. You are hereby notified that any review, copying or distribution of it is strictly prohibited. Thank you for your cooperation. Header information contained in E-mails to and from the company are monitored for operational reasons in accordance with swiss data protection act. -------------- next part -------------- An HTML attachment was scrubbed... URL:
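[Editor's note on the last two points in this exchange. First, whether resizing a LUN or partition really forces a reboot depends on the layer: for an ordinary SCSI disk the kernel can usually be told to re-read a changed partition table, although this fails while the old partitions are still in use, and none of it was tested on Sandro's multipath setup (/dev/sdX and mpatha below are placeholders):

# ask the kernel to re-read the partition table of a grown disk
partprobe /dev/sdX
# or
blockdev --rereadpt /dev/sdX

# partition mappings on top of a multipath map are handled by kpartx
kpartx -a /dev/mapper/mpatha

Second, on the "fd - Linux RAID auto" suggestion: in-kernel autodetection of type-fd partitions only works for arrays with the old 0.90 superblock format. The arrays in this thread use metadata version 1.2, so assembly at boot depends on mdadm.conf, the udev rules and the initramfs rather than on the partition type, which is consistent with Sandro seeing no change after switching to partitions.]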