[rhelv6-beta-list] My first experiences with RHEL6 beta

Lamar Owen lowen at pari.edu
Thu Jun 17 09:35:46 UTC 2010


On Wednesday, June 16, 2010 08:55:15 pm R P Herrold wrote:
> On Wed, 16 Jun 2010, Lamar Owen wrote:
> I was thinking of you as I drove across I-68 last weekend

:-)

> > On Wednesday, June 16, 2010 10:10:44 am R P Herrold wrote:
> >> - The older methods are almost self documenting on a single
> >> man page; in managing adding a new PV recently, I was toggling
> >> between several
> >
> > If you like a GUI, use palimpsest.
> 
> naw -- The X window system exists to get me room for lots and 
> lots of terminal windows

Oh, I tend to agree with you there.  Even my PowerMac G4 MDD on my desktop mostly exists for terminal windows... and for professional audio production using the Ardour-based Mixbus product with iZotope Ozone for mastering.

But I mention palimpsest specifically because I'm sure Red Hat seeks feedback on the package gnome-disk-utility-2.29.0-0.git20100115.4.el6.i686.rpm (in the beta), which contains palimpsest.
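
If anyone else wants to poke at it, something along these lines should work on a beta install (assuming yum is already pointed at the beta repos; if memory serves, the binary is just 'palimpsest'):

  # pull in the disk utility from the beta
  yum install gnome-disk-utility

  # launch the GUI from a terminal (works over ssh -X as well)
  palimpsest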

If the 32-bit RHEL6 kernel included XFS support I'd be trying that out; I'm pretty sure I know why it doesn't (the 32-bit kernel fails silently once a filesystem passes 16TB of occupied space).  But the box I had available to test, a Supermicro dual-Xeon with 4GB of RAM, isn't 64-bit capable, and it needed XFS for filesystems smaller than 16TB that already have data on them.  So it's running Fedora 13 instead, which does provide XFS on the 32-bit kernel.  The multipathing has issues with my SAN setup, though; I have to boot the box with the FC cable unplugged, then hotplug the FC after booting.  I'm still troubleshooting whether my SAN fabric's configuration or the dm-multipath stack is at fault.  I'd just about put money on it being my fabric's setup, which is why I haven't filed any bz on it yet.
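
For anyone hitting something similar, these are the stock commands I've been leaning on to see what dm-multipath thinks of the paths after the hotplug; nothing exotic, and the host number below is made up for illustration:

  # show the multipath topology and per-path state
  multipath -ll

  # force a SCSI rescan after hotplugging the FC cable (host1 is hypothetical)
  echo "- - -" > /sys/class/scsi_host/host1/scan

  # dump the device-mapper tables to see how the multipath maps are built
  dmsetup table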

> > But what happens when disk detect order is nondeterministic?
> 
> I've grown to like LABELled partitions but UUID's s*ck as they 
> are non-mnemonic; I usually take care to post-relabel slash 
> and boot

There are always corner cases, and label collisions are much more likely than UUID collisions.  The one corner case that generates most of the label collisions I've experienced, accidentally leaving a cloned backup drive in the box, also generates a UUID collision.  But UUIDs are almost as ugly as Fibre Channel WWNs, and just as difficult to remember.  Labels even work for non-ext[234] filesystems; I'm using labels with XFS filesystems here.  Until recently the default LVM setup was prone to collisions with the default volume group and logical volume names; that has changed to include the hostname in the volume group name.  That is, IMO, a step forward, but it still doesn't protect you from the cloned-drive-left-in-the-box corner case... (nothing does, except hardcoding devices, and that breaks rather badly on some boxes, like my laptop, where the clone device reliably gets /dev/sda ahead of the main drive).
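
For reference, this is what I mean by labels working beyond ext[234]; a minimal sketch with made-up device names and labels:

  # ext3/ext4: set a label on an existing filesystem
  e2label /dev/sdb1 backup01

  # XFS: same idea (the filesystem has to be unmounted to relabel)
  xfs_admin -L backup01x /dev/sdc1

  # see the labels and UUIDs the kernel currently knows about
  blkid

  # then mount by label in /etc/fstab instead of by device node
  LABEL=backup01   /srv/backup   ext4   defaults   1 2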

> > without hardware RAID you can do hot data migrations with 
> > LVM on mounted filesystems that you simply cannot do with 
> > the older tools.

> volatile media -- sure, or at least perhaps -- but as many 
> before me have pointed out, the edge cases are not the general 
> case, 

I'm actually looking more towards the 'replace VMware {ESX|VI3|vSphere} with RHEL6 plus KVM' crowd, where you're more likely to have a setup like mine: a pair or more of Dell PE6950s, each with 8 or 16 cores and 32GB or 64GB of RAM, plus external disk boxes.  (I have EMC Clariion, which does its own LUN management, but raw FC boxes aren't uncommon, especially at sites with large storage requirements that don't want to pay EMC or NetApp prices.  Even then, I might add a LUN to the storage group belonging to a host and get a different drive order on the next boot; labels certainly help in that case.)  These are the folks who use those monster 8U and 12U chassis that look more like an EMC DAE on steroids than anything else, and who run distributions like Openfiler (or non-Linux stuff like NexentaStor and FreeNAS).  RHEL can certainly fit that use case quite handily.

I'm backing up data from one such facility onto our CX700.  Their array started dropping drives (inexpensive commodity SATA really isn't designed or tested for the reliability and availability requirements of enterprise storage, but that's beside the point), so we started backing them up: 15TB and counting of essentially priceless data representing multiple man-years of labor.

The RHEL6+KVM crowd is a lot less likely to be able to tolerate downtime of the host OS, and a lot more likely to need some volume management for the storage containing guests' filesystems.  We're possibly looking at an ESX-to-KVM migration ourselves, and I typically have 15-20 guests running on our Dell 6950 hosts at any given time.  So I'm following this list closely, looking in particular for things that would impact my potential use.

Given how frequently I do LUN migration and resizing operations on the EMC arrays now, I can see the definite utility of LVM for guest storage on a KVM host with lots of drives in multiple RAID groups (there goes the EMC terminology...), for those who prefer to roll their own large-scale storage rather than using EMC, NetApp, or similar.
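
To make that concrete, the LVM equivalent of a Navisphere-style LUN migration on a live host looks roughly like this; volume group, LV, and device names are hypothetical:

  # bring the new storage into the volume group
  pvcreate /dev/mapper/mpathb
  vgextend vg_guests /dev/mapper/mpathb

  # move the extents off the old PV while the LVs stay mounted and in use
  pvmove /dev/mapper/mpatha /dev/mapper/mpathb

  # retire the old PV once it's empty
  vgreduce vg_guests /dev/mapper/mpatha

  # grow a guest's volume (the guest still has to grow its own filesystem)
  lvextend -L +20G /dev/vg_guests/guest01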

What we're sorely missing are GUI and CLI tools for LVM or btrfs that do what Navisphere does for the EMC arrays.  Navisphere does, after all, have a killer CLI... as well as a good web-based GUI.
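
The closest LVM gets today is its reporting commands, which at least cover the inventory side of what Navisphere shows you; a rough sketch with arbitrarily chosen output columns:

  # per-PV, per-VG, and per-LV summaries from the standard lvm2 tools
  pvs -o pv_name,vg_name,pv_size,pv_free --units g
  vgs -o vg_name,vg_size,vg_free,lv_count
  lvs -o lv_name,vg_name,lv_size,devices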



