[Linux-cluster] is it necessary to build GFS on top of LVM?

Patton, Matthew F, CTR, OSD-PA&E Matthew.Patton.ctr at osd.mil
Thu Aug 31 20:54:49 UTC 2006


Classification: UNCLASSIFIED

> I understand that, but what I don't understand yet
> is the "Often we can't extend the end of the
> partition"

Most of the time we deal with actual disks, not virtual disks a la SAN LUNs.
SAN LUNs are no different from LVM's logical volumes. What you do with your
EMC software when creating or extending a LUN is not one bit different from
using fdisk to define new physical partitions on physical drives, creating
PVs and VGs, and then defining LVs. It just happens without you having to
issue all those commands or know what's really going on inside. With a SAN,
LVM is already happening, transparently to you, so layering on more LVM
isn't very productive. UNLESS you decide to treat the LUN like a physical
disk and carve it into various partitions, in which case we're back to
square one.
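
Roughly speaking, the manual sequence I'm describing is something like the
following (device and volume names are made up for the example):

    fdisk /dev/sdb                        # carve out /dev/sdb1, partition type 8e (Linux LVM)
    pvcreate /dev/sdb1                    # mark the partition as an LVM physical volume
    vgcreate vg_test /dev/sdb1            # group one or more PVs into a volume group
    lvcreate -L 100G -n lv_gfs vg_test    # define a logical volume to put the filesystem on

The SAN's management software is doing the moral equivalent of all of that
behind the scenes when you create or extend a LUN.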

Only when dealing with actual raw hard drives (or hardware RAID volumes),
or carved-up LUNs, do you run into the problem of two partitions butted up
against one another and the inability to grow the earlier one, because growing
it necessarily means overwriting the one behind it. Tools like Partition
Magic have been around for quite some time and are quite capable of moving
partitions around the physical disk, but that generally takes a lot of
downtime.

> If I can't grow a LUN because the neighbors are too
> close, then LVM is the way to go :-)

You can grow a LUN any which way you want, and it doesn't matter to machines
that want to use it as a disk. It's only when you start partitioning that
LUN into more than one slice that you run afoul of the partition boundaries
getting in the way. I would just create a LUN for each specific task (say
your common GFS area), and when you run out of space, grow the LUN, grow the
partition, and then grow the FS.
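
On a GFS volume that should boil down to something like the sketch below.
The device name and mount point are only examples, and the rescan/resize
details vary with your HBA and partitioning tool:

    # after the array has enlarged the LUN, get the kernel to notice the new size
    echo 1 > /sys/block/sdc/device/rescan
    # grow the (single) partition to the new end of the disk with fdisk/parted
    # (delete and recreate it with the same start sector), then re-read the table
    blockdev --rereadpt /dev/sdc
    # finally grow GFS into the new space while the filesystem is mounted
    gfs_grow /mnt/gfs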

In my lab I'm running a poor man's SAN - six 72GB drives in RAID10 and RAID5,
exported via iSCSI. Each machine gets its own scribble space, so I need
to partition the RAID logical drive accordingly. If I used hard partitions
it would be impossible to change the space allocation later, so I use LVM to
give each machine its own partition, and if I need to change it, I can do so
trivially. As far as the client machines are concerned, they are dealing with
their very own drive and have no idea I'm giving them space that could be
scattered all over the place. If I partition the iSCSI LUN on the client,
then I either treat it as a raw disk and run the risk of sizing stuff wrong,
create just a single partition, or virtualize it again using client-side LVM.
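
In practice that's just one LV per client, which I can resize whenever a
machine needs more room (the names below are made up):

    lvcreate -L 50G -n node1_scratch vg_san      # scribble space for one client, exported as its iSCSI LUN
    lvextend -L +20G /dev/vg_san/node1_scratch   # later: hand that client 20GB more without touching its neighbors

The client still has to grow its partition and filesystem afterwards, but
nothing on the server side has to move.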