[Linux-cluster] gfs over raid/lvm or any other option?

michael.osullivan at auckland.ac.nz michael.osullivan at auckland.ac.nz
Mon Aug 25 17:30:03 UTC 2008

There are two approaches I have seen that may be suitable:

1) Lustre - I didn't like this as it needed two "special" metadata servers
and I was building a smaller storage system;
2) PVFS

I did not use either of these approaches as they focus on keeping the
storage system running, rather than keeping the data highly available.

For my test storage I wanted to build a system that would still present
the stored data even if a single point in the network fails.

I have used iSCSI, mdadm and GFS as follows. I have two storage servers,
each with almost 2TB of disk space. Each server presents a single logical
volume to a 2-node cluster via iSCSI. Each storage server has 2 NICs, so
each volume is accessible via two ports; each cluster node also has 2
NICs. The storage system is connected to the cluster through ethernet
switches. Using mdadm I successfully multipathed each logical volume, then
used mdadm again to build a RAID-5 device from the two multipathed
volumes. The RAID device is successfully detected by each cluster node. On
this RAID device I created a logical volume using CLVM, and on that
logical volume I built a GFS filesystem to control cluster access to the
storage. The GFS has been successfully mounted on both cluster nodes.
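For anyone wanting to reproduce the layering, it can be sketched with
commands like the following. The device names (/dev/sdb etc.), volume
group / cluster names and journal count are my own placeholders, not
taken from the setup above, and these commands need root plus real
iSCSI-backed devices on every node:

```shell
# On a cluster node, each remote iSCSI volume shows up twice
# (once per storage-server NIC), e.g. /dev/sdb + /dev/sdc.
# Fold the two paths into one mdadm multipath device per server.
mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=multipath --raid-devices=2 /dev/sdd /dev/sde

# Build the RAID-5 device across the two multipathed volumes
# (with only two members this behaves like a mirror, but mdadm allows it).
mdadm --create /dev/md2 --level=5 --raid-devices=2 /dev/md0 /dev/md1

# Cluster-aware LVM on top of the RAID device.
pvcreate /dev/md2
vgcreate -c y storage_vg /dev/md2          # -c y marks the VG as clustered
lvcreate -l 100%FREE -n storage_lv storage_vg

# Make the GFS: DLM locking, -t is <clustername>:<fsname>,
# -j gives one journal per cluster node that will mount it.
gfs_mkfs -p lock_dlm -t mycluster:storage -j 2 /dev/storage_vg/storage_lv

# Mount on both nodes.
mount -t gfs /dev/storage_vg/storage_lv /mnt/storage
```

The multipath step matters: it is what lets the filesystem survive a
failed NIC, port or switch path, while the RAID-5 layer covers the loss
of a whole storage server.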

Despite some problems with the cluster (due to my own limited knowledge
about clusters and fencing) I have successfully created and accessed files
on the GFS from both cluster nodes. I am in the process of sorting out the
clustering problems and testing the configuration using IOMeter.

Hope this helps, Mike
