[Linux-cluster] Need some advice, setting up first clustered FS

Robert Gil Robert.Gil at americanhm.com
Fri Jun 15 12:31:59 UTC 2007


James,
 
I have been looking into similar implementations for our testing
environment. We use SAN everywhere, and since SAN is so expensive I was
considering using AoE to imitate it at a lower cost.

As you said, you have 3 webservers and 1 fileserver. If you use AoE each
of the 3 servers can mount that device and you can use GFS for the file
locking. Each server will see the SAME disk. If you use LVM on the file
server, you can expand the fileserver as much as you want. Since AoE is
block level storage, you can add additional fileservers and use LVM on
the webservers to expand the AoE disks. If this is to be production I
would add some fault tolerance, with at least channel bonding, if not
two switches for redundancy on the AoE side. When you do this however,
your system will see two sets of disks, and you will need to use
multipathing to handle the multiple paths and create a pseudo device so
in the event of a failure, it is relatively transparent to the OS. 
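As a minimal sketch of the initiator side (assuming the aoe kernel module
and aoetools are installed; device, VG, and cluster names here are
illustrative):

```shell
# Load the AoE initiator and discover targets exported on the LAN
modprobe aoe
aoe-discover
aoe-stat                        # lists discovered devices, e.g. e0.0

# Put LVM on the shared AoE device so it can be grown later
pvcreate /dev/etherd/e0.0
vgcreate vg_shared /dev/etherd/e0.0
lvcreate -l 100%FREE -n lv_web vg_shared

# Create GFS with one journal per node (three webservers here);
# "mycluster" must match the cluster name in cluster.conf
gfs_mkfs -p lock_dlm -t mycluster:webfs -j 3 /dev/vg_shared/lv_web
```

Note that with several nodes sharing one volume group you would also want
clustered LVM (clvmd) running so LVM metadata changes are coordinated.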

If you do bonded GigE, you're doing pretty well on throughput in
comparison to FC. I assume the latencies differ significantly between
GigE and FC, but I don't know by what percentage.
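For the bonding side, a typical RHEL-style configuration looks roughly
like this (interface names, addresses, and mode are illustrative;
balance-rr trades packet ordering for throughput, while active-backup
pairs naturally with the two-switch setup above):

```shell
# /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=balance-rr miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
#   DEVICE=bond0
#   IPADDR=10.0.0.10
#   NETMASK=255.255.255.0
#   ONBOOT=yes
#   BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for eth1)
#   DEVICE=eth0
#   MASTER=bond0
#   SLAVE=yes
#   ONBOOT=yes
#   BOOTPROTO=none
```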

Hope that helps.

Robert Gil
Linux Systems Administrator
American Home Mortgage


-----Original Message-----
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of James Dyer
Sent: Friday, June 15, 2007 8:05 AM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] Need some advice, setting up first clustered FS

I'm trying to set up my first clustered FS, but before I waste time
trying things, only to find they don't work, I thought it would be a
good idea to ask the esteemed members of this list for some opinions.

At the moment, I have three webservers which share storage via an NFS
mount to a server with 1TB of space on it. The file server exports an
800GB partition to these servers. The 800GB partition is a stripe over
two 500GB SATA disks. This 800GB partition is synchronised to another
server using Unison every 30 minutes.

NFS is really not working for us; we're hitting all sorts of problems
with it.

Additionally, the above solution is obviously not at all fault tolerant,
nor expandable, so it's time to look at other options.

Budget is limited at the moment, so I really need to stick with the
hardware I've currently got.

The solution I'm thinking of is as follows; I'd like some opinions on
whether or not this is a good idea, or if it's stupid, or impossible
etc.

1- On each of the file servers, keep the existing 800GB raid0 stripe. 
2- Using vblade, present these stripes to both file servers over AoE
3- On each file server, create a raid1 volume of both raid0 stripes
4- Put a gfs filesystem on the raid1 volume, mount on webservers using 
   gfs etc.
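Steps 2-4 above might look roughly like this (vblade shelf/slot numbers,
device names, and the cluster name are illustrative, and whether step 3
actually works is exactly the open question below):

```shell
# On file server A: export its local raid0 stripe as AoE shelf 0, slot 0
vbladed 0 0 eth0 /dev/md0

# On file server B: export its stripe as shelf 0, slot 1
vbladed 0 1 eth0 /dev/md0

# On the node assembling the mirror: discover both AoE exports
modprobe aoe
aoe-discover

# Mirror the two AoE devices with md raid1
mdadm --create /dev/md1 --level=1 --raid-devices=2 \
      /dev/etherd/e0.0 /dev/etherd/e0.1

# Put a GFS filesystem on the mirror, one journal per webserver
gfs_mkfs -p lock_dlm -t mycluster:webfs -j 3 /dev/md1
```

One caveat: md raid1 itself is not cluster-aware, so only one node can
safely assemble /dev/md1 at a time; if all three webservers are to mount
the GFS directly, the mirror would have to be assembled on a single head
node that re-exports it.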

Some questions:
1- I'm not sure if step 3 is doable or not. I'm not sure if I can
create a raid1 volume from two AoE volumes. Some things I've read say
no, some say perhaps.
2- Can I actually present a device over AoE to the same physical server
it's installed in, or would the volume need to be made from the AoE
device from the other server, and the physical device on this server? 
(think that question kinda makes sense...)

I'm really keen to make this very expandable in the future, and fault
tolerant, so I would expect to move to a raid5/10 system at some point.
This would be accomplished by having more file servers exporting a
stripe over AoE. At that point, I would imagine I'd have a couple of
servers in front of the disk farm servers to actually create the gfs
partition, and it is these servers that the webservers would communicate
directly with.

Hope what I've written makes some semblance of sense...

Thanks in advance for any advice/pointers

James


--
July 27th, 2007 - System Administrator Appreciation Day -
http://www.sysadminday.com/




--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster



