[Linux-cluster] GFS, iSCSI, multipaths and RAID

JACOB_LIBERMAN at Dell.com
Mon May 19 22:05:26 UTC 2008


Hi Mike,

I took a peek at the diagram. Does the blue cylinder represent an
Ethernet switch?

You may want to add another switch if it's a fully redundant mesh
topology you're after.

Thanks, Jacob 

> -----Original Message-----
> From: linux-cluster-bounces at redhat.com 
> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of 
> Michael O'Sullivan
> Sent: Monday, May 19, 2008 4:15 PM
> To: linux-cluster at redhat.com
> Subject: Re: [Linux-cluster] GFS, iSCSI, multipaths and RAID
> 
> Thanks for your response, Wendy. Please see a diagram of the 
> system at http://www.ndsg.net.nz/ndsg_cluster.jpg/view (or 
> http://www.ndsg.net.nz/ndsg_cluster.jpg/image_view_fullscreen 
> for the fullscreen view) that (I hope) explains the setup. We 
> are not using FC as we are building the SAN with commodity 
> components (the total cost of the system was less than NZ 
> $9000). The SAN is designed to hold files for staff and 
> students in our department, I'm not sure exactly what 
> applications will use the GFS. We are using iscsi-target 
> software, although we may move to a firmware-based target in 
> the future. We have used CLVM on top of software RAID; I agree 
> there are many levels to this system, but I couldn't find the 
> necessary hardware/software to implement it in a simpler way. 
> I am hoping the list may be helpful here.
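> 
> In case the detail helps, each storage box exports its striped 
> md device with something like the following (this assumes the 
> iSCSI Enterprise Target's /etc/ietd.conf format; the IQN and 
> device path are placeholders rather than our real values):
> 
>     # /etc/ietd.conf on each storage device
>     Target iqn.2008-05.nz.net.ndsg:storage1.disk0
>         # export the local striped array as LUN 0
>         Lun 0 Path=/dev/md0,Type=fileio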
> 
> What I wanted to do was the following:
> 
> Build a SAN from commodity hardware that has no single point 
> of failure and acts like a single file system. The Ethernet 
> fabric provides two paths from each server to each storage 
> device (hence the two NICs on all the boxes). Each device 
> contains a single logical disk (striped here across two disks 
> for better performance; there is a long story behind why we 
> have two disks in each box). These devices (2+) are presented 
> using iSCSI to 2 (or more) servers, but are put together in a 
> RAID-5 configuration so a single failure of a device will not 
> interrupt access to the data.
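> 
> For concreteness, the stripe on each storage box and the iSCSI 
> logins from the servers look roughly like this (device names 
> and portal addresses are illustrative, not our exact values, 
> and I am assuming the open-iscsi initiator):
> 
>     # on each storage device: stripe the two local disks (RAID-0)
>     mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
> 
>     # on each server: discover both portals, then log in to all targets
>     iscsiadm -m discovery -t sendtargets -p 192.168.1.10
>     iscsiadm -m discovery -t sendtargets -p 192.168.2.10
>     iscsiadm -m node --login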
> 
> I used iSCSI as we are using Ethernet for cost reasons. I used 
> mdadm for multipath as I could not find another way to get the 
> servers to see two iSCSI portals as a single device. I then 
> used mdadm again to RAID the multipathed iSCSI disks together 
> into the RAID-5 configuration I wanted. Finally I had to 
> create a logical volume for the GFS file system so that the 
> servers could properly access the network RAID array (see the 
> sketch after the list below). I am more than happy to change 
> this to make it more effective as long as:
> 
> 1) It doesn't cost very much;
> 2) The no-single-point-of-failure property is maintained;
> 3) The servers see the SAN as a single entity (that way 
> devices can be added and removed with a minimum of fuss).
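> 
> Concretely, the layering on each server is roughly as follows 
> (device names, the volume group name and the cluster name are 
> placeholders; I show two storage devices as in the diagram):
> 
>     # bind the two iSCSI paths to each storage device into one md device
>     mdadm --create /dev/md1 --level=multipath --raid-devices=2 /dev/sdc /dev/sdd
>     mdadm --create /dev/md2 --level=multipath --raid-devices=2 /dev/sde /dev/sdf
> 
>     # RAID-5 across the multipathed devices; with only two members
>     # this is effectively a mirror, a third device gives true RAID-5
>     mdadm --create /dev/md3 --level=5 --raid-devices=2 /dev/md1 /dev/md2
> 
>     # clustered LVM volume on the array, then GFS with a journal per server
>     pvcreate /dev/md3
>     vgcreate -c y san_vg /dev/md3
>     lvcreate -l 100%FREE -n gfs_lv san_vg
>     gfs_mkfs -p lock_dlm -t ndsg_cluster:gfs1 -j 2 /dev/san_vg/gfs_lv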
> 
> Thanks again for any help/advice/suggestions. I am very new 
> to implementing storage networks, so any help is great.
> 
> Regards, Mike
> 