[Linux-cluster] iSCSI GFS

isplist at logicore.net isplist at logicore.net
Mon Jan 28 17:46:02 UTC 2008


> It's pretty simple to set up. You just need to be familiar with iSCSI
> tools and software RAID tools, all of which are almost certainly in your
> distro's apt/yum repositories.

Figured I would ask. You never know, there might be some cool management tools 
that help keep an eye on things. The setup sounds simple enough, as you say. 
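Just so I'm clear on the iSCSI side of it, this is roughly what I picture on 
the aggregator with the open-iscsi tools (the IP and target name below are 
made-up placeholders for my chassis, so it's only a rough sketch):

  # discover and log into one of the storage chassis
  iscsiadm -m discovery -t sendtargets -p 192.168.1.101
  iscsiadm -m node -T iqn.2008-01.net.example:chassis1 -p 192.168.1.101 --login
  # the imported LUNs then show up as ordinary /dev/sdX block devices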

>> I need a machine which will become the aggregator, plenty of memory,
>> multi-port Ethernet card and of course an FC HBA.
>> FC storage will be attached to this machine. Then, iSCSI storage targets
>> will also export to this machine.
 
> Not quite sure I follow this - you want to use FC storage and combine it
> with iSCSI storage into a bigger iSCSI storage pool? No reason why not, I
> suppose.

I need a relatively small central GFS for the shared data between the servers, 
but the rest is for media and such. I'll need an FC HBA in every LAMP server 
since it needs access to GFS, but the media servers, the ones that only 
offload, don't need access to GFS, so why install an FC HBA in those? Rather, 
I could export the FC storage as part of the aggregate volume so that any 
server can gain access over iSCSI (something like the sketch below). Seems 
that would give me more options. 
Am I thinking incorrectly on this?
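For the export side, if I go with the iSCSI Enterprise Target mentioned below, 
I imagine the FC-backed volume would be published to the media servers with 
something along these lines in /etc/ietd.conf (the target name and device path 
are placeholders, just a sketch of the idea, not a tested config):

  Target iqn.2008-01.net.example:fc-pool
      # /dev/sdb standing in for the FC LUN as seen on the aggregator
      Lun 0 Path=/dev/sdb,Type=blockio
      MaxConnections 1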

> Note that software RAID only goes up to RAID 6 (i.e. n+2). So you cannot
> lose more than 2 nodes (FC or iSCSI), otherwise you lose your data.

So basically, I can't lose more than two storage chassis at once. Since they 
are all RAID with hot swap, I should be OK so long as I keep a close eye on it 
all, which one needs to do anyhow. That's why I wondered about any software 
tools that might help, but I'm sure there's plenty out there that will work.
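On the RAID 6 and keeping-an-eye-on-it front, I assume it's the usual mdadm 
routine across the imported chassis devices, something like this (device names 
made up; just how I picture it, not tested here):

  # one RAID 6 array across four chassis-backed devices (n+2, so two can fail)
  mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  # quick health check, plus mdadm's own monitor mode for failure alerts
  cat /proc/mdstat
  mdadm --monitor --scan --daemonise --mail=root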
 
> yum install iscsi-target

No kidding?! I saw this a long time ago as a new concept and never looked at 
it since. Wonderful :).

> When a node needs to take over, it fences the other node, connects
> the iSCSI shares, starts up the RAID on them, assumes the floating IP and
> exports the iSCSI/NFS shares.

So this sounds like the complex part then, because being able to fail over or 
switch over seems terribly important to me. If one machine is handling all 
this I/O and something happens to it, everything is down until that one 
machine is fixed. 

This is the part I would need to find a solution for first. I need to better 
understand how I would do this fencing.
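From what I've read so far, if the aggregator pair ends up under the Red Hat 
cluster pieces, the fencing side gets declared in /etc/cluster/cluster.conf, 
roughly like this (fence_apc with a network power switch is just an assumed 
example, and the names/addresses are invented; other fence agents would look 
similar):

  <clusternode name="agg1" nodeid="1">
      <fence>
          <method name="1">
              <device name="apc-switch" port="1"/>
          </method>
      </fence>
  </clusternode>
  ...
  <fencedevices>
      <fencedevice agent="fence_apc" name="apc-switch" ipaddr="192.168.1.50"
                   login="apc" passwd="apc"/>
  </fencedevices>

I'd still need to understand when and how the surviving node actually triggers 
the fence before it takes over the floating IP and the exports.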

Mike





