[Linux-cluster] Using GFS without a network?

Andreas Brosche karon at gmx.net
Tue Sep 6 22:57:27 UTC 2005


Hi again,

thank you for your replies.

Lon Hohberger wrote:
> On Mon, 2005-09-05 at 22:52 +0200, Andreas Brosche wrote:
> 
>>Long story cut short, we want
>>- GFS on a shared SCSI disk (Performance is not important)
> 
> Using GFS on shared SCSI will work in *some* cases:
> 
> + Shared SCSI RAID arrays with multiple buses work well.  Mid to high
> end here, 

Mid to high end indeed; what we found was in the range of about $5000.

> - Multi-initiator SCSI buses do not work with GFS in any meaningful way,
> regardless of what the host controller is.
> Ex: Two machines with different SCSI IDs on their initiator connected to
> the same physical SCSI bus.

Hmm... don't laugh at me, but in fact that's what we're about to set up.

I've read in Red Hat's docs that it is "not supported" because of 
performance issues. Multi-initiator buses should comply with the SCSI 
standards, and any SCSI-compliant disk should be able to communicate 
with the correct controller, if I've interpreted the specs correctly. Of 
course, you get arbitrary results when using non-compliant hardware... 
What are the other issues with multi-initiator buses, apart from the 
performance loss?

> The DLM runs over IP, as does the cluster manager.  Additionally, please
> remember that GFS requires fencing, and that most fence-devices are
> IP-enabled.

Hmm. The whole setup is supposed to physically divide two networks and 
nevertheless provide some kind of shared storage for moving data from 
one network to the other. Establishing an Ethernet link between the two 
servers would defeat the whole concept, which is to prevent *any* 
network access from the outside into the secure part of the network. 
This is the (strongly simplified) topology:

mid-secure network -- Server1 -- Storage -- Server2 -- secure Network

A potential attacker could use a possible security flaw in the dlm 
service (which is bound to the network interface) to gain access to the 
server on the "secure" side *instantly* once he managed to compromise 
the server on the mid-secure side (hey, it CAN happen). If some sort of 
shared storage can be installed *without* any Ethernet link or, 
ideally, without any inter-server communication at all, there is a way 
to *prove* that an attacker cannot establish any kind of connection 
into the secure net (some risks remain, but they have nothing to do 
with the physical connection).

So far, I only see two ways: either sync the filesystems via Ethernet 
(maybe through a firewall, which is pointless if the service itself has 
a security hole; it *is* technically possible to set up a tunnel that 
way), or some solution with administrator interaction (the 
administrator would have to manually "flip a switch" to remount a 
*local* file system rw on one side and ro on the other), which is 
impractical (manpower, availability...), but would do the job.

> There is currently no way for GFS to use only a quorum disk for all the
> lock information, and even if it could, performance would be abysmal.

Like I said... performance is not an issue. As an invariant, the 
filesystems could be mounted "cross over", i.e. each server has a 
partition that only it writes to, and the other server only reads from 
that disk. This *can* be done with local filesystems; you *can* disable 
write caching. You cannot, however, disable *read* caching (which seems 
to be buried quite deep in the kernel), which means you actually have 
to umount and then re-mount (i.e. not "mount -o remount") the fs to see 
new data. This means that long transfers could block other users for a 
long time, and mounting and umounting the same fs over and over again 
doesn't exactly sound like a good idea... even if it's only mounted ro.
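
Just to illustrate the mount/umount churn I mean, something like the 
following would have to run on the reading host. The device, mount 
point and interval are invented for the example; I'm not claiming this 
is safe, only that it shows why the approach feels clumsy:

#!/usr/bin/env python3
"""Reader-side refresh loop for the 'cross over' scheme -- sketch only.

The other server mounts its partition rw; this script runs on the
*reading* host and periodically does a full umount/mount cycle (not
'mount -o remount') so the page cache is dropped and newly written
files become visible.  Device, mount point and interval are
placeholders.
"""
import subprocess
import time

DEVICE = "/dev/sdc1"       # placeholder: partition the other host writes to
MOUNTPOINT = "/mnt/inbox"  # placeholder mount point on the reading host
INTERVAL = 300             # seconds between refresh cycles

while True:
    try:
        # A full umount is the only way I see to reliably invalidate
        # cached file data for a local filesystem that another
        # initiator is writing to behind our back.
        subprocess.run(["umount", MOUNTPOINT], check=True)
        subprocess.run(["mount", "-o", "ro", DEVICE, MOUNTPOINT],
                       check=True)
    except subprocess.CalledProcessError as err:
        # umount fails (EBUSY) while a local user still has a file
        # open -- exactly the "long transfers block other users" issue.
        print("refresh failed, will retry:", err)
    time.sleep(INTERVAL)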

Maykel Moya wrote:
 > On Mon, 05-09-2005 at 22:52 +0200, Andreas Brosche wrote:
 > I recently set up something like that. We use an external HP Smart
 > Array Cluster Storage. It has a separate connection (SCSI cable) to
 > both hosts.

So it is not really a shared bus, but a dual-bus configuration.

 > Regards,
 > maykel

Regards, and thanks for the replies,

Andreas



