[Linux-cluster] Using GFS without a network?

Keith Hopkins hne at hopnet.net
Wed Sep 7 09:43:13 UTC 2005


Steve Wilcox wrote:
> On Tue, 2005-09-06 at 20:06 -0400, Steve Wilcox wrote:
> 
>>On Wed, 2005-09-07 at 00:57 +0200, Andreas Brosche wrote:
>>
>>
>>>>- Multi-initiator SCSI buses do not work with GFS in any meaningful way,
>>>>regardless of what the host controller is.
>>>>Ex: Two machines with different SCSI IDs on their initiator connected to
>>>>the same physical SCSI bus.
>>>
>>>Hmm... don't laugh at me, but in fact that's what we're about to set up.
>>>
>>>I've read in Red Hat's docs that it is "not supported" because of 
>>>performance issues. Multi-initiator buses should comply with SCSI 
>>>standards, and any SCSI-compliant disk should be able to communicate 
>>>with the correct controller, if I've interpreted the specs correctly. Of 
>>>course, you get arbitrary results when using non-compliant hardware... 
>>>What issues do multi-initiator buses have, other than the performance 
>>>loss?
>>
>>I set up a small 2-node cluster this way a while back, just as a testbed
>>for myself.  Much as I suspected, it was severely unstable because of
>>the storage configuration: SCSI bus resets occasionally caused both
>>nodes to crash when one was rebooted.  I tore it down and rebuilt it
>>several times, configuring it as a simple failover cluster with RHEL3
>>and RHEL4, as a GFS cluster under RHEL4 and Fedora 4, and as an OpenSSI
>>cluster using Fedora 3.  All tested configurations were equally
>>crash-happy due to the bus resets.  
>>
>>My configuration consisted of a couple of old Compaq Deskpro PCs, each
>>with a single-ended Symbios card (set to different SCSI IDs, obviously)
>>and an external DEC BA360 JBOD shelf with 6 drives.  The bus resets
>>might be mitigated somewhat by using HVD SCSI and Y-cables with
>>external terminators, but from my previous experience with other
>>clusters that used this technique (DEC ASE and HP-UX ServiceGuard), bus
>>resets will always be a thorn in your side without a separate,
>>independent RAID controller to act as a go-between.  Calling these
>>configurations simply "not supported" is an understatement - this type
>>of config is guaranteed trouble.  I'd never set up a cluster this way
>>unless I were the only one using it, and only then if I didn't care one
>>little bit about crashes and data corruption.  My two cents.
>>
>>-steve
> 
> 
> 
> Small clarification - Although clusters from DEC, HP, and even
> DigiComWho?Paq's TruCluster can be made to work (sort of) on multi-
> initiator SCSI buses, IIRC it was never a supported option for any of
> them (much like Red Hat's offering).  I doubt any sane company would
> ever support that type of config.
> 
> -steve   
> 

HP-UX ServiceGuard works well with multi-initiator SCSI configurations, and is fully supported by HP.  It is sold that way for small 2-4 node clusters when cost is an issue, although Fibre Channel has become a big favorite (um... money maker) in recent years.  Yes, SCSI bus resets are a pain, but they are handled by HP-UX itself, not by ServiceGuard.
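
For anyone who wants to try the GFS-on-RHEL4 setup Steve describes
anyway, the two-node case needs special handling in
/etc/cluster/cluster.conf: cman has to be told it can stay quorate with
a single vote.  A minimal sketch follows - the node names are made up,
and fence_manual is only tolerable on a throwaway testbed:

  <?xml version="1.0"?>
  <cluster name="testbed" config_version="1">
    <!-- let a 2-node cluster stay quorate when one node is down -->
    <cman two_node="1" expected_votes="1"/>
    <clusternodes>
      <clusternode name="nodeA" votes="1">
        <fence>
          <method name="single">
            <device name="human" nodename="nodeA"/>
          </method>
        </fence>
      </clusternode>
      <clusternode name="nodeB" votes="1">
        <fence>
          <method name="single">
            <device name="human" nodename="nodeB"/>
          </method>
        </fence>
      </clusternode>
    </clusternodes>
    <fencedevices>
      <!-- fence_manual waits for a human to acknowledge fencing;
           fine on a testbed, a recipe for corruption anywhere else -->
      <fencedevice name="human" agent="fence_manual"/>
    </fencedevices>
  </cluster>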

--Keith
