
[OT] HyperSCSI

Well, it was a success today!

I've got three systems I'm playing with.  The first is a dual-Xeon workstation 
with an IDE drive as the OS drive and a SCSI Ultra-320 drive for "data"; the 
OS is Red Hat 8.0 GPL, kernel 2.4.18-24.8.0 (from Red Hat).  The second is a 
dual-Xeon 2U rack server with one Ultra-160 hard drive for the OS and another 
Ultra-160 hard drive for "data"; same OS and kernel.  The third system is a 
single-P4 2U rack box with two Ultra-160 disks for the OS.

On the first two systems, I installed the HyperSCSI RPMs (rebuilt for SMP) 
and configured both to share out the "data" drive as part of the "pogolan" 
group.  Neither of these drives has a partition table.  On the third system, 
I installed the HyperSCSI RPM and configured it as a client of the "pogolan" 
group.  The client was able to see two disks, /dev/sdc and /dev/sdd.  On the 
client, I created LVM physical volumes on both /dev/sdc and /dev/sdd.  Then I 
created an LVM Volume Group, "pogolan", that included both disks.  I then 
created an LVM Logical Volume, "pogo_lan", that took up the entire space of 
the Volume Group (~36 GB, from two 18 GB drives, one in each of the two 
"server" systems).  I then created an ext3 file system on 
/dev/pogolan/pogo_lan.  I was then able to mount /dev/pogolan/pogo_lan on 
/mnt/pogolan and write data to it!!
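For anyone following along, the client-side LVM steps above look roughly like 
this (a sketch from memory; device names match my setup, and the %FREE syntax 
is from the newer LVM2 tools; with the LVM1 tools of that era you'd pass an 
explicit extent count from vgdisplay instead):

```shell
# On the client, after the HyperSCSI client module has attached the
# two shared drives as /dev/sdc and /dev/sdd:

# Initialize both whole disks as LVM physical volumes
pvcreate /dev/sdc /dev/sdd

# Create a volume group spanning both remote disks
vgcreate pogolan /dev/sdc /dev/sdd

# Create one logical volume using all free extents in the group
# (LVM2 syntax; under LVM1, use "-l <extent count>" instead)
lvcreate -l 100%FREE -n pogo_lan pogolan

# Make an ext3 filesystem and mount it
mkfs.ext3 /dev/pogolan/pogo_lan
mkdir -p /mnt/pogolan
mount /dev/pogolan/pogo_lan /mnt/pogolan
```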

Initial hdparm readings are pretty good: -t gives a ~8 MB/s transfer rate, 
and -T gives over 300 MB/s.  Keep in mind, this is on an existing 10/100 Mb/s 
hubbed network with plenty of traffic.  I've effectively reached the limit of 
the network, and I will have to move to a GigE network to get full speeds.
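For reference, those numbers come from hdparm's two built-in timing tests, run 
against the LVM device on the client:

```shell
# -t: timed buffered disk reads -- for a HyperSCSI disk these actually
#     traverse the network, so this is the number the hub limits
hdparm -t /dev/pogolan/pogo_lan

# -T: timed cache reads -- served from RAM, no disk or network involved,
#     which is why it is so much higher than -t
hdparm -T /dev/pogolan/pogo_lan
```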

This is all very cool to me.  I'm now trying to research the supposed failover 
capability of HyperSCSI so that I can configure an entire spare system as a 
failover target, for decent redundancy.  I'm also looking into running Red Hat 
Advanced Server on two client systems so that I can have an HA failover 
cluster for serving NFS/SMB off of the LVM volumes.  I also want to try adding 
another HyperSCSI server, extending the LVM Volume Group, and perhaps 
extending some of the LVM LVs or adding new LVs to the group.
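Growing the storage when a new HyperSCSI server joins should be the standard 
LVM dance (a sketch; /dev/sde is a hypothetical name for the new remote disk, 
and in this era the ext3 resize has to happen offline):

```shell
# New remote disk exported by the additional HyperSCSI server
# (/dev/sde is an assumed device name)
pvcreate /dev/sde

# Add it to the existing volume group
vgextend pogolan /dev/sde

# Grow an existing LV by, say, 10 GB
lvextend -L +10G /dev/pogolan/pogo_lan

# Resize the ext3 filesystem offline (no online ext3 resize back then)
umount /mnt/pogolan
resize2fs /dev/pogolan/pogo_lan
mount /dev/pogolan/pogo_lan /mnt/pogolan
```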

All in all, I have to say that HyperSCSI was VERY easy to set up, and I'm 
looking forward to building some advanced storage clusters using HyperSCSI 
technology.  One would assume that you could put OpenGFS on top of the shared 
disks.  The only catch is that HyperSCSI cannot be a server and a client at 
the same time.

Anyhow, if anybody is interested, I can try to set up OpenGFS on these systems 
and let you know the results.

Jesse Keating RHCE MCSE
Mondo DevTeam (www.mondorescue.org)

