[Linux-cluster] GFS over GNBD servers connected to a SAN?

Gary Shi garyshi at gmail.com
Fri Oct 28 17:00:33 UTC 2005


The Administrator's Guide suggests three kinds of configurations. In the second
one, "GFS and GNBD with a SAN", servers running GFS share a device exported by
GNBD servers. I'm wondering about the details of such a configuration. Does it
perform better because the load is spread across several GNBD servers instead
of a single one? Compared to the third configuration, "GFS and GNBD with
Directly Connected Storage", it seems the only difference is that we can export
the same device through different GNBD servers. Is that true? For example:

Suppose the SAN exports only one logical device, we have 4 GNBD servers
connected to the SAN, and 32 application servers share the filesystem via GFS.
The SAN disk appears as /dev/sdb on each GNBD server. Can we use "gnbd_export
-d /dev/sdb -e test" to export the device under the same name "test" on all
GNBD servers, have each group of 8 GFS nodes import from one GNBD server, and
thus let all 32 GFS nodes access the same SAN device in the end?
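
To be concrete, this is roughly what I have in mind (just a sketch, assuming
such a setup is allowed at all; the hostnames gnbd1..gnbd4 and the mount point
are placeholders):

    # On each of the 4 GNBD servers (gnbd1 .. gnbd4), export the SAN LUN
    # under the same name:
    gnbd_export -d /dev/sdb -e test

    # On the GFS nodes, each group of 8 imports from its assigned server,
    # e.g. the first group:
    gnbd_import -i gnbd1

    # Every GFS node then mounts the same filesystem on the imported device:
    mount -t gfs /dev/gnbd/test /mnt/gfs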

What configuration is suggested for a high-performance GNBD server? How many
clients is a reasonable load for a single GNBD server?

BTW, is it possible to run an NFS service on the GFS nodes and have different
client groups access different NFS servers, so that a large number of NFS
clients end up accessing the same shared filesystem?
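
For example, something along these lines (again only a sketch; the export
path, hostname and client network are made up):

    # /etc/exports on each GFS node that also acts as an NFS server:
    /mnt/gfs  192.168.1.0/24(rw,sync)

    # On the NFS clients of the group assigned to that node:
    mount -t nfs gfs1:/mnt/gfs /mnt/shared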

--
regards,
Gary Shi