[Linux-cluster] stress-testing GFS ?

Patton, Matthew F, CTR, OSD-PA&E Matthew.Patton.ctr at osd.mil
Thu Mar 16 19:16:31 UTC 2006


Classification: UNCLASSIFIED
 
Just idly wondering what the I/O would be if we NFS-exported the ext3 or GFS
filesystem to the other node and ran iozone on two such clients. There was no
file contention, was there? By that I mean each instance of iozone was writing
to a different directory (both on GFS), so file-level read/write locking
wasn't a factor. Presumably GFS locking is all about keeping the filesystem
meta-data intact. BTW, has anyone applied the idea behind SoftUpdates to GFS?
Say part of the heartbeat is a broadcast of the meta-data changes, so while
data blocks might be written synchronously, not every meta-change has to wait
for the FC/array to commit it to disk before continuing. I'm thinking of what
we did for firewall-farm synchronization, which was 1xActive/NxPassive: all
nodes could handle each other's network traffic at any time should the
current master drop off, with the only streams affected being those initiated
since the last status update message went out to the passive nodes.
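To make the idea concrete, here's a minimal sketch of that scheme: the active node commits data blocks synchronously but lets meta-changes ride the heartbeat to the passive nodes, which just spool them. All of the names (MetaJournal, Node, spool, etc.) are invented for illustration; this is not GFS code.

```python
# Hypothetical sketch: synchronous data writes, lazily broadcast metadata.
# Nothing here is real GFS API; names are made up for illustration.

class MetaJournal:
    """Append-only metadata log kept on each node."""
    def __init__(self):
        self.entries = []   # list of (seq, op) pairs
        self.seq = 0

    def append(self, op):
        self.seq += 1
        self.entries.append((self.seq, op))
        return self.seq

class Node:
    def __init__(self, name):
        self.name = name
        self.journal = MetaJournal()
        self.peers = []     # passive nodes receiving our heartbeat

    def write(self, block, meta_op):
        # The data block would be committed synchronously to the array here.
        seq = self.journal.append(meta_op)
        # The meta-change rides the next heartbeat instead of waiting on disk.
        for peer in self.peers:
            peer.spool(seq, meta_op)
        return seq

    def spool(self, seq, op):
        # Passive node records the change; it would only replay on failover.
        self.journal.entries.append((seq, op))
        self.journal.seq = max(self.journal.seq, seq)

active = Node("nodeA")
passive = Node("nodeB")
active.peers.append(passive)
active.write("blk0", ("create", "/dir1/file1"))
active.write("blk1", ("link", "/dir1/file2"))
```

As with the firewall farm, the only changes at risk on a master failure are the ones made since the last heartbeat the passives acknowledged.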
 
Would it work such that the nodes vote on a meta-master, and all meta-data is
kept in memory and then periodically flushed? If each meta-change is
broadcast and each node spools it to local storage, then when it's time to
elect a new master the nodes can consult their transaction histories. Is
there a good paper that describes the detailed inner workings of GFS, aside
from reading all the code? So far I've found this:
https://open.datacore.ch/DCwiki.open/Wiki.jsp?page=GFS
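The election step I have in mind could be as simple as this: each surviving node has spooled the broadcast meta-changes, so the one holding the longest transaction history (highest sequence number seen) wins the vote. The function name and the tie-breaking rule are assumptions for the sketch, not anything GFS actually does.

```python
# Hypothetical meta-master election: most up-to-date transaction history wins.
# Not real GFS behaviour; names and tie-break rule are assumptions.

def elect_meta_master(nodes):
    """nodes: dict mapping node name -> highest metadata sequence it spooled."""
    # Pick the node with the highest sequence number; break ties by
    # node name so every voter deterministically picks the same winner.
    return max(nodes, key=lambda name: (nodes[name], name))

survivors = {"nodeA": 1041, "nodeB": 1044, "nodeC": 1044}
new_master = elect_meta_master(survivors)  # nodeC: tied seq, higher name
```

Nodes behind the winner would then pull the meta-changes they missed from its history before the filesystem goes live again.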

