[Linux-cluster] io scheduler and gnbd
Markus Hochholdinger
Markus at hochholdinger.net
Fri Oct 20 14:20:37 UTC 2006
Hi,
I've been successfully using gnbd as a single service for a long time. Now I've
discovered a weird problem with the gnbd devices with kernel 2.6.18. I built
the gnbd.ko module from the CVS tree.
Everything works fine as long as you don't do too much on the gnbds. But if you
stress test the devices, the gnbds will hang, i.e. reads and writes hang. If you
restart the gnbd server, the client will continue to read and write until the
next hang.
So I first checked my gnbd servers and tried versions from 1.01 through 1.03 and
the latest CVS, but the problem is still there. From another gnbd client I had no
problems with any of these gnbd server versions (I was impressed that you can mix
these versions). Changing the kernel on the gnbd server didn't help either.
So I was stuck with the gnbd client and kernel 2.6.18; I have to use this
kernel because of the new hardware. I experimented a little and found out that
changing the default I/O scheduler for the gnbd devices on the client makes
the hanging writes and reads resume. The default scheduler was cfq, and with
that I can easily reproduce this behavior; with the deadline scheduler it
doesn't happen.
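For reference, switching the scheduler at runtime is just a write to the
queue/scheduler file in sysfs. Below is a minimal Python sketch of that,
assuming the imported devices show up as gnbd* under /sys/block (a plain echo
into the same file from the shell does the job just as well); the scheduler
name is only an example:

#!/usr/bin/env python
# Minimal sketch: switch the I/O scheduler for all gnbd block devices via sysfs.
# Assumes imported gnbd devices appear as /sys/block/gnbd*; adjust as needed.
import glob
import sys

SCHEDULER = "deadline"   # or "noop", "cfq", ...

def set_scheduler(dev_path, scheduler):
    sched_file = dev_path + "/queue/scheduler"
    # The file lists the available schedulers, with the active one in brackets.
    print("%s before: %s" % (dev_path, open(sched_file).read().strip()))
    f = open(sched_file, "w")
    f.write(scheduler)
    f.close()
    print("%s after:  %s" % (dev_path, open(sched_file).read().strip()))

if __name__ == "__main__":
    devices = glob.glob("/sys/block/gnbd*")
    if not devices:
        sys.exit("no gnbd devices found under /sys/block")
    for dev in devices:
        set_scheduler(dev, SCHEDULER)

Reading the file before and after shows which schedulers the kernel offers and
which one is active, so you can verify the change actually took effect.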
So I read a little about I/O scheduling on Linux, and my assumption is that a
gnbd device shouldn't need any I/O scheduling, because the network doesn't have
seek latency like a hard disk. The gnbd server gets requests from more than one
gnbd client, so scheduled I/O on the client would mix up the scheduling on the
server anyway. And the server also does its own I/O scheduling when writing to
the real disk.
So I could just use the noop scheduler, or have I missed something?
Does anyone on the list have more info about I/O scheduling and gnbd?
--
greetings
eMHa