
Re: [K12OSN] gigabit



Timothy Legge wrote:

On Wed, 2005-01-05 at 10:28, David Trask wrote:


"Support list for opensource software in schools." <k12osn redhat com> on
Wednesday, January 05, 2005 at 9:09 AM +0000 wrote:


The clients don't benefit from anything beyond 100Mb.

Why? If the server has gigabit, the clients have gigabit, and the network
is gigabit...why wouldn't the data be served at gigabit speed? There are
some gigabit cards that etherboot will work with (not sure about
PXE)....I've always wanted to try the whole thing at gigabit speed...I
have gigabit servers and backbone, but my clients are 100baseT....I'd love
to see how much faster it'd be if it was gigabit through and through.



There are probably a couple of reasons.


Many applications and servers aren't tuned to take advantage of the
higher bandwidth.


If you have ever downloaded a kernel via TFTP over a gigabit link you
might wonder why it takes so long.  Even though many gigabit cards can
support much larger packets, TFTP packets are typically 512 bytes in
size.  That means there are still kernel_size/512 packets and an
equivalent number of ACKs, so your bottleneck is no longer the network
but the speed of the TFTP server and the Etherboot driver.  The same
would probably be true for NFS and X.
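
Here's a quick back-of-envelope sketch of that lock-step math (the 2 MB
kernel size is an illustrative assumption; 512 bytes is TFTP's classic
default block size from RFC 1350, and 1468 bytes is what the RFC 2348
blksize option can fit in a 1500-byte Ethernet MTU):

    # Why gigabit barely helps a lock-step TFTP fetch.
    # Assumptions: a 2 MB kernel image (illustrative); 512-byte default
    # blocks (RFC 1350); 1468-byte blocks via the RFC 2348 blksize
    # option (1500 MTU minus 20-byte IP, 8-byte UDP, 4-byte TFTP headers).

    KERNEL_SIZE = 2 * 1024 * 1024  # bytes

    for blksize in (512, 1468):
        data_packets = -(-KERNEL_SIZE // blksize)  # ceiling division
        # TFTP is lock-step: each DATA packet must be ACKed before the
        # next is sent, so the transfer costs one round trip per block,
        # and per-packet turnaround swamps raw link speed.
        print(f"blksize {blksize:>4}: {data_packets} DATA packets, "
              f"{data_packets} ACKs, {data_packets} round trips")

At 512 bytes that's 4096 round trips for a 2 MB kernel; a bigger blksize
cuts the count, but the lock-step design is why the wire speed hardly
shows.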

There is only so much data.  As someone pointed out, 100 Mb/s is
sufficient for most thin clients.  The user only reacts so quickly and
is probably the bottleneck.  I have watched DVDs on an X terminal and
the quality was excellent.  Again, there was only so much data being
sent to my terminal.  On a busy network it would be slower, but I only
needed a 100 Mb/s NIC.  It's like hooking a fire hose to your outdoor
water outlet: you probably won't fill a bucket any faster, because there
is only so much water pressure.

As I see it, once you remove the network as the bottleneck, you are
limited by the slower of what remains: the user, or the
application/server that you are using.
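
That whole argument fits in a line of code.  A minimal sketch (the
40 Mb/s application rate below is an illustrative guess, not a
measurement):

    # Throughput of a pipeline is set by its slowest stage.
    def effective_rate(*stage_rates_mbps):
        return min(stage_rates_mbps)

    # An app emitting ~40 Mb/s of screen updates (illustrative guess):
    print(effective_rate(40, 100))   # 40: the app, not the wire, limits you
    print(effective_rate(40, 1000))  # 40: going gigabit buys nothing here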

I could be way off base on this; it's based on my limited (and often
incorrect) understanding of network cards and protocols.

Tim



Tim, you're not off base at all. You're right on. The bandwidth test that I did with TuxType last year demonstrates this very well.


X11 needs such-and-such amount of data per pixel update or keystroke to display screen changes and handle keystrokes. I deliberately went for an app that updated the heck out of the screen at 1024x768x24bit. Oh, remember all the mouse movements and the keystrokes that have to get transmitted as well--can't forget that, either. The max traffic used was 73Mb/sec. TuxType is pretty much updating every pixel every millisecond--very much an "action game."
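
For what it's worth, the arithmetic on those figures is easy to check
(this is pure number-crunching on the 1024x768x24bit resolution and the
73Mb/sec peak quoted above):

    # Sanity check on the 73 Mb/s figure using the numbers quoted above.
    WIDTH, HEIGHT, BYTES_PER_PIXEL = 1024, 768, 3  # 1024x768, 24-bit color

    bits_per_screen = WIDTH * HEIGHT * BYTES_PER_PIXEL * 8
    print(bits_per_screen / 1e6)   # ~18.9 Mb per full uncompressed screen
    print(73e6 / bits_per_screen)  # ~3.9 full-screen updates/sec at 73 Mb/s

So even if TuxType pushed nearly four full uncompressed screens every
second, that's still under a tenth of a gigabit link--and X normally
redraws only the damaged regions, not the whole screen.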

Now, take something like OpenOffice.org or Firefox. Neither comes even remotely close to that volume of pixel updates, so they'll work even on a 10BaseT half-duplex network (switched, of course!!). I know this from using old Power Mac 5200s as thin clients. Before you ask, TuxType is indeed unplayable under such conditions; you do need those 100Mb NICs.
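
If anyone wants to see what their own clients actually pull, here's a
rough way to watch a NIC's throughput.  This is a sketch that assumes a
Linux box (it reads the kernel's counters in /proc/net/dev) and an
interface named eth0; adjust IFACE for your setup:

    #!/usr/bin/env python
    # Rough per-second NIC throughput monitor.  Assumes Linux and an
    # interface named eth0; reads the cumulative byte counters the
    # kernel exposes in /proc/net/dev.
    import time

    IFACE = "eth0"

    def rx_tx_bytes(iface):
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(iface + ":"):
                    fields = line.split(":", 1)[1].split()
                    return int(fields[0]), int(fields[8])  # rx, tx bytes
        raise ValueError("interface %s not found" % iface)

    prev = rx_tx_bytes(IFACE)
    while True:
        time.sleep(1)
        cur = rx_tx_bytes(IFACE)
        rx = (cur[0] - prev[0]) * 8 / 1e6  # megabits in, this second
        tx = (cur[1] - prev[1]) * 8 / 1e6  # megabits out, this second
        print("rx %6.2f Mb/s   tx %6.2f Mb/s" % (rx, tx))
        prev = cur

Run it on the terminal server while a client works and you'll see
numbers far below even 100 Mb/s for ordinary desktop apps.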

The point to all of this? To say that you are correct. There is only so much data that is needed for any given application's display needs, and my tests show that about 73Mb is the max. That fire hose analogy was great, so, trying not to be totally outdone, here's another. Going to Gig-E on the clients would be like using a semi-tractor-trailer to go shopping for your family's grocery needs for the next month. It'll certainly work, of course, but grocery shopping's not going to make me get rid of my pickup truck and go get a Peterbilt. :-)

--TP
_____________________
Do you GNU!? <http://www.gnu.org>
Be virus- and spam-free with Free/Open Source Software (FOSS). Check it out! <http://www.mozilla.org/thunderbird>


