[K12OSN] More feedback on Fedora 10 + LTSP

David Hopkins dahopkins429 at gmail.com
Mon Apr 27 17:51:13 UTC 2009


On Mon, Apr 27, 2009 at 1:35 PM, Les Mikesell <lesmikesell at gmail.com> wrote:
> David Hopkins wrote:
>>
>> ethtool reports that the link is up at 1000Mb, full duplex. (mii-tool
>> says it is 100Mbit, but that tool is quite old.) dmesg shows the
>> link up at 1000Mb as well.
>
> Is the switch managed so you can check there as well?  You should see no
> errors in the server 'ifconfig' output or the switch interface statistics
> (or at most a couple if you have replugged the connections). If the switch
> isn't managed, it might have a link light indicator that shows full/half
> duplex.

I tried both the SGR24W (24-port Gigabit) and the SR24G2W (24-port
10/100 with 2 Gigabit ports) from amer.com. They are WebSmart switches
rather than fully managed ones, but they do expose statistics through a
web interface, and neither shows any network errors at all. In the case
of the SR24G2W, it had been up 46 days without a single error. On the
server, ifconfig shows no network errors either. The switches report
that all connections are full duplex with flow control enabled.
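
In case it helps, this is roughly what I am running on the LTSP server
to double-check the link from that side (eth0 here is just a stand-in
for whatever the actual interface is named):

    # negotiated speed and duplex as the driver sees them
    ethtool eth0

    # per-interface error/drop counters; these should stay at or near zero
    ip -s link show eth0

    # NIC-level statistics (driver dependent), useful for spotting CRC errors
    ethtool -S eth0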

>> Now, I tested the throughput between the server and the file server
>> using netio.  This reports a sustained throughput of 114MB/sec
>> (approximately 900Mbit/sec) between my servers on the network.
>> Since I am using the same NIC and switches for the thin client
>> side, I would expect about the same performance.
>
> Are you testing this with the same number of concurrent connections as when
> you see the tuxtype/math performance fall off - and across the same switch
> connections?  Often a duplex mismatch or similar error degrades badly with
> concurrent operations but might not show up with a single-connection test.

Yep ... same clients connected, same programs launched.  I'll try
iftop (suggested in the other post) and also ntop (which I had
completely forgotten about) this evening to see what they tell me.
Results seem to be the same for both the CentOS K12LTSP and the
FC10+LTSP5 systems. Hopefully, I'll get hard numbers tonight.
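
The rough plan for tonight looks something like the following (eth0 and
fileserver.local are just placeholders, and I am going from memory on
the netio switches, so treat this as a sketch rather than the exact
commands):

    # watch per-client bandwidth on the LTSP interface while the clients run tuxmath
    iftop -i eth0

    # ntop gives a longer-running, per-host view of the same traffic
    ntop -i eth0

    # repeat the netio test, but with the thin clients busy at the same time
    netio -s -t                 # on the file server
    netio -t fileserver.local   # on the LTSP server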

>> I then reverted to the original server and noticed the same
>> performance issue. I swapped out cables, but that didn't change
>> anything either.
>>
>> The load average on the LTSP server was around 1 with 6 clients
>> running tuxmath.  Memory use was about 1GB. So my best guess is a
>> network bandwidth issue. Tonight I'll connect my newest server to
>> that switch and see if it performs better. Maybe it is just old hardware.
>
> "Old" Gb switch uplinks should work as well as new ones if the cabling and
> configuration is OK.

Very true, but since the test only involves moving a cable between two
switches and then rebooting the clients, I might as well do it just to
be sure.



