[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [K12OSN] More feedback on Fedora 10 + LTSP



On Mon, Apr 27, 2009 at 7:45 PM, Warren Togami <wtogami redhat com> wrote:
> On 04/27/2009 11:29 AM, David Hopkins wrote:
>>
>> This is my weekend update on what I did and found.
>>
First, I deleted the ltspbr0 bridge (brctl delbr ltspbr0).  I then
reconfigured the formerly slaved eth1 to have the correct IP/subnet
info.  After a quick reboot (tftp didn't want to work after the
change, and I couldn't figure out which service to restart; it wasn't
xinetd), all the terminals booted.
>
> Have you confirmed that changing from ltspbr0 to direct ethX makes a
> measurable difference in throughput?

Actually, I can't make a claim either way, since right now I'm
confused about just how much throughput I'm getting without ltspbr0
even present.  I installed iftop as well as ntop and tried launching
tuxmath on several thin clients simultaneously.

For iftop: the first thin client used approximately 38 Mb/s just to
bring up the menu.  Once the game was running, it was using
approximately 70 Mb/s.  The second client likewise used approximately
38 Mb/s for the menu.  However, the moment I selected an option from
the menu, both instances dropped to around 56 Mb/s.  When I added a
third client, this dropped again, into the mid-30 Mb/s range.  The
total throughput stayed constant at around 110 Mb/s.  However, even
at 3 clients, the game was not playable (it was very, very slow).  I
continued adding clients, and iftop continued to report that the
total stayed around 110 Mb/s, but the bandwidth for each client
dropped with each addition.  At six clients, it was about 18 Mb/s per
client.
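The pattern in those numbers is consistent with a fixed ~110 Mb/s ceiling being divided evenly among the clients.  A quick sanity check (the 110 Mb/s total is just the figure iftop reported; nothing else here is measured):

```python
# Sanity check: if the server tops out at ~110 Mb/s total and the
# clients share it roughly evenly, the per-client share should match
# what iftop reported as clients were added.
TOTAL_MBPS = 110  # approximate total throughput reported by iftop

for clients, observed in [(2, 56), (3, 36), (6, 18)]:
    share = TOTAL_MBPS / clients
    print(f"{clients} clients: {share:.1f} Mb/s each (iftop showed ~{observed})")
```

The computed shares (55.0, 36.7, and 18.3 Mb/s) track the observed per-client figures closely, which is what you would expect from a shared, saturated bottleneck rather than a per-client limit.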

However, the switch still showed that the link between it and the
server was 1000 Mb/s.  So why is only about 1/10 of the bandwidth
being used, as far as iftop is concerned?
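For scale (simple arithmetic, not a diagnosis): ~110 Mb/s is about 11% of the nominal gigabit rate.  It is also in the neighborhood of what a full-duplex 100 Mb/s hop could carry if the tool is summing both directions, so it may be worth verifying the negotiated speed on every port in the path, not just the server uplink; that reading of the numbers is an assumption, though, not something the data proves.

```python
# Scale check: what fraction of the nominal link rates does the
# observed ~110 Mb/s total represent?
observed_total = 110  # Mb/s, as reported by iftop

gigabit = 1000        # Mb/s nominal link rate shown by the switch
fast_ethernet = 100   # Mb/s nominal; full duplex carries 100 each way

print(f"fraction of gigabit used: {observed_total / gigabit:.0%}")
print(f"full-duplex 100 Mb/s ceiling (both directions): {2 * fast_ethernet} Mb/s")
```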

ntop was a little different.  The peak in the Global Statistics
traffic report was 130 Mb/s.  However, the Network Throughput, All
Hosts page gave a peak value for the server of 739 Mb for a 10-second
interval.  If that really means 739 Mb / (10 s interval) = 73.9 Mb/s,
then it roughly agrees with iftop (these were different runs).
Similarly, the individual hosts report values about a factor of 10
above the peak instantaneous values iftop reported.
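If ntop's peak column really is megabits accumulated over the 10-second sampling interval rather than a per-second rate (an assumption about how that page labels its units), the conversion lines up with iftop's single-client figure:

```python
# Convert ntop's per-interval peak to a per-second rate, assuming the
# 739 figure is Mb accumulated over the 10-second sampling interval.
peak_mb_per_interval = 739
interval_seconds = 10

rate_mbps = peak_mb_per_interval / interval_seconds
print(f"{rate_mbps:.1f} Mb/s")  # close to iftop's ~70 Mb/s single-client peak
```

That would also explain the consistent factor-of-10 gap between the two tools' per-host numbers.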

The load average on the server with 6 clients running tuxmath was
1.12.  This is an 8-processor system (32-bit Xeon MP CPUs at 2.8 GHz)
with 4 GB of memory.  top showed that no swap was being used.
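Since load average roughly counts runnable tasks, a load of 1.12 on an 8-CPU box means the processors are mostly idle, so CPU is an unlikely bottleneck here:

```python
# Rough CPU utilization implied by the load average on an 8-CPU server.
load_average = 1.12
cpus = 8

utilization = load_average / cpus
print(f"approximate CPU utilization: {utilization:.0%}")  # ~14%
```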

I ran a similar test on my newer server and observed similar
behavior.  That server is completely different hardware (Supermicro
motherboard, 64-bit Intel Xeon 5410, dual dual-core system).

So why does it appear that bandwidth is somehow being limited at the
switch (these are Amer.com switches)?  Or is there something else I
can look at?

Sincerely,
Dave Hopkins

