[K12OSN] Servers, K12LTSP, and workstation numbers, etc.

Jim Kronebusch jim at winonacotter.org
Tue Jul 25 18:38:26 UTC 2006


On Tue, 25 Jul 2006 13:06:05 -0500, Les Mikesell wrote
> On Tue, 2006-07-25 at 08:26 -0700, Huck wrote:
> > So Jim,
> > 
> > You just bond two Gig-E NICS, and plug them into  two separate physical 
> > switches and say one switch heads off to LAB A(25 thin clients) and the 
> > other to Building B(20 thin clients).
> > 
> > And you don't have to touch DNS configuration or anything?
> 
> Bonded NICs have to go to the same switch and will help if the
> bottleneck is only within the interface to that switch.  If you
> have many clients connected to other ports on the same switch it
> will probably help. If you have clients distributed across many
> switches connected with a 1 gig backbone, it won't help as
> much to give the server a 2 gig uplink to its nearby switch.

Exactly as Les said.  They need to all go into the same switch.  We have all
of our servers here plugged into the same gigabit switch, so at least
transfers between servers are very fast.  Then there is a gigabit backbone from
the MDF to all IDFs.  We try to keep to no more than 48 clients per gigabit
run.  So if an IDF has six 24-port switches, we run three gigabit runs and
chain the switches in pairs.  We use a combination of cat5e copper and fiber
for the runs, depending on the length.  And all runs lead back to the main
gigabit switch in the MDF.  As things grow and expand we are exceeding the
capacity of our network, especially with the possibility of IP phones in the
near future.

Hopefully low-dollar 10Gb technology develops quickly :-)

Another option that would help relieve network bottlenecks, which I have seen
in other tutorials, is to have all application servers talk to the boot server
over a backend network backbone on a completely separate set of NICs and
switches.  In theory that takes the traffic of the servers talking to each
other off of the network used to boot and serve clients.  You could even use
adapter teaming to this effect: create bond0 on the client side with 2 or 3
gigabit NICs on, say, the 192.168.0.x IP scheme, and create bond1 on the
application server side with 2 or 3 NICs on another IP scheme of 192.168.1.x,
all in the same LTSP boot server.  Then each application server's bond would
be set up on the 192.168.1.x network as well, and they would all be connected
together with a gigabit switch.
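
For anyone wanting to try that, something along the lines of the following
Fedora-style config (K12LTSP is Fedora-based) is roughly what the dual-bond
setup would look like.  I haven't tested these exact files, so treat the
interface names, addresses, and bonding mode below as placeholders:

  # /etc/modprobe.conf -- load the bonding driver and allow two bond devices
  alias bond0 bonding
  alias bond1 bonding
  options bonding max_bonds=2 miimon=100 mode=balance-alb

  # /etc/sysconfig/network-scripts/ifcfg-bond0 -- client/boot side
  DEVICE=bond0
  IPADDR=192.168.0.254
  NETMASK=255.255.255.0
  ONBOOT=yes
  BOOTPROTO=none

  # /etc/sysconfig/network-scripts/ifcfg-bond1 -- backend/application server side
  DEVICE=bond1
  IPADDR=192.168.1.254
  NETMASK=255.255.255.0
  ONBOOT=yes
  BOOTPROTO=none

  # /etc/sysconfig/network-scripts/ifcfg-eth0 -- one of these per slave NIC,
  # pointing eth0/eth1 at bond0 and eth2/eth3 at bond1 (names are examples)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes
  BOOTPROTO=none

Note that balance-alb works without any special switch configuration; if the
switch supports 802.3ad/LACP you could use mode=802.3ad instead.  Either way,
as Les said, both NICs in a bond should go to the same switch.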

I would think that could handle a fairly beefy setup.

-- 
This message has been scanned for viruses and
dangerous content by the Cotter Technology 
Department, and is believed to be clean.



