[K12OSN] Large scale implementation in the pipe, input needed.

Jeff Kinz jkinz at kinz.org
Fri Aug 20 00:28:59 UTC 2004


On Fri, Aug 20, 2004 at 12:23:45AM +0200, Daniel Hedblom wrote:
> Hello Norbert!
> 
> So, a server handling about 150 clients would be how big? Of course those
> wouldn't all be running at the same time at full load. The reason I ask is
> that I want to be able to estimate just how big a server it would have
> to be. I want to have solid ground under my feet and not just my own
> experience.

Hi Dan. 

How much do you have the students' environment locked down?

I have one installation where we have the thin clients (TCs)
absolutely locked down so they can only run OpenOffice and a browser.
For each TC we try to add about 100 MB of RAM to the server.

We want to make sure that swap is not used by the TCs, since having to
swap memory over an NFS link is really slow and eats network bandwidth.

So 150 TCs = 150 * 100 MB = 15,000 MB, or about 15 GB.  Caveat: the more TCs
you add to the server, the smaller the memory increment each one requires.

This is due to the net gain from shared libraries already being resident
when many TCs are running the same apps.  So ultimately this very
straightforward, linear estimate may cause you to get more memory than
you need.  Of course, if you know in advance how many TCs will be
running which apps at the same time, you can get a much better estimate
of your memory requirements.
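
To put a rough number on that reasoning, here is a minimal back-of-the-envelope
sketch in Python.  The 100 MB per TC figure is the rule of thumb above; the
20% shared-library discount is purely a guess for illustration, not a measured
value.

# Rough server RAM estimate for a thin-client setup.
def estimate_ram_mb(num_clients, mb_per_client=100, shared_discount=0.0):
    """Linear estimate, optionally reduced to reflect shared libraries
    already resident when many clients run the same apps."""
    return num_clients * mb_per_client * (1.0 - shared_discount)

clients = 150
print("Straight linear estimate: %.0f MB" % estimate_ram_mb(clients))
print("With a 20%% shared-library discount (a guess): %.0f MB"
      % estimate_ram_mb(clients, shared_discount=0.20))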

Now go look up how much more memory costs per GB when you have to buy
multi-GB sticks.  (Pricewatch says a 1 GB stick is $150 to $247, but they
are always low.)  Two 1 GB sticks from HP for the "ProLiant" system are
$1,149.  But... they're warrantied for life.  (Funny, so is the stuff from
Crucial.com that's faster and a tenth of the cost... how strange...)

A 4 GB stick is $1,700.00 -- I see a trend...

If your TCs are not locked down and the students have access to the
entire set of menus on the desktop, then you may need more than 100 MB
per TC.  (You can set per-user process limits so they can't open umpteen
apps at the same time.)
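
If your distribution uses pam_limits, one common place to set such a per-user
cap is /etc/security/limits.conf.  A minimal sketch, assuming a hypothetical
"students" group and an illustrative limit:

# /etc/security/limits.conf (read by pam_limits)
# Cap the number of processes each member of the "students" group
# may have, so one user can't launch umpteen apps at once.
@students        hard    nproc           50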


As for CPU requirements, a 2 GHz, low-end CPU can run about 10 TCs that
aren't too busy.  For your single-server system, you want a machine
with as many SMP CPUs on it as you can find, at least four, and eight is
better, and you want high-end CPUs: Xeons or whatever the current hot
chip is.  Again, you'll pay a premium over commodity-level components,
three to six times as much.  (A low-end CPU is $60; high end is ?)
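
The same kind of quick arithmetic works for CPUs.  The 10-TCs-per-low-end-CPU
figure is the rule of thumb above; the high-end multiplier is only an
assumption for illustration:

# Back-of-the-envelope CPU sizing, same spirit as the RAM estimate.
clients = 150
tc_per_lowend_cpu = 10                 # rule of thumb from above
lowend_cpu_equivalents = clients / float(tc_per_lowend_cpu)

highend_multiplier = 2.5               # guess: one high-end SMP CPU ~ 2.5 low-end ones
highend_cpus_needed = lowend_cpu_equivalents / highend_multiplier

print("Low-end 2 GHz CPU equivalents: %.0f" % lowend_cpu_equivalents)
print("High-end SMP CPUs, rough guess: %.1f" % highend_cpus_needed)

That lands in the four-to-eight range once you remember that not all 150 TCs
will be busy at the same time.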

In addition to being much more expensive, the centralized server
architecture is riskier and more problematic than having a server at
each location.

Its weaknesses are:

        Single point of failure.  If the server goes down, service
        does not degrade "gracefully"; it disintegrates.

        With multiple servers, at least many of the thin clients
        would still be able to function.

        Increased dependence on otherwise unneeded network links.  With
        the server at a location remote from the TCs it serves, rather
        than at the same site, the risk of a service failure increases
        because service now depends not only on the server running but
        also on the network links being intact (think backhoes, manhole
        fires, automobile/telephone-pole accidents, etc.) and on the
        network equipment (routers, switches, etc.) running as well.  A
        final vulnerability is the vendor/supplier who runs those
        comm lines for you.  If they have any issues (strikes, software
        glitches, security compromises), your connections between
        buildings are affected and you lose service.

        Another network issue is the plugged-pipe problem.  This much
        X-Windowing will generate a huge amount of traffic.  OK, maybe
        a 1 Gbit network with smart switches can handle it.  But what
        happens when you add streaming video being accessed by 40 or 50
        TCs simultaneously?  (See the rough arithmetic after this list.)
        Many legitimate curricula present their content this way, and as
        these types of content multiply and become more diverse, can your
        network handle the additional traffic?  Better to use a design
        that inherently reduces the network load from the get-go.

        Increased dependence on software integrity.  In the centralized
        scenario, if any single process consumes more than its share of
        machine resources, everyone sharing the single server is hit.
        (Setting disk quotas and memory and process limits will help
        prevent this.)

        Security issues.  One successful attack means all your resources
        get owned, and it leaves you with no untainted system to use as
        a base to work from.

        Expense?  The initial outlay for the single high-end machine
        required for a server of the size needed carries a large premium
        over the cost of the same power/capability obtained by purchasing
        multiple smaller off-the-shelf machines.
	
        Scaling issues?  Multiple servers versus one central server:
        each approach has its own set of issues.
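
To put a rough number on the streaming-video worry above, here is the kind
of quick arithmetic I mean.  The per-stream bitrate is an assumption picked
for illustration, not a measurement:

# Rough aggregate bandwidth for simultaneous video streams.
streams = 50                  # simultaneous video viewers
mbit_per_stream = 1.5         # assumption: ~1.5 Mbit/s per stream
aggregate_mbit = streams * mbit_per_stream

print("Aggregate video traffic: %.0f Mbit/s" % aggregate_mbit)
print("Roughly %.0f%% of a 100 Mbit link, or %.0f%% of a 1 Gbit link,"
      % (100.0 * aggregate_mbit / 100.0, 100.0 * aggregate_mbit / 1000.0))
print("on top of the X traffic the thin clients already generate.")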


-- 
Linux/Open Source.  Now all your base belongs to you, for free.
============================================================
Idealism:  "Realism applied over a longer time period"

Jeff Kinz, Emergent Research, Hudson, MA.




