[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [K12OSN] New Building's LTSP Server

I'd suggest buying the parts and building your own box. You get exactly what you want, it's modern, and you can save some cash over buying a "server" box from, say, HP, Dell, etc. For example, I just built a quad-core 3.2GHz AMD box with 8GB DRAM and a 1.5TB SATA disk drive for just under $400. With careful shopping, you could build, say, a 12-core box, with 16GB DRAM, for about $1200. A single-socket 6-processor machine with 16GB DRAM would run about $600.

For storage, yes, SCSI drives are very reliable, no doubt. Like you, I've had SCSI disks last 8+ years. However, SATA drives are so cheap by comparison, and disk drive reliability has generally gotten very good over the years. I'd have no big concerns about going SATA nowadays.

Since everything's Gigabit, personally I'd recommend two servers: one for LTSP and another for the home directories (your file storage). Put them both on the same flat network, just for simplicity, with NFS automount (this is very easy to set up). I'm assuming that Adobe Flash (read: CPU hog) is in play. Since all your terminals are P4s and can handle it, you could run Firefox locally and have everything else run from the LTSP server.
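The automount piece really is only a few lines of config. A minimal autofs sketch — the hostname "fileserver" and the export path are placeholders, substitute your own:

```
# /etc/auto.master -- let autofs manage /home
/home   /etc/auto.home

# /etc/auto.home -- mount each user's home directory over NFS on demand
# "fileserver" and /export/home are assumptions; use your real server/export
*   -fstype=nfs,rw,soft   fileserver:/export/home/&
```

Restart autofs after editing, log in as a test user, and the home directory mounts on first access.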

The one concern I had with your proposed setup is LTSP bandwidth, specifically with the unmanaged switches. With that many terminals, I would normally suggest Multi-Linking (bonding) a couple of Gig-E NICs in the LTSP server. However, the switch needs to support that, and unmanaged switches can't (it's something you have to explicitly configure). So that means only one Gig-E link to and from your LTSP server for your thin clients.
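For the record, here's roughly what that Multi-Link setup would look like if you did have a managed switch with LACP configured on the two ports — a Debian/Ubuntu-style /etc/network/interfaces sketch, with example addresses; this will NOT work on unmanaged gear:

```
# /etc/network/interfaces -- 802.3ad (LACP) bond of two Gig-E NICs
# Needs the ifenslave package AND a managed switch with LACP set up
# on both ports; unmanaged switches can't do this.
auto bond0
iface bond0 inet static
    address 192.168.1.254      # example address for the thin-client side
    netmask 255.255.255.0
    bond-slaves eth1 eth2
    bond-mode 802.3ad
    bond-miimon 100
```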

However, there is a way to deal with even that. It's the same method that I figured out some years ago to have i386, PPC, and SPARC thin clients simultaneously running on my x86 LTSP server. Yep, three different client CPU architectures at the same time! :-) How? You separate groups of clients onto different physical segments/subnets (SPARCs on one, PPCs on another, x86 on a third). While all your clients are x86, it also has a very nice side benefit that applies to your situation: spreading bandwidth usage out and relieving congestion.

Consider the standard 2-NIC LTSP server setup. There's the "outside" link and the "inside" link. We'll call the outside link "eth0" and the inside link "eth1" (you can name them either way; that's just my mood this morning). Eth0 is connected to your main school LAN, and eth1 is hooked up to your terminals (we'll say on the 192.168.1.x subnet). But you discover that 65 clients on the eth1 segment are just too much for that one Gig-E segment, and users are complaining that "the system feels slow."
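In /etc/network/interfaces terms (Debian/Ubuntu style), that standard two-NIC layout looks roughly like this — the addresses are just examples:

```
# /etc/network/interfaces -- standard two-NIC LTSP server
auto eth0
iface eth0 inet dhcp           # "outside" link to the main school LAN

auto eth1
iface eth1 inet static         # "inside" link to the thin clients
    address 192.168.1.254      # example; match your dhcpd.conf scope
    netmask 255.255.255.0
```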

No problem. Drop a second NIC in that server and call it "eth2". Give it 192.168.2.1 and copy 'n' paste an appropriate DHCP scope for this subnet into dhcpd.conf (I just use the "eth1" segment as a template). Take one of your unmanaged thin-client switches and hook it up to this new eth2, then move some of your thin clients to this new segment. Remember, the new segment is ONLY connected to eth2; it doesn't, and must not, touch eth1 at all. When you fire up a client on this segment, it should get a 192.168.2.x IP address. Now each of your Gig-E NICs is serving only, say, 30 clients apiece instead of 65. Think of it as a "poor man's Multi-Link".
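The copied scope in dhcpd.conf would look something like this — the addresses and the pxelinux path are examples; copy the real values from your existing eth1 scope:

```
# /etc/ltsp/dhcpd.conf -- second thin-client scope for the eth2 segment,
# pasted from the eth1 scope and renumbered to 192.168.2.x
subnet netmask {
    range;
    option routers;
    option broadcast-address;
    next-server;          # the LTSP server itself (TFTP)
    filename "/ltsp/i386/pxelinux.0";   # example boot path
}
```

Restart dhcpd after editing, and watch /var/log/syslog while booting a client on the new segment to confirm it pulls a 192.168.2.x lease.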

All you're doing here is adding a second internal LTSP subnet. In your scenario, you could add not just an "eth2" but also an "eth3" and have about 22 computers per thin-client segment. That would most certainly solve any bandwidth issues. Obviously one could take this to an "eth4", "eth5", etc., if one wanted to and had that many PCI slots. At that point, though, I'd just go for a 10Gb NIC. :-)


Joseph Bishay wrote:

I hope everyone had a good weekend.

So I'm waist-deep in the design of our new building.  Quite the
exercise being in charge of designing nearly everything that runs on
electricity in the building!

I wanted some recommendations for our new LTSP network please.  Details:

1) Total of 65 computers spread out across 3 floors -- library,
computer lab, and study lounge are areas of high concentration.
2) All cabling is cat 6
3) All switches and network cards are gigabit and unmanaged.
4) Terminals are 2-3 year old Pentium 4 computers or inexpensive
modern computers (IE suitable for local apps or thin client)
5) Distribution is Edubuntu
6) The setup is for an elementary school's students, teachers, and admin
staff / business meetings.  No specific software except OpenOffice and
7) Budget for all servers needed is $2000 (IE: either one server for
LTSP or if I needed an application server plus home fileserver total
still $2000)

I've had an old Proliant DL360 G3 (
http://h18000.www1.hp.com/products/servers/proliantdl360/ ) humming
along for a year and supporting our old building of 15 computers no
problem.  It has dual 2.8 GHz Xeon chips and 2 GB of RAM and is usually
underutilized.  The bottleneck is clearly the RAM, as every now and
then it hits swap.  In our new setup it certainly won't be
underutilized, so I'll need something much more powerful.

Should I look for a cutting-edge desktop computer (running an Intel
i-series chip such as this:
http://www.newegg.com/Product/Product.aspx?Item=N82E16883108475 ) or
am I better off getting an actual (older) purpose-built server (such
as another ProLiant)?

The final aspect is the hard drives.  Personally I'm a huge fan of SCSI
drives -- both this server and the prior custom-built servers had
rock-solid reliable hard drives that ran non-stop for something like 8
years without a hitch.  Of course they were super-expensive with the
RAID controller, etc.  Would you recommend a SATA RAID controller and
SATA drives for such a setup nowadays, or stick with SCSI?

Did I miss anything else? :)

Thanks for all your insight!

K12OSN mailing list
K12OSN redhat com
For more info see <http://www.k12os.org>
