[K12OSN] New Building's LTSP Server

Joseph Bishay joseph.bishay at gmail.com
Mon Apr 18 16:55:03 UTC 2011


Good day Terrell,

On Mon, Apr 18, 2011 at 11:42 AM, Terrell Prude' Jr.
<microman at cmosnetworks.com> wrote:
> I'd suggest buying the parts and building your own box.  You get exactly
> what you want, it's modern, and you can save some cash over buying a
> "server" box from, say, HP, Dell, etc.  For example, I just built a
> quad-core 3.2GHz AMD box with 8GB DRAM and a 1.5TB SATA disk drive for just
> under $400.  With careful shopping, you could build, say, a 12-core box,
> with 16GB DRAM, for about $1200.  A single-socket, six-core machine with
> 16GB DRAM would run about $600.

May I ask where you obtained your parts from?

> For storage, yes, SCSI drives are very reliable, no doubt.  Like you, I've
> had SCSI disks last 8+ years.  However, SATA drives are so cheap by
> comparison, and disk drive reliability has generally gotten very good over
> the years.  I'd have no big concerns about going SATA nowadays.

Do you have a preference for hardware vs. software SATA RAID?
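(For software RAID I assume that means mdadm.  My understanding is that
a mirrored pair of SATA disks would look roughly like the following --
the /dev/sdb1 and /dev/sdc1 device names are just placeholders:

    # Create a RAID1 mirror from two SATA partitions
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    # Record the array so it assembles automatically at boot
    mdadm --detail --scan >> /etc/mdadm.conf
    # Then put a filesystem on the array and mount it as usual
    mkfs.ext4 /dev/md0

I'm curious whether hardware RAID buys much beyond offloading that.)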

> Since everything's Gigabit, personally, I'd recommend two servers: one
> for the home-directory storage, and another for the LTSP sessions.  Put
> them all on the same flat network, just for simplicity, with NFS
> automount (this is very easy to set up).  I'm assuming that Adobe Flash
> (read: CPU hog) is in play.  Since all your terminals are P4s and can
> handle it, you could run Firefox locally and have everything else run
> from the LTSP server.

I do want to consider this option -- I just need to make sure it's not
overly complex, since I may not be the only person maintaining it.  I
also want to be sure I can run Firefox locally, since that accounts for
about 90% of the computer usage.
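From what I've read, the automount side would amount to two small map
files on the LTSP server -- something like this, assuming the storage
box is named "fileserver" and exports /export/home (both names are just
placeholders):

    # /etc/auto.master -- mount home directories on demand under /home
    /home   /etc/auto.home  --timeout=60

    # /etc/auto.home -- "*" matches the username, "&" substitutes it in
    *       -rw,soft,intr   fileserver:/export/home/&

If it really is that contained, a second admin should be able to follow
it.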

> Normally I would suggest multi-linking a couple of Gig-E NICs in the LTSP
> server.  However, the switch needs to support that, and unmanaged switches
> cannot do that (it's something you need to explicitly configure).  So, that
> means only one Gig-E link to and from your LTSP server for your thin
> clients.

I had specified unmanaged switches since they are cheaper and I've had
good experience with them.  If it were a deal-breaker, or if there were
a significant performance boost, I could swing a managed switch, but I'd
need to understand all the pros and cons.
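For reference, my understanding is that on a Fedora/Red Hat-style server
the bonding half would look roughly like this (interface names and
addresses are placeholders), and the 802.3ad mode is precisely the part
that needs a managed switch with LACP configured on its end:

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the aggregated link
    DEVICE=bond0
    IPADDR=192.168.0.254
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=802.3ad miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- one physical member;
    # repeat for eth1 with DEVICE=eth1
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

So the server side looks manageable; it's the switch side I'd have to
learn.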

< Snipped very clear explanation about aggregating NICs >

> All you're doing here is simply adding on a second internal LTSP subnet.  In
> your scenario, you could add not just an "eth2", but also an "eth3" and have
> 22 computers per thin-client segment.  That would most certainly solve any
> bandwidth issues.  Obviously one could take this to an "eth4", "eth5", etc.
> if one wanted to do that and had that many PCI slots.  At that point,
> though, I'd just go for a 10Gb NIC.  :-)

I had played around with this a few years back, before I learned that
my switch can't handle it.  A point raised at the time was that the PCI
bus on the server wouldn't really be able to support two gigabit NICs
at full speed anyway: a classic shared 32-bit/33 MHz PCI bus tops out
around 133 MB/s in total, while a single gigabit NIC can move roughly
125 MB/s.  Is that still the case now, or is it a matter of the type of
motherboard being used, etc.?  My impression is that PCI Express gives
each slot its own dedicated lanes (roughly 250 MB/s per direction per
lane), which would make this a non-issue on a modern board.
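If I do go the extra-NIC route, I gather each thin-client segment just
gets its own stanza in dhcpd.conf -- along these lines, with the subnet
numbers and boot filename as placeholders:

    # Thin-client segment hanging off eth2
    subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.150;
        option routers 192.168.1.254;
        next-server 192.168.1.254;      # boot/TFTP server on this segment
        filename "/ltsp/i386/pxelinux.0";
    }

    # Thin-client segment hanging off eth3
    subnet 192.168.2.0 netmask 255.255.255.0 {
        range 192.168.2.100 192.168.2.150;
        option routers 192.168.2.254;
        next-server 192.168.2.254;
        filename "/ltsp/i386/pxelinux.0";
    }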

Thank you
Joseph



