
Re: [K12OSN] New Building's LTSP Server





Joseph Bishay wrote:
Good day Terrell,

On Mon, Apr 18, 2011 at 11:42 AM, Terrell Prude' Jr.
<microman cmosnetworks com> wrote:
I'd suggest buying the parts and building your own box.  You get exactly
what you want, it's modern, and you can save some cash over buying a
"server" box from, say, HP, Dell, etc.  For example, I just built a
quad-core 3.2GHz AMD box with 8GB DRAM and a 1.5TB SATA disk drive for just
under $400.  With careful shopping, you could build, say, a 12-core box,
with 16GB DRAM, for about $1200.  A single-socket 6-processor machine with
16GB DRAM would run about $600.

May I ask where you obtained your parts from?

I bought 'em at my local Micro Center on sale.  Gotta love those sales.  :-)


For storage, yes, SCSI drives are very reliable, no doubt.  Like you, I've
had SCSI disks last 8+ years.  However, SATA drives are so cheap by
comparison, and disk drive reliability has generally gotten very good over
the years.  I'd have no big concerns about going SATA nowadays.

Do you have a preference for hardware vs. software SATA RAID?

Yes. I prefer hardware RAID whenever possible. There are two reasons. The first is that the work of maintaining the RAID falls to the RAID card's processor instead of your CPU. The second is that the RAID card will abstract the array so that you don't need to worry about how to do /boot, for example. It's just easier, especially for the next person coming in to maintain it.
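
For contrast, here's roughly what the software-RAID route looks like with mdadm -- just a sketch, assuming two blank SATA disks (sda and sdb), and note the /boot wrinkle: legacy GRUB can only boot a RAID1 /boot if the array uses 0.90 or 1.0 metadata, which is exactly the sort of detail a hardware card hides from you.

    # Mirror a small /boot partition, with end-of-partition metadata so the
    # bootloader sees it like a plain partition:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1

    # Mirror the rest for the root filesystem (or LVM):
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

    # Record the layout so the arrays assemble at boot:
    mdadm --detail --scan >> /etc/mdadm.conf

With a hardware card, all of that is just "create the array in the card's BIOS and install onto the one disk the OS sees."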

Since everything's Gigabit, personally, I'd recommend two servers: one to
run the LTSP sessions and another to serve the home directories.  Put them all on the same
flat network, just for simplicity, with NFS automount (this is very easy to
set up).  I'm assuming that Adobe Flash (read: CPU hog) is in play.  Since
all your terminals are P4's and can handle it, you could run Firefox locally
and have everything else run from the LTSP server.

I do want to consider this option -- I just need to make sure it's not
overly complex since I may not be the only person maintaining it.  I
also want to make sure I can run Firefox locally since that's about
90% of the computer usage.
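
On the local Firefox piece:  with LTSP 5 that's mostly an lts.conf entry plus having Firefox installed inside the client chroot.  A rough sketch -- I'm writing the directive names from memory, so check them against your LTSP version's docs, and the lts.conf location varies by distro:

    # lts.conf (under the client chroot or the tftpboot ltsp directory)
    [Default]
        LOCAL_APPS = True
        LOCAL_APPS_MENU = True
        LOCAL_APPS_MENU_ITEMS = firefox

That keeps the browser's CPU load (read: Flash) on the P4's instead of on the server.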

Automount's a snap. I had to learn it for the RHEL cert exams, and I couldn't believe how easy it was. Why I hadn't been using it for years before is now beyond me.
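
For the record, the whole setup is about two files on the LTSP server.  A minimal sketch, assuming the storage box exports /export/home and answers to the (made-up) hostname storage1:

    # /etc/auto.master -- hand the /home tree over to the automounter:
    /home    /etc/auto.home    --timeout=60

    # /etc/auto.home -- wildcard map: mount each user's directory on demand:
    *    -rw,hard,intr    storage1:/export/home/&

Restart the autofs service and the first access to a user's home directory mounts it on the fly.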

Normally I would suggest Multi-Linking a couple of Gig-E NIC's in the LTSP
server.  However, the switch needs to support that, and unmanaged switches
cannot do that (it's something you need to explicitly configure).  So, that
means only one Gig-E link to and from your LTSP server for your thin
clients.

I had specified unmanaged switches as they are cheaper and I've had
good experience with them.  If it were a deal-breaker or offered a significant
performance boost, I could swing a managed switch, but I'd need to
understand all the pros / cons.

Pros: you have a lot more flexibility with what you can do: VLANs, MultiLink, broadcast storm protection, etc.

Cons:  they cost more.
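
For what it's worth, here's roughly what the MultiLink (bonding) side looks like on the Linux end, RHEL/CentOS-style ifcfg files.  A sketch only -- the IP is made up, and the managed switch has to have the matching LACP/link-aggregation group configured on those ports or the whole thing falls over:

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.0.254
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes
    BONDING_OPTS="mode=802.3ad miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (and the same for eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes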


All you're doing here is simply adding on a second internal LTSP subnet.  In
your scenario, you could add not just an "eth2", but also an "eth3" and have
22 computers per thin-client segment.  That would most certainly solve any
bandwidth issues.  Obviously one could take this to an "eth4", "eth5", etc.
if one wanted to do that and had that many PCI slots.  At that point,
though, I'd just go for a 10Gb NIC.  :-)

I had played around with this a few years back before I learned that
my switch can't handle it.  A point that was raised at the time was that
the PCI bus on the server wouldn't really be able to support two
gigabit NICs at full speed anyway, so it's not that relevant.  Is that
still the case now, or is it a matter of the type of motherboard being
used, etc.?

Depends on your mobo. Back in the dual Athlon MP days, I had a Tyan Tiger MP which had little difficulty keeping up with two Gig-E NIC's. Of course, I was running said NIC's in 64-bit PCI-X slots, each of which was on a separate PCI-X bus. :-) Nowadays, with PCI-e, bandwidth concerns are a total non-issue. Today's monster CPU's have little difficulty pushing multiple 10Gig-E cards, so multiple Gig-E NIC's, as I'm describing, should be no problem. The key is to keep them on separate PCI buses (PCI-e does this naturally, like SATA does).
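
To put rough numbers on it:  plain old 32-bit/33MHz PCI tops out around 133 MB/s shared by everything on the bus, while one Gig-E NIC running flat out is about 125 MB/s in each direction, so two of them really could swamp that old shared bus.  A single PCI-e 1.0 lane, by contrast, gives each card roughly 250 MB/s per direction all to itself, which is why it stopped being a worry.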

--TP

