AMD64 chipset Linux support ....

Bryan J. Smith b.j.smith at ieee.org
Wed Jul 12 05:58:04 UTC 2006


On Wed, 2006-07-12 at 00:23 -0500, William A. Mahaffey III wrote:
> Probably more like $700.00 total, apparently. $100.00+ does matter,
> ~15% ....

Factor into that cost estimate the time you'll spend swapping out a
board/chipset that was specifically designated as "does not test to
24x7 tolerances" (e.g., i865 v. i875).

> Sure. Finite Element grid/mesh generation & subsequent analysis, often
> coupled w/ CFD grid generation/analysis.
> Fluid-structure interaction problems (deforming SRM propellant grains
> under stress from the flow resulting from their combustion).
> Reasonably well resolved 2-D (i.e., not 3-D, thus I *can* afford
> pretty good 2-D resolution, borderline large-eddy type resolution on
> the fluid side, similar resolution on the solid side since I spend
> most of my CPU time on the fluid side & don't want/need to answer
> needling questions about skimping on resolution *anywhere* in the
> analysis). Turbulent, sub-sonic/transonic, complex internal geometries
> on the fluid side, largish deformations on the solid side. Definitely
> CPU/RAM intensive, not quite so I/O intensive. Local drives large
> enough to catch the results, then processed by another box on the LAN
> (where the Tecplot license lives), but still by me, i.e. only one user
> at any 1 time.
> These boxen are like droids, w/ only 1 master, not dozens like most
> public servers. I/O is less important,

But what's your data rate from the distributed clients to the master(s)?

At what point will local storage I/O and network traffic, sharing the
same bus, start saturating each other?

That all affects how linearly the application scales.

If you're pushing back just 10MB/s to the master per client with at
least 10 clients, you've already saturated a 32-bit PCI bus with
network traffic alone -- not counting Ethernet's protocol overhead or
the added throughput needed to put the data to disk (if required).
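
To put rough numbers on that -- a back-of-the-envelope sketch only, with
the bus-efficiency factor below being an assumption:

    # Back-of-the-envelope PCI saturation check (nominal/assumed figures).
    PCI_32_33_PEAK_MB_S = 133    # 32-bit x 33MHz shared PCI, theoretical peak
    BUS_EFFICIENCY      = 0.6    # rough allowance for shared-bus arbitration overhead
    CLIENTS             = 10
    MB_S_PER_CLIENT     = 10     # results each client pushes back to the master

    usable_bus = PCI_32_33_PEAK_MB_S * BUS_EFFICIENCY   # ~80 MB/s actually usable
    net_in     = CLIENTS * MB_S_PER_CLIENT              # 100 MB/s of inbound network
    print(f"usable ~{usable_bus:.0f} MB/s vs. {net_in} MB/s of network traffic"
          " -- saturated before any disk writes on the same bus")

Even granting the bus its full theoretical peak, that network traffic
leaves almost nothing for storage sitting on the same 32-bit PCI bus.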

That's why even 8 years ago**, we used 64-bit PCI NICs for GbE,
typically with all other I/O (storage, etc.) on a separate PCI bus,
when we were looking at anything close to a dozen nodes.

> CPU/RAM/stability are paramount.

Then why do you buy i865 solutions when they _fail_ Intel's tolerances
for the i875?  We've been through this before, William.  I really don't
think you understood anything I said before, and I really wish the best
for you.

Your time alone easily pays for the premium.  Saving $100/node isn't
worth your time -- and that's before we even look at the
performance/bottleneck aspects.

-- Bryan

**NOTE:  I (among others on this list) rolled out such grid computing
for CFD and other distributed applications with Linux in the late '90s.
Even back then, a single 32-bit PCI bus didn't cut it.  The options are
a little better now with PCIe x1 channels being more standard -- but
data rates have kept increasing as well.
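
For a rough sense of scale -- nominal peak figures only, ignoring
protocol overhead and real-world shared-bus losses:

    # Nominal peak bandwidths (MB/s) for the buses discussed above.
    bus_peak_mb_s = {
        "PCI 32-bit/33MHz (shared by all devices)": 133,
        "PCI 64-bit/66MHz (shared)":                533,
        "PCIe 1.x x1 (dedicated, per direction)":   250,
    }
    GBE_WIRE_RATE_MB_S = 125   # one GbE link, before Ethernet/IP overhead
    for bus, peak in bus_peak_mb_s.items():
        print(f"{bus}: {peak} MB/s -- a single GbE link alone wants"
              f" ~{GBE_WIRE_RATE_MB_S} MB/s of that")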

Some are using InfiniBand right on the HTX (HyperTransport eXtension),
but that's overkill for you.  All I'm advocating is that you not
bottleneck yourself to the point where you sacrifice scalability beyond
4-5 nodes just so you can buy a few more.


-- 
Bryan J. Smith          Professional, technical annoyance
mailto:b.j.smith at ieee.org    http://thebs413.blogspot.com
---------------------------------------------------------
The world is in need of solutions.  Unfortunately, people
seem to be more interested in blindly aligning themselves
with one of only two viewpoints -- an "us v. them" debate
that has nothing to do with finding an actual solution.




