Sun Fire X2100 -- nForce4 Ultra desktop chipset

Eugen Leitl eugen at leitl.org
Sun Nov 27 18:55:02 UTC 2005


On Sun, Nov 27, 2005 at 11:19:25AM -0500, Bryan J. Smith wrote:

> Broadcom ServerWorks HT1000 chipset Socket-939 mainboards run only about
> $200, and have a PCI-X slot, as well as two (2) _server_ GbE NICs with
> 96KiB SRAM.  Opteron 142-146 processors are only about $150-175.  Non-
> registered ECC DDR SDRAM is really no premium over non-registered, non-
> ECC.
> 
> It's basically the _same_ design as the Sun Fire x2100 -- Opteron and
> non-registered ECC -- only a heck of a _lot_ better mainboard.
> 
> As I mentioned, _several_ people here can help you with systems.

Help as in: offering a complete kit (sliding rails, IPMI, preassembled
chassis lacking only memory and drives), warranty, and NBD support?
My rack is more than a two-hour drive away from me (assuming I had
a key for weekends and holidays -- I currently don't). I'd rather
rely on redundancy and reliable components (this is no longer a Newisys
box but native Sun engineering, which has a good reliability track record).

> That's because it has a _joke_ of an I/O subsystem.  The problem with

No, it's the overhead of NFS and DRBD, plus the limitations of Fast
Ethernet. (Admittedly, 2-3 MByte/s according to bonnie++ is awful
performance, even for DRBD+NFS.)

> Mini-ITX systems is the old, slow, shared PCI bus.  You definitely will
> _not_ get past 100MBps on those.  Only a few PCIe Mini-ITX mainboards are
> just coming out.  That will at least help a bit.

I've become somewhat disillusioned with rolling my own mini-ITX blades
with Travla cases and rails. Mostly it's the lack of decent airflow,
especially for the faster/hotter drives, so packaging density suffers.
Also, the only advantage is power efficiency, paid for by being somewhat
expensive (cases and rails, my time) and having low absolute performance
(still enough for web, mail, HA NFS and a couple of other odd jobs).
Here's a bonnie++ run on the local filesystem (load zero, but on the network):
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
boron            1G  5470  81 11632  10  9652  18  7284  95 42007  33 186.1   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1210  68 +++++ +++  1027  44  1153  67 +++++ +++   778  41
boron,1G,5470,81,11632,10,9652,18,7284,95,42007,33,186.1,1,16,1210,68,+++++,+++,1027,44,1153,67,+++++,+++,778,41
Here's a single 36 GB Raptor (home server /, Ubuntu x86_64) in comparison:
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
bruell           2G 34425  63 52549   9 20808   4 37001  68 46619   4 224.2   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  3863  18 +++++ +++  4150  23  3663  18 +++++ +++  1532   7
bruell,2G,34425,63,52549,9,20808,4,37001,68,46619,4,224.2,0,16,3863,18,+++++,+++,4150,23,3663,18,+++++,+++,1532,7
Here's /home (RAID 5, ditto, ST3120213A and XFS):

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
bruell           2G 47025  85 61536  13 12784   2 24817  46 50324   6 302.0   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   473   3 +++++ +++   457   2   375   2 +++++ +++   302   2
bruell,2G,47025,85,61536,13,12784,2,24817,46,50324,6,302.0,0,16,473,3,+++++,+++,457,2,375,2,+++++,+++,302,2
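The comma-separated line after each table is bonnie++'s machine-readable
summary of the same numbers. A small sketch for pulling out the
interesting columns (the field names are my reading of the 1.03 column
order above, so double-check against your bonnie++ version):

```python
# Parse the leading columns of a bonnie++ 1.03 machine-readable line
# (per-char/block/rewrite output, per-char/block input, seeks).

FIELDS = [
    "machine", "size",
    "putc_kps", "putc_cpu", "write_kps", "write_cpu",
    "rewrite_kps", "rewrite_cpu",
    "getc_kps", "getc_cpu", "read_kps", "read_cpu",
    "seeks", "seeks_cpu",
]

def parse_bonnie(line):
    """Return a dict of the throughput columns; the file-creation
    columns further right are simply ignored (zip stops early)."""
    return dict(zip(FIELDS, line.strip().split(",")))

row = parse_bonnie(
    "boron,1G,5470,81,11632,10,9652,18,7284,95,42007,33,186.1,1,"
    "16,1210,68,+++++,+++,1027,44,1153,67,+++++,+++,778,41"
)
print(row["machine"], "block write:", row["write_kps"], "K/sec")
```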
 
> Sun's not offering any lower power than what you can get from others in
> an Opteron 1xxHE.

Yes, 230 W typical is more than I expected. We'll see how much the box
will draw with the Hitachi drives, which are reasonably power-efficient.
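At 230 W typical the yearly energy consumption is easy to estimate (the
per-kWh price below is purely an illustrative assumption, not a quoted
tariff):

```python
# Annual energy draw of a box pulling 230 W around the clock.
TYPICAL_WATTS = 230
HOURS_PER_YEAR = 24 * 365

kwh_per_year = TYPICAL_WATTS * HOURS_PER_YEAR / 1000
print(f"{kwh_per_year:.0f} kWh/year")

# Hypothetical electricity price, just to put a number on it:
EUR_PER_KWH = 0.18
print(f"~{kwh_per_year * EUR_PER_KWH:.0f} EUR/year at {EUR_PER_KWH} EUR/kWh")
```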
 
> > I've never used it,
> 
> Trying to use a CoRAID box as a failover is about the same as using a
> FireWire device.  You don't have intelligence at the target to really do
> it, so it falls on GFS or something else, just like having _separate_
> (not shared) storage.

As I said, I intended to use a cluster FS across a few machines in the
rack. AoE is also quite economical if you roll your own (RAID 5 with
one hot spare in a 4-drive SATA Linux box in 1U).
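The capacity math for that layout (assuming four equal drives, with the
250 GB size being a hypothetical example, not the drives actually in the
box):

```python
def raid5_usable(drives, drive_gb, hot_spares=0):
    """Usable capacity of a RAID 5 set: one drive's worth of space
    goes to parity, and hot spares sit idle until a rebuild."""
    active = drives - hot_spares
    assert active >= 3, "RAID 5 needs at least three active drives"
    return (active - 1) * drive_gb

# 4-drive 1U SATA box, one hot spare: 3 active drives, 2 drives' capacity.
print(raid5_usable(4, 250, hot_spares=1), "GB usable")
```

So the 1U box trades half its raw capacity for parity plus the standby
spare, which is the price of unattended rebuilds in a remote rack.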
 
> > but if it provides a couple of TBytes at 30-60 MByte/s throughput and
> > failover for a reasonable price (say, 3 k$) that would be sufficient.
> 
> CoRAID doesn't provide true SAN-like failover.  ;->  It requires you to
> run something like GFS, which is a massive amount of overhead.  You

Right now I'm more interested in Lustre or PVFS. It's still pretty
academic at this point, as I don't have enough hardware to prototype it.

> might as well use local storage, it won't be any worse.  And CoRAID is
> just slow, without much performance over iSCSI anyway (despite the
> marketing), with a lot less flexibility than iSCSI.
> 
> > Which is why I mentioned a distributed file system
> > spanned over RAID 1 or RAID 5 volumes.
> > I'm not interested in isolated barebones but complete systems,
> > including CPU, some memory, sliding rails and IPMI or other
> > out of band LOM.
> 
> ASL offers the Monarch 811x series starting at $700+:  
>   http://www.aslab.com/products/storage/monarch8115.html  

Looks good. If I were in the U.S., I'd probably buy from them.
 
> I would recommend the units with a 3Ware card for a little bit more:  
>   http://www.aslab.com/products/storage/monarch8114.html  
> 
> Again, I'm sure other system integrators on this list can help you
> further.
> 
> > I use Raptors in my desktop machines and home servers as the system
> > drive but it is far too small for the storage volume/unit of rack
> > space that I need.
> 
> I understand that, I just wanted to point out the enterprise capacities
> (18, 36, 73, etc...) are available in SATA.
> 
> As I said, if you care about reliability in a commodity capacity (100,
> 200, 300, etc...), be sure to get a drive _explicitly_ tested and rated
> for 24x7 operation, not desktop 8x5 operation.  Such models are the

I am very aware of this. I chose Hitachi (which are not explicitly
rated enterprise-class) because the Archive.org folks are using them
and reporting a low failure rate (of course, my batch could be different).

> Seagate NL35, Western Digital Caviar RE and others.  Not sure where the
> Deskstar T7K250 fall -- they might be Hitachi's 24x7 version of the
> Deskstar.

Hitachi officially doesn't have enterprise-class products. Deathstars
they're no longer, though.

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a>
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE

