From maurice at harddata.com Thu Dec 1 00:06:20 2005 From: maurice at harddata.com (Maurice Hilarius) Date: Wed, 30 Nov 2005 17:06:20 -0700 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- WAS: Sun Fire X2100/nForce4 Ultra desktop In-Reply-To: <1133261933.5023.543.camel@bert64.oviedo.smithconcepts.com> References: <20051109132707.GU2249@leitl.org> <1132998033.5023.254.camel@bert64.oviedo.smithconcepts.com> <20051126205243.GI2249@leitl.org> <1133048976.5023.302.camel@bert64.oviedo.smithconcepts.com> <20051127090227.GM2249@leitl.org> <1133108365.5023.335.camel@bert64.oviedo.smithconcepts.com> <1133261933.5023.543.camel@bert64.oviedo.smithconcepts.com> Message-ID: <438E3E7C.5020803@harddata.com> Bryan J. Smith wrote: >On Sun, 2005-11-27 at 11:19 -0500, Bryan J. Smith wrote: > > >>Broadcom ServerWorks HT1000 chipset Socket-939 mainboards run only about >>$200, and have a PCI-X slot, and well as two (2) _server_ GbE NICs with >>96KiB SRAM. Opteron 142-146 processors are only about $150-175. Non- >>registered ECC DDR SDRAM is really no premium over non-registered, non- >>ECC. >> >> > >FYI, it was an integrator on this list that turned me on to the >SuperMicro H8SSL-i after I mentioned the Broadcom ServerWorks HT1000 >chipset. > > > Gee, thanks Bry! >Some resellers are claiming they are selling the SuperMicro H8SSL-i, >although I thought SuperMicro was against that, and only provided it for >resellers (largely because of their relationship with Intel). Both are >out-of-stock: > > We are, but only in a barebones, with case, power, CPUs. Feel free to add your own disks and RAM. BTW, Eugen, still waiting for the exact spec for the Sunfires. WE would LOVE to have a go against them.. -- With our best regards, Maurice W. Hilarius Telephone: 01-780-456-9771 Hard Data Ltd. FAX: 01-780-456-9772 11060 - 166 Avenue email:maurice at harddata.com Edmonton, AB, Canada http://www.harddata.com/ T5X 1Y3 -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Thu Dec 1 07:19:47 2005 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 1 Dec 2005 08:19:47 +0100 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- WAS: Sun Fire X2100/nForce4 Ultra desktop In-Reply-To: <438E3E7C.5020803@harddata.com> References: <20051109132707.GU2249@leitl.org> <1132998033.5023.254.camel@bert64.oviedo.smithconcepts.com> <20051126205243.GI2249@leitl.org> <1133048976.5023.302.camel@bert64.oviedo.smithconcepts.com> <20051127090227.GM2249@leitl.org> <1133108365.5023.335.camel@bert64.oviedo.smithconcepts.com> <1133261933.5023.543.camel@bert64.oviedo.smithconcepts.com> <438E3E7C.5020803@harddata.com> Message-ID: <20051201071947.GZ2249@leitl.org> On Wed, Nov 30, 2005 at 05:06:20PM -0700, Maurice Hilarius wrote: > We are, but only in a barebones, with case, power, CPUs. Feel free to > add your own disks and RAM. > > BTW, Eugen, still waiting for the exact spec for the Sunfires. WE would > LOVE to have a go against them.. Oh, I thought I sent them by private mail. It's effectively this machine here in the barebone configuration http://www.dns-shop.de/dns-shop/showProductDetails.do?productConfigId=-1987838288 albeit at 594 EUR. Sliding rails and IPMI are 120 EUR each. -- Eugen* Leitl leitl ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.leitl.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From b.j.smith at ieee.org Thu Dec 1 07:50:48 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Wed, 30 Nov 2005 23:50:48 -0800 (PST) Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <20051201071947.GZ2249@leitl.org> Message-ID: <20051201075048.61854.qmail@web34111.mail.mud.yahoo.com> Eugen Leitl wrote: > Oh, I thought I sent them by private mail. It's effectively > this machine here in the barebone configuration http://www.dns-shop.de/dns-shop/showProductDetails.do?productConfigId=-1987838288 > albeit at 594 EUR. Sliding rails and IPMI are 120 EUR each. Like paying a lot for a desktop nForce4 Ultra with a low-cost Opteron? ;-> BTW, if you really like the Sunfire x2100 for its 4x 2.5" notebook hard drives array, consider the JMR SATAStor which has 6x 2.5" notebook hard drives in a 5.25" bay: http://www.jmr.com/solutions/collateral/serial_ata/SATAStor.pdf The guys over at ExtremeTech took a 12 SATA port 3Ware Escalade 9500S-12 card and two of these and did some benchmarks. Although I will point out the benchmarks are _useless_ because they put the card in a standard 32-bit at 33MHz PCI slot (133MBps theoretical) and ... tada ... they didn't see really anything better than 120MBps! Duh! http://www.extremetech.com/article2/0,1558,1644544,00.asp -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From eugen at leitl.org Thu Dec 1 09:30:12 2005 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 1 Dec 2005 10:30:12 +0100 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <20051201075048.61854.qmail@web34111.mail.mud.yahoo.com> References: <20051201071947.GZ2249@leitl.org> <20051201075048.61854.qmail@web34111.mail.mud.yahoo.com> Message-ID: <20051201093012.GP2249@leitl.org> On Wed, Nov 30, 2005 at 11:50:48PM -0800, Bryan J. Smith wrote: > BTW, if you really like the Sunfire x2100 for its 4x 2.5" It's X4100. The X2100 is a different beast. > notebook hard drives array, consider the JMR SATAStor which SAS drives are enterprise class, and have an equivalent price tag. > has 6x 2.5" notebook hard drives in a 5.25" bay: > > http://www.jmr.com/solutions/collateral/serial_ata/SATAStor.pdf That looks interesting. The airflow vents look too narrow, though, and there's no cooling redundancy. Are there any 24/7/365 specced 2.5" SATA drives? I'm not aware of any. Laptop drives seem to frequently fail within a year in a server environment (I actually have a 60 GByte Samsung drive in a mini-ITX system, but it is not currently under load). > The guys over at ExtremeTech took a 12 SATA port 3Ware > Escalade 9500S-12 card and two of these and did some > benchmarks. Although I will point out the benchmarks are > _useless_ because they put the card in a standard > 32-bit at 33MHz PCI slot (133MBps theoretical) and ... tada ... > they didn't see really anything better than 120MBps! Duh! > http://www.extremetech.com/article2/0,1558,1644544,00.asp Ridiculously overpriced. -- Eugen* Leitl leitl ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.leitl.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From jch at scalix.com Thu Dec 1 11:06:15 2005 From: jch at scalix.com (John Haxby) Date: Thu, 1 Dec 2005 11:06:15 +0000 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <20051201093012.GP2249@leitl.org> References: <20051201071947.GZ2249@leitl.org> Message-ID: <438ED927.1080900@scalix.com> Eugen Leitl wrote: >That looks interesting. The airflow vents look too narrow, though, and >there's no cooling redundancy. Are there any 24/7/365 specced >2.5" SATA drives? I'm not aware of any. Laptop drives seem to frequently >fail within a year in a server environment (I actually have a 60 GByte >Samsung drive in a mini-ITX system, but it is not currently under load). > > It's not SATA, but there's a version of the 60GB, 7200rpm Hitachi TravelStar that is touted as enterprise-class. I have one in my laptop :-) For a while it was significantly more expensive than the non-Enterprise version but now it seems to be about the same price. jch From b.j.smith at ieee.org Thu Dec 1 11:19:57 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 01 Dec 2005 06:19:57 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <20051201093012.GP2249@leitl.org> References: <20051201071947.GZ2249@leitl.org> <20051201075048.61854.qmail@web34111.mail.mud.yahoo.com> <20051201093012.GP2249@leitl.org> Message-ID: <1133435997.5126.20.camel@bert64.oviedo.smithconcepts.com> On Thu, 2005-12-01 at 10:30 +0100, Eugen Leitl wrote: > SAS drives are enterprise class, and have an equivalent price tag. Interface has _nothing_ to do with "enterprise" class. The actual drive mechanics have to do with "enterprise" class. But even then, there _are_ SAS-rated versions (for cable length/spec) of these bays too. ;-> > That looks interesting. The airflow vents look too narrow, though, and > there's no cooling redundancy. Are there any 24/7/365 specced > 2.5" SATA drives? Sure! They typically roll of the _exact_same_ line as their SCSI/SAS/FC brothers! ;-> Again, interface has _nothing_ do with rating. It just happens to be that most ATA/SATA drives are commodity 8x5, and most SCSI/SAS/FC are enterprise 24x7 mechanics. But there _is_ overlap. > I'm not aware of any. Laptop drives seem to frequently > fail within a year in a server environment (I actually have a 60 GByte > Samsung drive in a mini-ITX system, but it is not currently under load). It all depends on the drive mechanics. 1.4M hours MTBF is typical "enterprise" mechanics, 24x7 operation, 10-15Krpm spindles, 18, 36, 73, 146GB in 3.5" form-factor. 0.4M hours MTBF is typical "commodity" mechanics, 8x5 (max 14x5) operation, 100, 200, 300GB, etc... in 3.5" form-factor. The "new generation" of "near-line" storage are "commodity" mechanics with better lubs, higher G shock, etc... and tested for "near-line" 24x7 operation to 1.0M hours MTBF. The idea here is that they are powered 24x7, but not necessarily spinning 24x7. As far as 2.5" units, from all the specs I've read, _all_ the SAS units are still the _exact_same_ mechanics as their SATA brethren. Now there are some more "enterprise" 7200rpm 2.5" drives out there now, but they aren't rated as good as typical "enterprise" mechanics. > Ridiculously overpriced. Of course! It's _overkill_! ;-> I just thought I'd point it out, because it had some good pictures of the units. -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From b.j.smith at ieee.org Thu Dec 1 11:20:27 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 01 Dec 2005 06:20:27 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <20051201093012.GP2249@leitl.org> References: <20051201071947.GZ2249@leitl.org> <20051201075048.61854.qmail@web34111.mail.mud.yahoo.com> <20051201093012.GP2249@leitl.org> Message-ID: <1133436027.5126.22.camel@bert64.oviedo.smithconcepts.com> On Thu, 2005-12-01 at 10:30 +0100, Eugen Leitl wrote: > It's X4100. The X2100 is a different beast. Oh, well the X4100 is _not_ nForce4 Ultra. It's AMD8131+8111. -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From eugen at leitl.org Thu Dec 1 11:40:05 2005 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 1 Dec 2005 12:40:05 +0100 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <1133435997.5126.20.camel@bert64.oviedo.smithconcepts.com> References: <20051201071947.GZ2249@leitl.org> <20051201075048.61854.qmail@web34111.mail.mud.yahoo.com> <20051201093012.GP2249@leitl.org> <1133435997.5126.20.camel@bert64.oviedo.smithconcepts.com> Message-ID: <20051201114005.GT2249@leitl.org> On Thu, Dec 01, 2005 at 06:19:57AM -0500, Bryan J. Smith wrote: > On Thu, 2005-12-01 at 10:30 +0100, Eugen Leitl wrote: > > SAS drives are enterprise class, and have an equivalent price tag. > > Interface has _nothing_ to do with "enterprise" class. > The actual drive mechanics have to do with "enterprise" class. I never claimed that enterprise class has anything to do with the interface. It just happens SAS drives are targeted for server environments. If anyone is curious about the differences, the following 2003 paper goes into some detail http://eugen.leitl.org/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf -- Eugen* Leitl leitl ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.leitl.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From b.j.smith at ieee.org Thu Dec 1 17:57:22 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 01 Dec 2005 12:57:22 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <20051201114005.GT2249@leitl.org> References: <20051201071947.GZ2249@leitl.org> <20051201075048.61854.qmail@web34111.mail.mud.yahoo.com> <20051201093012.GP2249@leitl.org> <1133435997.5126.20.camel@bert64.oviedo.smithconcepts.com> <20051201114005.GT2249@leitl.org> Message-ID: <1133459842.5126.41.camel@bert64.oviedo.smithconcepts.com> On Thu, 2005-12-01 at 12:40 +0100, Eugen Leitl wrote: > I never claimed that enterprise class has anything to do with > the interface. It just happens SAS drives are targeted for > server environments. Not all. ;-> In fact, if you check the specs, some of the SAS devices are more "near- line" specifications. They are basically rolling off the _exact_same_line_ as their SATA brethren! > If anyone is curious about the differences, the following 2003 paper > goes into some detail > http://eugen.leitl.org/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf As does my sidebar on "Near-line Enterprise Disk" here: http://www.samag.com/documents/sam0509a/0509a_s1.htm Of this article: http://www.samag.com/documents/sam0509a/ -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From jch at scalix.com Thu Dec 1 21:18:34 2005 From: jch at scalix.com (John Haxby) Date: Thu, 1 Dec 2005 21:18:34 +0000 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <1133459842.5126.41.camel@bert64.oviedo.smithconcepts.com> References: <20051201071947.GZ2249@leitl.org> Message-ID: <438F68AA.9050307@scalix.com> Bryan J. Smith wrote: >In fact, if you check the specs, some of the SAS devices are more "near- >line" specifications. They are basically rolling off the >_exact_same_line_ as their SATA brethren! > > Forgive my ignorance, but are some better disks selected? Years ago during one summer I worked on transistor testing: I remember that a BC107 could, at one extreme, be a BC109C (high gain, high breakdown voltage) and at the other extreme only make the grade as a BC107A (low gain, low breakdown voltage) -- only a relatively small number made the grade as a BC109C and these were, of course, more expensive. Does a similar thing happen with disk testing? Better tolerances, noise levels, start-up times? Or is it more in the manufacturing and a disk is what it is because of the way it's made? jch From b.j.smith at ieee.org Thu Dec 1 22:02:12 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 1 Dec 2005 14:02:12 -0800 (PST) Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <438F68AA.9050307@scalix.com> Message-ID: <20051201220212.77771.qmail@web34103.mail.mud.yahoo.com> John Haxby wrote: > Forgive my ignorance, but are some better disks selected? What do you think most of the new "near-line" models are? ;-> They are just "commodity" drives that test to higher/better tolerances, even though they come off the same line as the OEM/desktop ones. ;-> > Does a similar thing happen with disk testing? > Better tolerances, noise levels, start-up times? Exactly, when it comes to the same mechanics. Some products are labelled, warrantied differently and sold at a difference price for a different market. > Or is it more in the manufacturing and a disk is what it > is because of the way it's made? There _are_ still drive models that _are_ manufacturered differently. In the 3.5" space, the capacities of 18, 36, 73, 146GB are "enterprise" 24x7 rated. The capacities of 100, 200, etc... through 500GB are the "commodity" 8x5 rated. Some of the "commodity" types are now being tested to higher tolerances and sold as "near-line, network-managed" 24x7. -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From maurice at harddata.com Sat Dec 3 00:45:05 2005 From: maurice at harddata.com (Maurice Hilarius) Date: Fri, 02 Dec 2005 17:45:05 -0700 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- WAS: Sun Fire X2100/nForce4 Ultra desktop In-Reply-To: <20051201071947.GZ2249@leitl.org> References: <20051109132707.GU2249@leitl.org> <1132998033.5023.254.camel@bert64.oviedo.smithconcepts.com> <20051126205243.GI2249@leitl.org> <1133048976.5023.302.camel@bert64.oviedo.smithconcepts.com> <20051127090227.GM2249@leitl.org> <1133108365.5023.335.camel@bert64.oviedo.smithconcepts.com> <1133261933.5023.543.camel@bert64.oviedo.smithconcepts.com> <438E3E7C.5020803@harddata.com> <20051201071947.GZ2249@leitl.org> Message-ID: <4390EA91.9010603@harddata.com> Eugen Leitl wrote: >On Wed, Nov 30, 2005 at 05:06:20PM -0700, Maurice Hilarius wrote: > > > >>We are, but only in a barebones, with case, power, CPUs. Feel free to >>add your own disks and RAM. >> >>BTW, Eugen, still waiting for the exact spec for the Sunfires. WE would >>LOVE to have a go against them.. >> >> > >Oh, I thought I sent them by private mail. It's effectively this >machine here in the barebone configuration >http://www.dns-shop.de/dns-shop/showProductDetails.do?productConfigId=-1987838288 >albeit at 594 EUR. Sliding rails and IPMI are 120 EUR each. > > > > > Perfect! Thanks Eugen Funny, if you go to that site they are quoting 660 Euros, NOT 594.. Yes, we build and sell that same config ( NO HD, 512MB, no slides, no IPMI) for 585 Euros. All aluminum chassis, ship weight 9Kg IPMI: Add 65 Euros Genuine from SuperMicro Slides: Add 52 Euros Genuine Accuraide slimline rack slides 80GB: Add 55 Euros WD drive with 3 year warranty. 74GB / 10Krpm add 180 Euros WD Raptor with 5 years warranty. Up to 4 hotswap S-ATA drive bays avilable as an option. CPU used by Hard Data: Genuine AMD boxed processor, 3 years AMD warranty. Sun use OEM tray part with one year from whern they buy, usually less than 6 months when YOU buy . We provide 2 years total system warranty. They provide 1 year of excuses , you pay all costs of freight both ways. We provide advance replacement parts for a minimal charge, they only sell expensive service contracts. We provide 3 year HD, motherboard and power supply warranties ( backed by the manufacturers: WD, SuperMicro, Zippy) They provide 1 year of stalling and excuses "Why didn't you buy the service contract?" They throw in a "free" rack" Sure they do. Tell you what, you buy 40 of ours and we will throw in a "free" aluminum 45U rack. Aw, come on, this is not even a fair fight.. -- With our best regards, Maurice W. Hilarius Telephone: 01-780-456-9771 Hard Data Ltd. FAX: 01-780-456-9772 11060 - 166 Avenue email:maurice at harddata.com Edmonton, AB, Canada http://www.harddata.com/ T5X 1Y3 -------------- next part -------------- An HTML attachment was scrubbed... URL: From hahn at physics.mcmaster.ca Sat Dec 3 22:24:46 2005 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Sat, 3 Dec 2005 17:24:46 -0500 (EST) Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <1133459842.5126.41.camel@bert64.oviedo.smithconcepts.com> Message-ID: > As does my sidebar on "Near-line Enterprise Disk" here: > http://www.samag.com/documents/sam0509a/0509a_s1.htm can you cite your source for the statement that MTBF is .4 Mhours? I know that was a common figure for desktop drives from 8-10 years ago, but I do also remember seeing a number of ATA disks that claimed MTBF's around 1Mhour. I'm guessing you just figured that 1.4 Mhour * (10*5)/(24*7) = .417 Mhour this is not an unreasonable guess, based on the assumption that a vendor would try to carefully minimize the MTBF given the current 3-5 year warranty periods, which are the same as "enterprise" products, but assuming a different duty cycle. I expect return rates are higher than this calculation would lead you to expect though, and therefore vendors would either try to get away with lower warranty periods or actually bump up the non-enterprise MTBF to reduce RMA costs. but it would be nice to have even a hint of actual, factual MTBF's. From b.j.smith at ieee.org Sun Dec 4 01:24:42 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Sat, 03 Dec 2005 20:24:42 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: References: Message-ID: <1133659482.5010.60.camel@bert64.oviedo.smithconcepts.com> On Sat, 2005-12-03 at 17:24 -0500, Mark Hahn wrote: > can you cite your source for the statement that MTBF is .4 Mhours? You can assume I'm pulling it out of my ass if you like. ;-> First off, it's commonly known that "commodity" drives are rated for 50,000 restarts @ 8 hours/start, resulting in the 400,000 hours MTBF. Although there have been improvements in thermal tolerances (e.g., 40C to 60C and vibration (typically 2-3x) in commodity drives over the last few years, the MTBF ratings have _not_ improved. That's why the 50,000 restarts number _still_ exists in _all_ commodity drives these days, including the latest Seagate Barracuda 7200.9 line and the Hitachi Deskstar 7K500 line. Secondly, the new generation of tolerance-tested "commodity" drives are rated for 1,000,000 hours MTBF. Still the same 50,000 restarts, but in a "network managed 24x7" environment with better cooling and cycling (the most detrimental aspects of a desktop) than a desktop. These include the Seagate NL35 line, which rolls off the same technology lines as the previous Seagate Barracuda 7200.8. Now I haven't checked if a new NL series is available matching the 7200.9, but typically they lag giving the cutting edge line time to mature and find ways of guaranteeing effective tolerance testing for identifying candidates for the NL model. Lastly, pull up just about _any_ technical specification for _any_ Hitachi, Seagate, etc... "commodity" hard drive. IBM and, now, Hitachi actually has some models (namely many Desktar) that state the warranty is invalid if the drive is operated more than 14 hours/day -- a post-75GXP policy. (I'm sure they can't hold consumers to that, but they can system integrators who sell their Deskstar models in systems designed for 24x7 operation). Now that's all > I know that was a common figure for desktop drives from 8-10 years ago, First off, the 0.4M and 1.4M number is extremely recent -- last 3 years. Secondly, the 0.4M number for desktop drives was _only_ when the drive's ambient did _not_ exceed 40C until just recently. The new crop of "commodity" mechanics developed and deployed in just the last 12-18 months can now take up to 60C ambient. That radically improves reliability. Third, the 1.4M number is for the "enterprise" drives which have _always_ been rated for 55+C ambient -- pretty much standard in the last 8+ years -- and have drastically reduced vibration. I.e., the platters are not cutting-edge density, but high spindle which often dictates far more refined assembly (with associated cost). Lastly, have you ever seen the "real world" number of desktop hard drive replacements on the _latest_ systems that are allegedly the new 60C rated material? Just this past year -- with _new_ systems -- at Boeing in St. Louis, we saw over a 20% failure rate in just 3 months, over a 30% failure rate in 6 months, with Dell and HP systems used by engineers. A colleague of mine with Disney's POS division in Orlando -- just this past year -- has seen almost a 40% failure rate in 6 months. Now one could say that a retail box might have a lower failure rate than a tier-1 PC OEM desktop box. One might also say that engineering and POS systems are more heavily used. But even then, those rates of failure do _not_ bode well for the 400,000 hour MTBF number average! They are dying well in advance of 50,000 restarts, even when the systems are never run more than 14 hours/day. > but I do also remember seeing a number of ATA disks that claimed MTBF's > around 1Mhour. > I'm guessing you just figured that > 1.4 Mhour * (10*5)/(24*7) = .417 Mhour No, hit just about _any_ technical specification, 50,000 restarts. Most vendors do _not_ list the "raw" MTBF on commodity disks anymore, as they want to make _no_ guarantees! If you get into the nitty-gritty of their warranty policies, there's little they can do with regards to consumers. But when it comes to system integrators and vertical applications -- where their products can be proven to require more than 50,000 @ 8 hours/day -- they have very, very strict replacement policies. Which is where the new Seagate NL series, Western Digital Caviar RE, etc... all come in. > but it would be nice to have even a hint of actual, factual MTBF's. Again, assume I'm pulling them out of my ass. ;-> Reality: Outside of the 1.4M for "enterprise" drives, and 1.0M for the new crop of "Near Line" rated, tolerance-tested commodity drives, you will _not_ find _any_ guarantee of MTBF. Only 50,000 restarts. ;-> -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From hahn at physics.mcmaster.ca Sun Dec 4 21:48:23 2005 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Sun, 4 Dec 2005 16:48:23 -0500 (EST) Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <1133659482.5010.60.camel@bert64.oviedo.smithconcepts.com> Message-ID: > > can you cite your source for the statement that MTBF is .4 Mhours? > > You can assume I'm pulling it out of my ass if you like. ;-> > > First off, it's commonly known that "commodity" drives are rated for > 50,000 restarts @ 8 hours/start, resulting in the 400,000 hours MTBF. I was asking for something other than hearsay. yes, I'm familiar with the prevalent rating of 50K power cycles - it's right there in the drive specs. I'm looking for a hard reference to an MTBF. > Secondly, the new generation of tolerance-tested "commodity" drives are > rated for 1,000,000 hours MTBF. again, where do you get this number? > Lastly, pull up just about _any_ technical specification for _any_ > Hitachi, Seagate, etc... "commodity" hard drive. IBM and, now, Hitachi no. I stopped being able to find MTBF mentioned in Maxtor/Seagate/WD drives at least 5 years ago. HGST nee IBM did used to publish the numbers, but it doesn't appear in the current 7K500 spec, and I think it disappeared around the 250G generation. > > I know that was a common figure for desktop drives from 8-10 years ago, > > First off, the 0.4M and 1.4M number is extremely recent -- last 3 years. Maxtor et all stopped publishing MTBF's well over 3 years ago, and the number was 3-500K hours at that time. the 1.4M number is obviously still being published in "enterprise" product lines. > > but it would be nice to have even a hint of actual, factual MTBF's. > > Again, assume I'm pulling them out of my ass. ;-> OK. it would be polite to offer such disclaimers at the top of your messages. From b.j.smith at ieee.org Sun Dec 4 23:38:36 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Sun, 04 Dec 2005 18:38:36 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: References: Message-ID: <1133739516.5010.183.camel@bert64.oviedo.smithconcepts.com> On Sun, 2005-12-04 at 16:48 -0500, Mark Hahn wrote: > I was asking for something other than hearsay. yes, I'm familiar with > the prevalent rating of 50K power cycles - it's right there in the drive > specs. I'm looking for a hard reference to an MTBF. No hard drive vendor will commit to even 400,000 hours MTBF anymore. Now doesn't that tell you something?! > again, where do you get this number? Seagate NL35: 1,000,000 hours MTBF [ The enterprise version of Seagate's Barracuda ATA ] http://www.seagate.com/products/enterprise/nl35.html Western Digital Caviar RE: 1,000,000 hours MTBF [ The enterprise version of Western Digital's Caviar SE ] http://www.westerndigital.com/en/products/Products.asp?DriveID=114 I can get you more if you wish. ;-> > no. I stopped being able to find MTBF mentioned in Maxtor/Seagate/WD > drives at least 5 years ago. HGST nee IBM did used to publish the numbers, > but it doesn't appear in the current 7K500 spec, and I think it disappeared > around the 250G generation. You obviously aren't looking hard enough. > Maxtor et all stopped publishing MTBF's well over 3 years ago, and the > number was 3-500K hours at that time. the 1.4M number is obviously still > being published in "enterprise" product lines. Yes, and they are _not_ commodity capacities, but the 18, 36, 73 and 146GB capacities. > OK. it would be polite to offer such disclaimers at the top of your > messages. Trust takes years of providing continually sound technical information, so I know that won't happen overnight. Until then, feel free to assume I pull things right out of my ass. But one thing you'll note about me is that I'm the first person to admit when I'm wrong. Those lists I've been on for 5+ years know this, and it's a rather humorous moment for most when I actually get something wrong -- especially when I'm the first to admit it. ;-> Until then, got any Preparation-H because I have a feeling that you'll be in my ass for quite awhile. ;-> -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From loony at loonybin.org Sun Dec 4 23:50:09 2005 From: loony at loonybin.org (Peter Arremann) Date: Sun, 4 Dec 2005 18:50:09 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <1133739516.5010.183.camel@bert64.oviedo.smithconcepts.com> References: <1133739516.5010.183.camel@bert64.oviedo.smithconcepts.com> Message-ID: <200512041850.10119.loony@loonybin.org> On Sunday 04 December 2005 18:38, Bryan J. Smith wrote: > Seagate NL35: 1,000,000 hours MTBF > [ The enterprise version of Seagate's Barracuda ATA ] > > http://www.seagate.com/products/enterprise/nl35.html > > Western Digital Caviar RE: 1,000,000 hours MTBF > [ The enterprise version of Western Digital's Caviar SE ] > > http://www.westerndigital.com/en/products/Products.asp?DriveID=114 > > I can get you more if you wish. ;-> Yes please since in previous posts you disregard stuff by others and always nicely made sure they realize you talk about "commodity" disks... and now you refer to numbers for "enterprise versions" ...? Please either let this thread die (after all, it has been off topic for about a millenium now) or if you insist on proving your point answer the questions as they were asked and not with data that contradicts statements you previously made yourself. Peter. From b.j.smith at ieee.org Mon Dec 5 00:10:31 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Sun, 04 Dec 2005 19:10:31 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <200512041850.10119.loony@loonybin.org> References: <1133739516.5010.183.camel@bert64.oviedo.smithconcepts.com> <200512041850.10119.loony@loonybin.org> Message-ID: <1133741431.5010.212.camel@bert64.oviedo.smithconcepts.com> On Sun, 2005-12-04 at 18:50 -0500, Peter Arremann wrote: > Yes please since in previous posts you disregard stuff by others and always > nicely made sure they realize you talk about "commodity" disks... and now you > refer to numbers for "enterprise versions" ...? These *ARE* "commodity" disk lines!!! They come of the _exact_ same lines as their desktop versions -- _exact_ same capacities, density, etc...! The Seagate NL35 _are_ the Seagate Barracuda 7200.8! The Western Digital Caviar RE _are_ the Western Digital Caviar SE! They roll off the _same_ lines! These are _not_ 18, 36, 73 or 146GB "enterprise" capacities, but 100, 160, 200, 250, 300, 320 and even 400GB "commodity" capacities! These are those same leading-edge capacities, at the same fab levels, maybe 1 series removed from the latest (e.g., Seagate now has a Barracuda 7200.9 that go up to 125-133GB/platter, 500GB capacity). You asked me where I got the 400,000 and 1,000,000 hour MTBFs from. I told you no one offers the 400,000 hour MTBF on the desktop devices anymore, but you _could_ find the 1,000,000 hour MTBF number on the new crop of "near-line, 24x7 network managed" commodity disk capacities. > Please either let this thread die (after all, it has been off topic for about > a millenium now) or if you insist on proving your point answer the questions > as they were asked and not with data that contradicts statements you > previously made yourself. Huh?!?!?! What data "contradicts" what I presented before? The vendors have taken their "commodity" capacity drives and offered a new set of 1,000,000 hour MTBF versions that test to higher tolerances. And you pay a premium for them! But they are _not_ the "enterprise" capacity drives with 1,400,000 hour MTBF drives. Those come off entirely different lines! Different platter densities _entirely_! -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From b.j.smith at ieee.org Mon Dec 5 00:28:20 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Sun, 04 Dec 2005 19:28:20 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <1133741431.5010.212.camel@bert64.oviedo.smithconcepts.com> References: <1133739516.5010.183.camel@bert64.oviedo.smithconcepts.com> <200512041850.10119.loony@loonybin.org> <1133741431.5010.212.camel@bert64.oviedo.smithconcepts.com> Message-ID: <1133742500.5010.226.camel@bert64.oviedo.smithconcepts.com> On Sun, 2005-12-04 at 19:10 -0500, Bryan J. Smith wrote: > The Seagate NL35 _are_ the Seagate Barracuda 7200.8! > The Western Digital Caviar RE _are_ the Western Digital Caviar SE! > They roll off the _same_ lines! > ... > The vendors have taken their "commodity" capacity drives and offered a > new set of 1,000,000 hour MTBF versions that test to higher tolerances. > And you pay a premium for them! Let's make one thing clear here, if the vendors are charging a premium for a 1,000,000 hour MTBF version of a commodity disk, then you are definitely _not_ getting anything _near_ 1,000,000 hour MTBF when you buy the cheaper OEM/retail version. I thought some of you could put 2 and 2 together, but apparently you cannot. The current structure continues to be, as I *ALWAYS* said ... Commodity OEM/Retail Standard: 400,000 hours MTBF Commodity, Enterprise Tolerance: 1,000,000 hours MTBF Enterprise: 1,400,000 hours MTBF In the last few years, the ambient tolerance of the commodity capacity has gone up from 40C to 60C. Enterprise capacity has always been 55C. If you look through spec sheets of various vendors, you will find these numbers too. I also note that some vendors are now claiming 1,200,000 hours MTBF on some of their newer, commodity capacity, enterprise tested tolerance drives. E.g., the Western Digital Caviar RE2. > But they are _not_ the "enterprise" capacity drives with 1,400,000 hour > MTBF drives. Those come off entirely different lines! Different > platter densities _entirely_! BTW, I _never_ said the vendor didn't call the 1,000,000 hour a "commodity" disk. I mean, if they said it's a "commodity" disk, why would you pay extra for it? ;-> But make no mistake, they are the same "commmodity" disk capacities. They are _not_ the "enterprise" disk capacities that are 18, 36, 73 and 146GB to date. With that said ... Now Peter, do you want to tell the group about our little "history"? Or do you like to just challenge me on every list we're on, and the countless times I've sent you detailed information on various things with regards to how the AMD64 technology works over on the CentOS list (e.g., 48-bit PAE mode -- which you were so ignorant of you looked like an ass last time we went around ;-)? -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From loony at loonybin.org Mon Dec 5 01:05:20 2005 From: loony at loonybin.org (Peter Arremann) Date: Sun, 4 Dec 2005 20:05:20 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <1133742500.5010.226.camel@bert64.oviedo.smithconcepts.com> References: <1133741431.5010.212.camel@bert64.oviedo.smithconcepts.com> <1133742500.5010.226.camel@bert64.oviedo.smithconcepts.com> Message-ID: <200512042005.20597.loony@loonybin.org> On Sunday 04 December 2005 19:28, you wrote: > Now Peter, do you want to tell the group about our little "history"? Or > do you like to just challenge me on every list we're on, and the > countless times I've sent you detailed information on various things > with regards to how the AMD64 technology works over on the CentOS list > (e.g., 48-bit PAE mode -- which you were so ignorant of you looked like > an ass last time we went around ;-)? Bryan, I was hoping your mom had tought you basic manners. Too bad I seem to be wrong. No matter how much you wish it was ok - if you can not make your point with arguments, name calling is NOT an acceptable backup method. And just the point that you use the words "48-bit PAE mode" shows that you eventually realized that I was right. Thank you that you stopped using works like PAE52 that you made up yourself and now use the standard terms. -- I will not respond to anything that I said above -- so for those that care and want to form their own opinion the thread is here: http://lists.centos.org/pipermail/centos/2005-June/007544.html (Hope I won't get in trouble for posting a centos mailing list link on a fedora list) But to stay on the off-topic topic: I fail to see how the fact that a disk has a multiple of xxx capacity makes it better than another that has multiple of yyy GB? I'm also confused what the 55 and 60 degree is supposed to be? The reason for my confusion is that the high end Fujitsu enterprise scsi or fibre disk is 60C and 300GB - does that mean its a comodity disk rather than an enterprise drive? Maybe those things are just that way cause they always were - and have actually nothing to do with the quality or design of the drives? Peter. From b.j.smith at ieee.org Mon Dec 5 02:16:32 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Sun, 04 Dec 2005 21:16:32 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <200512042005.20597.loony@loonybin.org> References: <1133741431.5010.212.camel@bert64.oviedo.smithconcepts.com> <1133742500.5010.226.camel@bert64.oviedo.smithconcepts.com> <200512042005.20597.loony@loonybin.org> Message-ID: <1133748992.5010.252.camel@bert64.oviedo.smithconcepts.com> On Sun, 2005-12-04 at 20:05 -0500, Peter Arremann wrote: > I was hoping your mom had tought you basic manners. Too bad I seem to be > wrong. No matter how much you wish it was ok - if you can not make your point > with arguments, name calling is NOT an acceptable backup method. Peter, you do this on every list you're on with me. > And just the point that you use the words "48-bit PAE mode" shows that you > eventually realized that I was right. Thank you that you stopped using works > like PAE52 that you made up yourself and now use the standard terms. [ SIDE NOTE: I meant 52-bit PAE mode (48-bit is the virtual addressing). ] I'm NOT the first person to differentiate and abbreviate PAE as PAE36 and PAE52. I've seen AMD engineers use that too in their slides. In fact, if you look around, I've seen it stated as "PA52" as well. Some put spaces in between. Some put dashes. In a nutshell, most people don't talk about it on the Internet, largely because they are ignorant -- but it's clearly in the AMD manuals. You don't have to know that AMD's PAE mode is 52-bit from a programmer's standpoint, you just have to know PAE needs to be enabled. This _will_ become important when AMD breaks the 48-bit "address barrier" in their next 64-bit design though. That too is in the AMD programmer's manuals. > -- I will not respond to anything that I said above -- > so for those that care and want to form their own opinion the thread is here: > http://lists.centos.org/pipermail/centos/2005-June/007544.html > (Hope I won't get in trouble for posting a centos mailing list link on a > fedora list) The thing is that you nit-pick from a standpoint of ignorance. PA[E] 52-bit is the current 64-bit mode that current AMD processors use. It is PA[E] 36-bit compatible, but legacy PAE capable processors do not use it. That's why I differentiate from them. > But to stay on the off-topic topic: > I fail to see how the fact that a disk has a multiple of xxx capacity makes it > better than another that has multiple of yyy GB? Dude, they roll off _different_ assembly lines using _different_ fabs. That's why they have _different_ MTBFs too. > I'm also confused what the 55 and 60 degree is supposed to be? Sigh -- go read some product specification guides! And go read some AMD manuals while you're at it! > The reason for my confusion is that the high end Fujitsu enterprise scsi or > fibre disk Remember, interface does _not_ define reliability (that was an original point of this thread ;-). > is 60C and 300GB - does that mean its a comodity disk rather than > an enterprise drive? Give me a model -- or at least the series -- so I can look up the specifications. I can tell you more then! I'm only given you the industry typical values, which do seem to match across Seagate, Hitachi and Western Digital (who uses Hitachi as a fab for most products). Maxtor seems to be about the same as well. Not sure about Fujitsu. But yes, given that I've yet to see a 300GB 1" model with today's "enterprise" platter, mechanical and spindle (10Krpm) technology, it's probably safe to assume such. But I'd rather not assume. I'd rather get the model or at least the series. ;-> > Maybe those things are just that way cause they always were - and have > actually nothing to do with the quality or design of the drives? The reference to the change in operating temperature of commodity was in reference to the recent improvements in low-cost fluid and other technology now in use. Before about 2003, most commodity drives were rated at 40C. Now most are rated for 60C. Enterprise capacity drives have typically been rated for around 55C operation since the '90s -- since 10Krpm became available. Now how far are we going to argue about this? Just like the AMD thread? A thread that had you nit-pick on every detail? A thread that never saw you stop challenging me on just about everything? That's why I've adopted the "feel free to assume I'm pulling it out of my ass" statement. Because at some point I realize you don't want an answer, you want to argue. -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From b.j.smith at ieee.org Mon Dec 5 02:23:53 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Sun, 04 Dec 2005 21:23:53 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <200512042005.20597.loony@loonybin.org> References: <1133741431.5010.212.camel@bert64.oviedo.smithconcepts.com> <1133742500.5010.226.camel@bert64.oviedo.smithconcepts.com> <200512042005.20597.loony@loonybin.org> Message-ID: <1133749433.5010.257.camel@bert64.oviedo.smithconcepts.com> On Sun, 2005-12-04 at 20:05 -0500, Peter Arremann wrote: > -- I will not respond to anything that I said above -- > so for those that care and want to form their own opinion the thread is here: > http://lists.centos.org/pipermail/centos/2005-June/007544.html > (Hope I won't get in trouble for posting a centos mailing list link on a > fedora list) BTW, this is how much you nit-pick things to death ... http://lists.centos.org/pipermail/centos/2005-June/007872.html I like to differentiate whether or not the processor, when in PAE mode, is using 52-bit (x86-64/IA-32e) or 36-bit (x86/IA-32). From the standpoint of a programmer writing user-space programs, it's the exact same. I have received several complements from engineers saying for those writing OSes and compilers, it would have been nice for some manuals to use that nomenclature. Instead, they just refer to it collectively as "PAE" -- even though page tables are _radically_different_ for the 2 modes. ;-> -- Bryan P.S. I'm sorry, but I have a tendency to interject new concepts in the engineering community when they are needed. Forgive me to get approval from you in the future -- especially when you are so ignorant about something like you were on AMD x86-64. -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From markworkman at woh.rr.com Mon Dec 5 02:54:17 2005 From: markworkman at woh.rr.com (mark workman) Date: Sun, 04 Dec 2005 21:54:17 -0500 Subject: amd64-list Digest, Vol 22, Issue 5 In-Reply-To: <20051205021256.4F520738D0@hormel.redhat.com> References: <20051205021256.4F520738D0@hormel.redhat.com> Message-ID: <1133751257.8121.19.camel@localhost.localdomain> http://www.westerndigital.com/en/products/Products.asp?DriveID=158&Language=en We have used this disk for disk to disk backups and intend to use more. How would these compare quality wise in the discussion so far. Are they commodity drives? Are we putting ourselves at risk using a drive like this for long term storage? Thanks for any input. mw From loony at loonybin.org Mon Dec 5 02:59:13 2005 From: loony at loonybin.org (Peter Arremann) Date: Sun, 4 Dec 2005 21:59:13 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <1133748992.5010.252.camel@bert64.oviedo.smithconcepts.com> References: <200512042005.20597.loony@loonybin.org> <1133748992.5010.252.camel@bert64.oviedo.smithconcepts.com> Message-ID: <200512042159.13996.loony@loonybin.org> > > But to stay on the off-topic topic: > > I fail to see how the fact that a disk has a multiple of xxx capacity > > makes it better than another that has multiple of yyy GB? > > Dude, they roll off _different_ assembly lines using _different_ fabs. > That's why they have _different_ MTBFs too. Different assembly lines - yes... Never argued that point of different assembly lines. You however said that the enterprise drives are all multiples of 9Gb in capacity and rated for 55C. Hence my "why would that make a difference" statement. > > I'm also confused what the 55 and 60 degree is supposed to be? > > Sigh -- go read some product specification guides! > And go read some AMD manuals while you're at it! Woohoo - so proud of you - you mastered the basic concept of personal attacks - which puts you at the same level as a 4 year old or a german shepherd. > > The reason for my confusion is that the high end Fujitsu enterprise scsi > > or fibre disk > > Remember, interface does _not_ define reliability (that was an original > point of this thread ;-). That was not an argument but a sentence fragment used to specify the exact disk I was talking about so you would not be confused on which model I mean. > > is 60C and 300GB - does that mean its a comodity disk rather than > > an enterprise drive? > > Give me a model -- or at least the series -- so I can look up the > specifications. I can tell you more then! See above comment - Fujitsu has 1 model that is 300GB, scsi. that's also the only of their entierprise disks that has 60C specs - all others are 55C... > I'm only given you the industry typical values, which do seem to match > across Seagate, Hitachi and Western Digital (who uses Hitachi as a fab > for most products). Maxtor seems to be about the same as well. Not > sure about Fujitsu. Good - I have no issue with typical values - but neither capacity being a multiple of 9.1GB nor the 55C have anything to do with the reliability of the disk. > But yes, given that I've yet to see a 300GB 1" model with today's > "enterprise" platter, mechanical and spindle (10Krpm) technology, it's > probably safe to assume such. > > But I'd rather not assume. I'd rather get the model or at least the > series. ;-> Sorry - I will NOT point you to datasheets again. I've done that often enough and each time I did you found some reason on why Intel, AMD or IBM or whoever I had were wrong. Remember how intel's docs said 36bit bus and you said its 32? When Intel docs said 2 clock timing and you said they are wrong? When AMD docs said its not possible to get access to larger memory regions and you said its wrong? > > Maybe those things are just that way cause they always were - and have > > actually nothing to do with the quality or design of the drives? > > The reference to the change in operating temperature of commodity was in > reference to the recent improvements in low-cost fluid and other > technology now in use. Before about 2003, most commodity drives were > rated at 40C. Now most are rated for 60C. Correct - not at all related to my question above though. > Enterprise capacity drives have typically been rated for around 55C > operation since the '90s -- since 10Krpm became available. Correct but unfortunately again not related to what I said. Still the question I had is still unanswered even though you wrote so many lines... So let me ask again: That is all coincidence, right? The guys designing these disks are the same - hence they say "55C worked out well last time, lets do that again" Has nothing to do with reliability or anything else. > Now how far are we going to argue about this? Just like the AMD thread? Remember the time you said that you can't remember the AMD guys name nor find the old post - but then you somehow managed to email him? Oh - an amd email address always is first.lastname at amd.com? So how did you email him if you couldn't remember his name nor were able to find the post? (all in the public archives of the 6/2005 centos mailing list) > A thread that had you nit-pick on every detail? A thread that never saw > you stop challenging me on just about everything? Yes - as I said back then... I went down one step at the time to figure out when we would reach common ground. Unfortunately that never happened before my vacation was over and I had no more time waste. > That's why I've adopted the "feel free to assume I'm pulling it out of > my ass" statement. Because at some point I realize you don't want an > answer, you want to argue. Nope - I just spend an enormous amount of time trying to figure out what part of your statements are made up and what is true. Just like this statement. Yes, Enterprise disks are usually made on their own dedicated production lines. Yes, they used to come in multiples of 540MB, then later in multiples of 9.1GB. No - that capacity has nothing to do with their reliability. Yes, the WD RE drives are from the same lines as their other products. Yes, they are built with the same components. No, they are much more reliable because of the measurements they have to pass. Again, my offer stands. Let the AMD thread be - it has nothing to do with this argument. Also, if you do not personally attack me, I will not post a personal attack either. Lets argue with facts. Better even - lets take votes on who even cares to read this thread still and then just stop it? That was my original intention anyway. Peter. From loony at loonybin.org Mon Dec 5 03:12:01 2005 From: loony at loonybin.org (Peter Arremann) Date: Sun, 4 Dec 2005 22:12:01 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <1133749433.5010.257.camel@bert64.oviedo.smithconcepts.com> References: <200512042005.20597.loony@loonybin.org> <1133749433.5010.257.camel@bert64.oviedo.smithconcepts.com> Message-ID: <200512042212.02043.loony@loonybin.org> On Sunday 04 December 2005 21:23, you wrote: > BTW, this is how much you nit-pick things to death ... > http://lists.centos.org/pipermail/centos/2005-June/007872.html > P.S. I'm sorry, but I have a tendency to interject new concepts in the > engineering community when they are needed. Forgive me to get approval > from you in the future -- especially when you are so ignorant about > something like you were on AMD x86-64. *nods* yes... So the manufacturer of the device you talk says something different than you. So you're right, AMD is wrong and I'm nit-picking? You know, this really isn't worth my time. I'd recommend that you print the following lines out, in really big letters and frame it... I AM WRONG - ABSOLUTELY UNWORTHY OF BEING IN THE PRESENCE OF ONE GLORIOUS BRYAN "TheBS" SMITH. PLEASE FORGIVE MY ARGUING AND I'M SHIVERING IN FEAR AS I DO NOT WISH TO BE SUBJECT TO YOUR ETERNAL WRATH. And with that, lets drop it :-) Peter. From b.j.smith at ieee.org Mon Dec 5 05:03:50 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Mon, 05 Dec 2005 00:03:50 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <200512042212.02043.loony@loonybin.org> References: <200512042005.20597.loony@loonybin.org> <1133749433.5010.257.camel@bert64.oviedo.smithconcepts.com> <200512042212.02043.loony@loonybin.org> Message-ID: <1133759030.5010.262.camel@bert64.oviedo.smithconcepts.com> On Sun, 2005-12-04 at 22:12 -0500, Peter Arremann wrote: > I AM WRONG - ABSOLUTELY UNWORTHY OF BEING IN THE PRESENCE OF ONE GLORIOUS > BRYAN "TheBS" SMITH. PLEASE FORGIVE MY ARGUING AND I'M SHIVERING IN FEAR AS I > DO NOT WISH TO BE SUBJECT TO YOUR ETERNAL WRATH. It would be one thing if you want honest answers. But you typically don't, and I'm not the only person you do it to. I've put out all the technical information. Research it and find out if what I say is indeed true. I can't do much more than that, other than do to it over time. Trust is earned after years of making sound, technical statements. -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From b.j.smith at ieee.org Mon Dec 5 05:06:16 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Mon, 05 Dec 2005 00:06:16 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <200512042212.02043.loony@loonybin.org> References: <200512042005.20597.loony@loonybin.org> <1133749433.5010.257.camel@bert64.oviedo.smithconcepts.com> <200512042212.02043.loony@loonybin.org> Message-ID: <1133759176.5010.266.camel@bert64.oviedo.smithconcepts.com> On Sun, 2005-12-04 at 22:12 -0500, Peter Arremann wrote: > *nods* yes... So the manufacturer of the device you talk says something > different than you. So you're right, AMD is wrong and I'm nit-picking? AMD and Intel do not differentiate between 36-bit and 52-pin PAE modes in some of their documentation. They confusingly call them all PAE, or they will say PAE, and then say "36-bit" or "52-bit" or -- worse yet -- 32-bit mode or Long mode. They are distinctly two different modes of operation from the standpoint of the OS/memory manager. From the user-space, it's no different, because both use 48-bit pointers. -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From cochranb at speakeasy.net Mon Dec 5 21:40:22 2005 From: cochranb at speakeasy.net (Robert L Cochran) Date: Mon, 05 Dec 2005 16:40:22 -0500 Subject: Opteron Vs. Athlon X2 Message-ID: <4394B3C6.5050009@speakeasy.net> I'm Christmas shopping and in need of advice. I'm currently using a machine that has an Athlon 64 3500+ (single core) processor with an MSI K8N Neo2 Platinum motherboard. I'm using 2 Gb of Adata memory which is actually running on 2T timings. The motherboard was flashed with the v1.9 BIOS which upgrades the board to processor E6 CPU (I'm not familiar with the "E6" terminology.) Now I'm on the Monarch Computer website looking at Athlon X2 and Opteron processors, and I'm wondering which one would be a really good step up for me for the next 2 years -- preferably without having to get a new motherboard. Does an Opteron dual-core give me any advantage over the Athlon X2 besides a bigger L2 cache? Can I run Fedora Core 4 (and later on, Fedora Core 5) on the processor? Can I take an Opteron processor and drop it into my existing MSI motherboard, or should I expect to buy a new motherboard also? If so, what motherboard? What would be better for a developer -- the Athlon X2 or the Opteron? Thanks Bob Cochran Greenbelt, Maryland From loony at loonybin.org Tue Dec 6 00:35:34 2005 From: loony at loonybin.org (Peter Arremann) Date: Mon, 5 Dec 2005 19:35:34 -0500 Subject: Opteron Vs. Athlon X2 In-Reply-To: <4394B3C6.5050009@speakeasy.net> References: <4394B3C6.5050009@speakeasy.net> Message-ID: <200512051935.34685.loony@loonybin.org> On Monday 05 December 2005 16:40, Robert L Cochran wrote: > I'm Christmas shopping and in need of advice. > > I'm currently using a machine that has an Athlon 64 3500+ (single core) > processor with an MSI K8N Neo2 Platinum motherboard. I'm using 2 Gb of > Adata memory which is actually running on 2T timings. The motherboard > was flashed with the v1.9 BIOS which upgrades the board to processor E6 > CPU (I'm not familiar with the "E6" terminology.) E6 stepping has better memory timing capability (400 with a higher number of memory banks) and improved power saving. E3s overclock better though :-D > Now I'm on the Monarch Computer website looking at Athlon X2 and Opteron > processors, and I'm wondering which one would be a really good step up > for me for the next 2 years -- preferably without having to get a new > motherboard. The board you've got is a 939 pin socket and will not accept dual core cpus or opterons (socket 940). The highest you can get is a FX-57 but that's big bucks for not much more performance. > Does an Opteron dual-core give me any advantage over the Athlon X2 > besides a bigger L2 cache? Not really that much. In fact, because of the slower memory timing of socker 940 opterons (registered memory adds latency) the performance improvements from the larger cache are mostly negated according to our tests. > Can I run Fedora Core 4 (and later on, Fedora Core 5) on the processor? Yes. No issues. > Can I take an Opteron processor and drop it into my existing MSI > motherboard, or should I expect to buy a new motherboard also? If so, > what motherboard? No - Socket 940 (most opterons) and 939 (some 1xx opterons and Athlon64) are not compatible. The 939 1xx opterons that are out now are nothing more than a relabeled Athlon64 X2 anyway. > What would be better for a developer -- the Athlon X2 or the Opteron? I would recommend the X2 over two 2xx single core opterons. You can reuse your memory if you choose to. You will save yourself money on CPU and memory. Your motherboard will be cheaper. The opterons will be louder because of the addl. fans. If you're thinking of a single 1xx 939 opteron you might have a hard time finding a board with support for it (two months ago I could barely find any boards that were certified for it). It will run anyway just fine on 99% of all 939 boards that work with the equivalent X2. On the other side, if you go for dual 2xx opterons and you pay extra for a good board, you get a huge improvement on IO. Multiple PCI-X busses and the like are nice to have on most servers but for a developers workstation it doesn't really matter. Enjoy your chrismas shopping, Peter. From cochranb at speakeasy.net Tue Dec 6 01:16:03 2005 From: cochranb at speakeasy.net (Robert L Cochran) Date: Mon, 05 Dec 2005 20:16:03 -0500 Subject: Opteron Vs. Athlon X2 In-Reply-To: <200512051935.34685.loony@loonybin.org> References: <4394B3C6.5050009@speakeasy.net> <200512051935.34685.loony@loonybin.org> Message-ID: <4394E653.6060203@speakeasy.net> Thanks, Peter. I have a lot of thinking to do about this. Bob Peter Arremann wrote: >On Monday 05 December 2005 16:40, Robert L Cochran wrote: > > >>I'm Christmas shopping and in need of advice. >> >>I'm currently using a machine that has an Athlon 64 3500+ (single core) >>processor with an MSI K8N Neo2 Platinum motherboard. I'm using 2 Gb of >>Adata memory which is actually running on 2T timings. The motherboard >>was flashed with the v1.9 BIOS which upgrades the board to processor E6 >>CPU (I'm not familiar with the "E6" terminology.) >> >> >E6 stepping has better memory timing capability (400 with a higher number of >memory banks) and improved power saving. E3s overclock better though :-D > > > > >>Now I'm on the Monarch Computer website looking at Athlon X2 and Opteron >>processors, and I'm wondering which one would be a really good step up >>for me for the next 2 years -- preferably without having to get a new >>motherboard. >> >> >The board you've got is a 939 pin socket and will not accept dual core cpus or >opterons (socket 940). The highest you can get is a FX-57 but that's big >bucks for not much more performance. > > > >>Does an Opteron dual-core give me any advantage over the Athlon X2 >>besides a bigger L2 cache? >> >> >Not really that much. In fact, because of the slower memory timing of socker >940 opterons (registered memory adds latency) the performance improvements >from the larger cache are mostly negated according to our tests. > > > >>Can I run Fedora Core 4 (and later on, Fedora Core 5) on the processor? >> >> >Yes. No issues. > > > >>Can I take an Opteron processor and drop it into my existing MSI >>motherboard, or should I expect to buy a new motherboard also? If so, >>what motherboard? >> >> >No - Socket 940 (most opterons) and 939 (some 1xx opterons and Athlon64) are >not compatible. The 939 1xx opterons that are out now are nothing more than a >relabeled Athlon64 X2 anyway. > > > >>What would be better for a developer -- the Athlon X2 or the Opteron? >> >> >I would recommend the X2 over two 2xx single core opterons. You can reuse your >memory if you choose to. You will save yourself money on CPU and memory. Your >motherboard will be cheaper. The opterons will be louder because of the addl. >fans. >If you're thinking of a single 1xx 939 opteron you might have a hard time >finding a board with support for it (two months ago I could barely find any >boards that were certified for it). It will run anyway just fine on 99% of >all 939 boards that work with the equivalent X2. > >On the other side, if you go for dual 2xx opterons and you pay extra for a >good board, you get a huge improvement on IO. Multiple PCI-X busses and the >like are nice to have on most servers but for a developers workstation it >doesn't really matter. > >Enjoy your chrismas shopping, > >Peter. > > > From b.j.smith at ieee.org Tue Dec 6 03:42:14 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Mon, 05 Dec 2005 22:42:14 -0500 Subject: Opteron Vs. Athlon X2 In-Reply-To: <200512051935.34685.loony@loonybin.org> References: <4394B3C6.5050009@speakeasy.net> <200512051935.34685.loony@loonybin.org> Message-ID: <1133840534.5114.7.camel@bert64.oviedo.smithconcepts.com> On Mon, 2005-12-05 at 19:35 -0500, Peter Arremann wrote: > E6 stepping has better memory timing capability (400 with a higher number of > memory banks) Huh? All Socket-939/940 processors have 4x 32-bit banks (2x 64-bit DDR channels). FYI, Socket-754 + 184 (2nd DDR channel traces) = 938. ;-> > and improved power saving. E3s overclock better though :-D > The board you've got is a 939 pin socket and will not accept dual core cpus or > opterons (socket 940). FYI, the new dual-core Opteron 165 and 175 are Socket-939. They use unregistered DDR SDRAM, just like any other Socket-939 processor. > Not really that much. In fact, because of the slower memory timing of socker > 940 opterons (registered memory adds latency) the performance improvements > from the larger cache are mostly negated according to our tests. Agreed, but if you're going with a Socket-939 Opteron 1xx, then that's not an issue. > No - Socket 940 (most opterons) and 939 (some 1xx opterons and Athlon64) are > not compatible. The 939 1xx opterons that are out now are nothing more than a > relabeled Athlon64 X2 anyway. Huh? Not exactly true, especially since some are E5 and others are E6. And there _are_ the thermal differences. > I would recommend the X2 over two 2xx single core opterons. You can reuse your > memory if you choose to. You will save yourself money on CPU and memory. Your > motherboard will be cheaper. The opterons will be louder because of the addl. > fans. Not if you get the "HE" single core versions that use only 55W. The Opteron 246HE is virtually _no_ premium over the older 246. The Opteron 250+ are the newer steppings. You'll have twice the number of memory channels and width. But from a cost perspective, I agree. > If you're thinking of a single 1xx 939 opteron you might have a hard time > finding a board with support for it (two months ago I could barely find any > boards that were certified for it). It will run anyway just fine on 99% of > all 939 boards that work with the equivalent X2. Agreed. The value of the Socket-939 Opteron 1xx is really the new, entry-level server mainboards built for it. We've talked about the SuperMicro here. > On the other side, if you go for dual 2xx opterons and you pay extra for a > good board, you get a huge improvement on IO. Multiple PCI-X busses and the > like are nice to have on most servers but for a developers workstation it > doesn't really matter. As long as you don't need more than 100MBps in disk and network. Otherwise, PCI-X is still much better because most desktop mainboards only ship with PCIe x1 channels outside of video. -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From bill at cse.ucdavis.edu Tue Dec 6 03:58:48 2005 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Mon, 5 Dec 2005 19:58:48 -0800 Subject: Opteron Vs. Athlon X2 In-Reply-To: <1133840534.5114.7.camel@bert64.oviedo.smithconcepts.com> References: <4394B3C6.5050009@speakeasy.net> <200512051935.34685.loony@loonybin.org> <1133840534.5114.7.camel@bert64.oviedo.smithconcepts.com> Message-ID: <20051206035848.GP27230@cse.ucdavis.edu> On Mon, Dec 05, 2005 at 10:42:14PM -0500, Bryan J. Smith wrote: > On Mon, 2005-12-05 at 19:35 -0500, Peter Arremann wrote: >> E6 stepping has better memory timing capability (400 with a higher number of >> memory banks) > > Huh? All Socket-939/940 processors have 4x 32-bit banks (2x 64-bit DDR > channels). FYI, Socket-754 + 184 (2nd DDR channel traces) = 938. ;-> On the older revision chips the memory bus would be down clocked if you had a larger number of banks, I don't remember the numbers exactly but something like 4 banks = PC2700 and 8 banks = PC2100. The E4/E6 maintain PC3200 for a larger number of banks. > > On the other side, if you go for dual 2xx opterons and you pay extra for a > > good board, you get a huge improvement on IO. Multiple PCI-X busses and the > > like are nice to have on most servers but for a developers workstation it > > doesn't really matter. > > As long as you don't need more than 100MBps in disk and network. > Otherwise, PCI-X is still much better because most desktop mainboards > only ship with PCIe x1 channels outside of video. Many of the s939 boards have single or dual GigE on the motherboard, even the midrange boards have pci-e 2x (like the MSI K8N neo4) AND dual GigE. All the SLI boards (even the cheapie foxconn for $120) supports a x8 slot for fancy RAID or networking if you need it. Getting a second 16x slot is possible but does add a significant premium (around $100). So I find it hard to believe where any workstation would need PCI-X unless there's some strange pci-x only card required. 8 SATA ports are also relatively easy to come by on the motherboard. What I would consider is that for CPU intensive use whether you need a second memory bus or not, if your running 2 cache-unfriendly jobs a second socket (assuming a 4+4 memory system) can double the throughput when compared to a single socket system. Alas, it also incurs a substantial cost, size, and heat penalty. Personally I'd get a single socket, pci-e, and a dual core if the price point made sense and I planned to be running > 1 CPU intensive job. Of course there are many workloads that would justify different configurations. -- Bill Broadley Computational Science and Engineering UC Davis From loony at loonybin.org Tue Dec 6 04:41:13 2005 From: loony at loonybin.org (Peter Arremann) Date: Mon, 5 Dec 2005 23:41:13 -0500 Subject: Opteron Vs. Athlon X2 In-Reply-To: <1133840534.5114.7.camel@bert64.oviedo.smithconcepts.com> References: <4394B3C6.5050009@speakeasy.net> <200512051935.34685.loony@loonybin.org> <1133840534.5114.7.camel@bert64.oviedo.smithconcepts.com> Message-ID: <200512052341.14169.loony@loonybin.org> On Monday 05 December 2005 22:42, Bryan J. Smith wrote: > FYI, the new dual-core Opteron 165 and 175 are Socket-939. They use > unregistered DDR SDRAM, just like any other Socket-939 processor. Please read the whole email before posting FYIs like that... I'm well aware of the opteron 1xx for the socket 939 as you see below. > > Not really that much. In fact, because of the slower memory timing of > > socker 940 opterons (registered memory adds latency) the performance > > improvements from the larger cache are mostly negated according to our > > tests. > > Agreed, but if you're going with a Socket-939 Opteron 1xx, then that's > not an issue. > > > No - Socket 940 (most opterons) and 939 (some 1xx opterons and Athlon64) > > are not compatible. The 939 1xx opterons that are out now are nothing > > more than a relabeled Athlon64 X2 anyway. > > Huh? Not exactly true, especially since some are E5 and others are E6. > And there _are_ the thermal differences. They come off the same production line and the difference is the cpu identifier used when finalizing the packaging after the cpus have been binned. Thermal envelopes are determined by measuring the quality of the die and are therefore easily explained with more stringent requirements when selecting the die used. That is the same concept AMD uses for their HE models. > > On the other side, if you go for dual 2xx opterons and you pay extra for > > a good board, you get a huge improvement on IO. Multiple PCI-X busses and > > the like are nice to have on most servers but for a developers > > workstation it doesn't really matter. > > As long as you don't need more than 100MBps in disk and network. > Otherwise, PCI-X is still much better because most desktop mainboards > only ship with PCIe x1 channels outside of video. Why "in disk and network" ? We're talking workstation here, so its to assume that onboard controllers (pata or sata) are being used. In that case, the choice is pretty much down to Via and NVidia - both of which I thought bypass the pci bottleneck in their chipset designs? Peter. From cochranb at speakeasy.net Tue Dec 6 05:21:47 2005 From: cochranb at speakeasy.net (Robert L Cochran) Date: Tue, 06 Dec 2005 00:21:47 -0500 Subject: Opteron Vs. Athlon X2 In-Reply-To: <200512052341.14169.loony@loonybin.org> References: <4394B3C6.5050009@speakeasy.net> <200512051935.34685.loony@loonybin.org> <1133840534.5114.7.camel@bert64.oviedo.smithconcepts.com> <200512052341.14169.loony@loonybin.org> Message-ID: <43951FEB.9050008@speakeasy.net> Thanks Peter, Bryan, and Bill for your thoughts. I would like to keep to a budget of about USD $600-700 for a CPU upgrade. I want to both develop and use open source software, which means a lot of code-compile-test cycles. I want the compiles to finish quickly. For example, PHP 6.0 (from snaps.php.net) takes about 4-5 minutes to compile on my single core Athlon 64 3500+, and I'd like to cut the compile time in half. I also want to do web development with PHP and databases. I want to be able to keep up with the current CPUs and get exposure to them. With these goals in mind what hardware will give me what I want and fit inside that $700? What do you think will work for me? I want to make use of my existing power supply, memory, and drives as much as possible. If I have to replace my motherboard, I'll consider it. So -- and I say this with humor! -- what can I ask my wife to give me for Christmas without generating heavy expense but still be good enough for me, a computer programmer who does a lot of development? Thanks Bob Cochran Peter Arremann wrote: >On Monday 05 December 2005 22:42, Bryan J. Smith wrote: > > >>FYI, the new dual-core Opteron 165 and 175 are Socket-939. They use >>unregistered DDR SDRAM, just like any other Socket-939 processor. >> >> >Please read the whole email before posting FYIs like that... I'm well aware of >the opteron 1xx for the socket 939 as you see below. > > > >>>Not really that much. In fact, because of the slower memory timing of >>>socker 940 opterons (registered memory adds latency) the performance >>>improvements from the larger cache are mostly negated according to our >>>tests. >>> >>> >>Agreed, but if you're going with a Socket-939 Opteron 1xx, then that's >>not an issue. >> >> >> >>>No - Socket 940 (most opterons) and 939 (some 1xx opterons and Athlon64) >>>are not compatible. The 939 1xx opterons that are out now are nothing >>>more than a relabeled Athlon64 X2 anyway. >>> >>> >>Huh? Not exactly true, especially since some are E5 and others are E6. >>And there _are_ the thermal differences. >> >> >They come off the same production line and the difference is the cpu >identifier used when finalizing the packaging after the cpus have been >binned. Thermal envelopes are determined by measuring the quality of the die >and are therefore easily explained with more stringent requirements when >selecting the die used. That is the same concept AMD uses for their HE >models. > > > > >>>On the other side, if you go for dual 2xx opterons and you pay extra for >>>a good board, you get a huge improvement on IO. Multiple PCI-X busses and >>>the like are nice to have on most servers but for a developers >>>workstation it doesn't really matter. >>> >>> >>As long as you don't need more than 100MBps in disk and network. >>Otherwise, PCI-X is still much better because most desktop mainboards >>only ship with PCIe x1 channels outside of video. >> >> >Why "in disk and network" ? We're talking workstation here, so its to assume >that onboard controllers (pata or sata) are being used. In that case, the >choice is pretty much down to Via and NVidia - both of which I thought bypass >the pci bottleneck in their chipset designs? > >Peter. > > > From loony at loonybin.org Tue Dec 6 05:36:59 2005 From: loony at loonybin.org (Peter Arremann) Date: Tue, 6 Dec 2005 00:36:59 -0500 Subject: Opteron Vs. Athlon X2 In-Reply-To: <43951FEB.9050008@speakeasy.net> References: <4394B3C6.5050009@speakeasy.net> <200512052341.14169.loony@loonybin.org> <43951FEB.9050008@speakeasy.net> Message-ID: <200512060036.59505.loony@loonybin.org> On Tuesday 06 December 2005 00:21, Robert L Cochran wrote: > Thanks Peter, Bryan, and Bill for your thoughts. > > I would like to keep to a budget of about USD $600-700 for a CPU > upgrade. I want to both develop and use open source software, which > means a lot of code-compile-test cycles. I want the compiles to finish > quickly. For example, PHP 6.0 (from snaps.php.net) takes about 4-5 > minutes to compile on my single core Athlon 64 3500+, and I'd like to > cut the compile time in half. I also want to do web development with PHP > and databases. I want to be able to keep up with the current CPUs and > get exposure to them. you're unhappy with 5 minutes? Now I'm suddenly happy with the users at work - they are happy with the 30 or more minutes it takes for our apps to build :-) > With these goals in mind what hardware will give me what I want and fit > inside that $700? What do you think will work for me? I want to make use > of my existing power supply, memory, and drives as much as possible. If > I have to replace my motherboard, I'll consider it. > > So -- and I say this with humor! -- what can I ask my wife to give me > for Christmas without generating heavy expense but still be good enough > for me, a computer programmer who does a lot of development? For me, reuse of components is the most important thing usually. The situation you're personally would go the following route - I know its not the greatest technical nor performance wise - but the price/performance for this upgrade can't be beat (it is also what I'm running, so I can tell you its rock solid) Start with the 939Dual board from Asrock (lowend asus). Its based around a SIS chipset and has the huge advantage that you can not only keep your memory but also the graphics card. Almost all boards that support dual core are pci express. This board also has a 16x PCI-Express slot, so you can later upgrade to such a card (or like me, run a dual head setup with a 8xAGP and a 16xPCI-Express card). In addition to that, the board is dirt cheap - less than $70. http://www.asrock.com/support/CPU_Support/show.asp?Model=939Dual-SATA2 It runs Centos4U1, FC3,4,5T1 without any issue. Centos 3.5 and Solaris 10 both didn't like the network card - but for a developers workstation you can just buy any old pci ethernet card without much thinking about it. Then take a Athlon 64 - X2 4400. Its less than $500 boxed. The next step up would be the X4600 but with only 200Mhz more and half the cache I doubt it will be worth the extra $130 you pay for it. That CPU/board combo should give you a nice performance boost since you got the same clock per core but dual core and twice the cache per core... You won't see half the compile time though because one of the slowest thing these days is linking - and that's always done single threaded. Peter. From jch at scalix.com Tue Dec 6 09:16:18 2005 From: jch at scalix.com (John Haxby) Date: Tue, 6 Dec 2005 09:16:18 +0000 Subject: Opteron Vs. Athlon X2 In-Reply-To: <43951FEB.9050008@speakeasy.net> References: <4394B3C6.5050009@speakeasy.net> Message-ID: <439556E2.2040002@scalix.com> Robert L Cochran wrote: > I would like to keep to a budget of about USD $600-700 for a CPU > upgrade. I want to both develop and use open source software, which > means a lot of code-compile-test cycles. I want the compiles to finish > quickly. For example, PHP 6.0 (from snaps.php.net) takes about 4-5 > minutes to compile on my single core Athlon 64 3500+, and I'd like to > cut the compile time in half. I also want to do web development with > PHP and databases. I want to be able to keep up with the current CPUs > and get exposure to them. When you're compiling is your CPU running at 100% (near enough)? Chances are it is, in which case you should benefit from an extra CPU so long as you run either "make -j2" or "cc -pipe" (or both). A while back when I had an two processor machine for development I found that "cc -pipe" ("make -j2" didn't work with the broken build system) cut compilation time dramatically and both CPUs were busy instead of just one of them. The other advantage of an dual CPU system, at least for software development, was that the autotests found several bugs that we simply hadn't seen before -- every one of them some sort of race condition it was easy to trigger when you can have two processes running at the same time. jch From lamont at gurulabs.com Tue Dec 6 09:31:40 2005 From: lamont at gurulabs.com (Lamont R. Peterson) Date: Tue, 6 Dec 2005 02:31:40 -0700 Subject: Opteron Vs. Athlon X2 In-Reply-To: <439556E2.2040002@scalix.com> References: <4394B3C6.5050009@speakeasy.net> <439556E2.2040002@scalix.com> Message-ID: <200512060231.45117.lamont@gurulabs.com> On Tuesday 06 December 2005 02:16am, John Haxby wrote: > Robert L Cochran wrote: > > I would like to keep to a budget of about USD $600-700 for a CPU > > upgrade. I want to both develop and use open source software, which > > means a lot of code-compile-test cycles. I want the compiles to finish > > quickly. For example, PHP 6.0 (from snaps.php.net) takes about 4-5 > > minutes to compile on my single core Athlon 64 3500+, and I'd like to > > cut the compile time in half. I also want to do web development with > > PHP and databases. I want to be able to keep up with the current CPUs > > and get exposure to them. > > When you're compiling is your CPU running at 100% (near enough)? > Chances are it is, in which case you should benefit from an extra CPU so > long as you run either "make -j2" or "cc -pipe" (or both). A while back > when I had an two processor machine for development I found that "cc > -pipe" ("make -j2" didn't work with the broken build system) cut > compilation time dramatically and both CPUs were busy instead of just > one of them. Actually, I find that I get a little better performance for a single proc machine with make -j2 ... anywhere from 5-20%, depending. On my dual Opteron box, I use -j3. Here is my "rule-of-thumb" for computing the number to pass with the -j switch: 1 for each processor + 1 for each system + 1 if using a distributed build. The thing is, the process of compiling several files requires more than just running the proc at full bore while compiling. There are lulls in processor utilization. These "valleys" (think of the looking at a scrolling processor utilization graph) get filled in by the "extra" process. Usually, they don't all want to do I/O or some other non-processor-intensive activities at the same time, so most of the otherwise wasted clock cycles in the middle get put to good use. Of course, this will mean a little more heat (if you could even measure it ?) during long builds. > The other advantage of an dual CPU system, at least for software > development, was that the autotests found several bugs that we simply > hadn't seen before -- every one of them some sort of race condition it > was easy to trigger when you can have two processes running at the same > time. Gotta love it! :) -- Lamont R. Peterson Senior Instructor Guru Labs, L.C. [ http://www.GuruLabs.com/ ] -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From apjaworski at mmm.com Tue Dec 6 17:34:04 2005 From: apjaworski at mmm.com (apjaworski at mmm.com) Date: Tue, 6 Dec 2005 11:34:04 -0600 Subject: MB advice Message-ID: Hi there, I am slowly getting ready to get a new system. I am running Linux (Fedora) almost exclusively but occasionally I need to boot Windows (XP). I am thinking about an Athlon 64 machine. I would like to put a couple of SATA drives in it with no legacy PATA disks. I would like to be able to install Fedora Core Linux on it with minimal fuss. Do you have any MB recommendations? I was just reading Jeff Garzik page on SATA support in Linux. It looks like his driver library supports Silicon Image 3124 chip well. Are there any MBs out there having this chipset on board? What about nForce4? A lot of MBs seem to use this chipset. I am specifically thinking about MSI K8N Neo4 Platinum. Any comments? Thanks in advance, Andy __________________________________ Andy Jaworski 518-1-01 Process Laboratory 3M Corporate Research Laboratory ----- E-mail: apjaworski at mmm.com Tel: (651) 733-6092 Fax: (651) 736-3122 From tweeks at rackspace.com Tue Dec 6 20:31:18 2005 From: tweeks at rackspace.com (tweeks) Date: Tue, 6 Dec 2005 14:31:18 -0600 Subject: Asus A8V Deluxe Can't see full 4GB Message-ID: <200512061431.18180.tweeks@rackspace.com> Hey Guys.. Disclaimer: I've googled the list... and didn't see anything... I've got a guy running an Asus A8V Deluxe w/Ath64 3000+ and 4 sticks of 1GB and his setup (with either RH-EL3 64bit or RH-EL4 64bit) seems to be exhibiting the classic x86/32bit problem of not being able to map the memory around the 512MB PCI memory/chipset mapping. I'm not sure about the 2.4 kernel, but I would think that the 2.6 64bit kernel (or the motherboard) could map that memory high (beyond the 32 bit range) to access all 4GB, but nothing stands out on how I should go about doing this (BIOS seems void of any such "settings", and I know of no OS/kernel (sysctl) parameters that can do this). And no.. Running PAE is NOT desireable.. This is a 64bit system.. why can't I either map-high, or around the PCI hardware limitation?! (and NO.. I don't want to add another 1GB to "get around it"). This guys wants 4GB.. Not pay for 5 get 4.5. Am I missing something here... or is this system simply NOT going to be able to get to that last PCI mapped 512MB? Anyone? Tweeks From b.j.smith at ieee.org Tue Dec 6 21:29:59 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Tue, 6 Dec 2005 13:29:59 -0800 (PST) Subject: Asus A8V Deluxe Can't see full 4GB In-Reply-To: <200512061431.18180.tweeks@rackspace.com> Message-ID: <20051206212959.56783.qmail@web34106.mail.mud.yahoo.com> tweeks wrote: > And no.. Running PAE is NOT desireable.. AMD's 48-bit addressing mode _requires_ PAE, 52-bit. But it doesn't take a performance hit like old PAE, 36-bit. > Am I missing something here... or is this system simply NOT > going to be able to get to that last PCI mapped 512MB? I think you're not setting up some things correctly. Although the map may use 3.5-4GiB for I/O, you should still have 4GiB usable (even if mapped to 4.5GiB). -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From loony at loonybin.org Tue Dec 6 23:07:17 2005 From: loony at loonybin.org (Peter Arremann) Date: Tue, 6 Dec 2005 18:07:17 -0500 Subject: Asus A8V Deluxe Can't see full 4GB In-Reply-To: <200512061431.18180.tweeks@rackspace.com> References: <200512061431.18180.tweeks@rackspace.com> Message-ID: <200512061807.17739.loony@loonybin.org> On Tuesday 06 December 2005 15:31, tweeks wrote: > Hey Guys.. > > Disclaimer: > I've googled the list... and didn't see anything... > > I've got a guy running an Asus A8V Deluxe w/Ath64 3000+ and 4 sticks of 1GB > and his setup (with either RH-EL3 64bit or RH-EL4 64bit) seems to be > exhibiting the classic x86/32bit problem of not being able to map the > memory around the 512MB PCI memory/chipset mapping. I'm not sure about the > 2.4 kernel, but I would think that the 2.6 64bit kernel (or the > motherboard) could map that memory high (beyond the 32 bit range) to access > all 4GB, but nothing stands out on how I should go about doing this (BIOS > seems void of any such "settings", and I know of no OS/kernel (sysctl) > parameters that can do this). > > And no.. Running PAE is NOT desireable.. This is a 64bit system.. why can't > I either map-high, or around the PCI hardware limitation?! (and NO.. I > don't want to add another 1GB to "get around it"). This guys wants 4GB.. > Not pay for 5 get 4.5. > > Am I missing something here... or is this system simply NOT going to be > able to get to that last PCI mapped 512MB? > > Anyone? > > Tweeks This is normal for many 939 PC boards. The A8V board description isself states right there in big bold letters that if you install 4GB you will only see a little over 3GB. Virtually all boards based on via or and nforce 3 chips have this issue. Most boards with NForce4 do as well but there are some exceptions. All NForce Professional boards I've seen are good. As are SIS. Since your motherboard description says its not possible to see all 4GB and its based on a VIA chipset you'll probably not be happy. Peter. From timothy.leblanc at gmail.com Wed Dec 7 01:51:56 2005 From: timothy.leblanc at gmail.com (Timothy LeBlanc) Date: Tue, 6 Dec 2005 20:51:56 -0500 Subject: Asus A8V Deluxe Can't see full 4GB In-Reply-To: <200512061431.18180.tweeks@rackspace.com> References: <200512061431.18180.tweeks@rackspace.com> Message-ID: <35d616040512061751x124ec6b9p44fc7f29402e95d4@mail.gmail.com> I also have a A8V MB. The manual explains that the mother board uses some memory address space so you can't use all 4 GB. On 12/6/05, tweeks wrote: > > Hey Guys.. > > Disclaimer: > I've googled the list... and didn't see anything... > > I've got a guy running an Asus A8V Deluxe w/Ath64 3000+ and 4 sticks of > 1GB > and his setup (with either RH-EL3 64bit or RH-EL4 64bit) seems to be > exhibiting the classic x86/32bit problem of not being able to map the > memory > around the 512MB PCI memory/chipset mapping. I'm not sure about the 2.4 > kernel, but I would think that the 2.6 64bit kernel (or the motherboard) > could map that memory high (beyond the 32 bit range) to access all 4GB, > but > nothing stands out on how I should go about doing this (BIOS seems void of > any such "settings", and I know of no OS/kernel (sysctl) parameters that > can > do this). > > And no.. Running PAE is NOT desireable.. This is a 64bit system.. why > can't I > either map-high, or around the PCI hardware limitation?! (and NO.. I don't > want to add another 1GB to "get around it"). This guys wants 4GB.. Not > pay > for 5 get 4.5. > > Am I missing something here... or is this system simply NOT going to be > able > to get to that last PCI mapped 512MB? > > Anyone? > > Tweeks > > -- > amd64-list mailing list > amd64-list at redhat.com > https://www.redhat.com/mailman/listinfo/amd64-list > -- My new primary mail account has changed to timothy.leblanc at gmail.com Please make any necessary changes to your address book. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arjan at fenrus.demon.nl Wed Dec 7 06:32:05 2005 From: arjan at fenrus.demon.nl (Arjan van de Ven) Date: Wed, 07 Dec 2005 07:32:05 +0100 Subject: Asus A8V Deluxe Can't see full 4GB In-Reply-To: <200512061431.18180.tweeks@rackspace.com> References: <200512061431.18180.tweeks@rackspace.com> Message-ID: <1133937135.2869.5.camel@laptopd505.fenrus.org> On Tue, 2005-12-06 at 14:31 -0600, tweeks wrote: > Hey Guys.. > > Disclaimer: > I've googled the list... and didn't see anything... > > I've got a guy running an Asus A8V Deluxe w/Ath64 3000+ and 4 sticks of 1GB > and his setup (with either RH-EL3 64bit or RH-EL4 64bit) seems to be > exhibiting the classic x86/32bit problem of not being able to map the memory > around the 512MB PCI memory/chipset mapping. that's not a kernel issue; the chipset has to do this! From tweeks at rackspace.com Tue Dec 6 22:02:11 2005 From: tweeks at rackspace.com (tweeks) Date: Tue, 6 Dec 2005 16:02:11 -0600 Subject: Asus A8V Deluxe Can't see full 4GB In-Reply-To: <20051206212959.56783.qmail@web34106.mail.mud.yahoo.com> References: <20051206212959.56783.qmail@web34106.mail.mud.yahoo.com> Message-ID: <200512061602.11546.tweeks@rackspace.com> From vladimir at acm.org Wed Dec 7 16:56:51 2005 From: vladimir at acm.org (Vladimir G. Ivanovic) Date: Wed, 07 Dec 2005 08:56:51 -0800 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: Your message of "Sun, 04 Dec 2005 20:05:20 EST." <200512042005.20597.loony@loonybin.org> Message-ID: <200512071656.jB7Gupoi003448@bach.leonora.org> >>>>> "pa" == Peter Arremann writes: pa> pa> On Sunday 04 December 2005 19:28, you wrote: >> Now Peter, do you want to tell the group about our little "history"? Or >> do you like to just challenge me on every list we're on, and the >> countless times I've sent you detailed information on various things >> with regards to how the AMD64 technology works over on the CentOS list >> (e.g., 48-bit PAE mode -- which you were so ignorant of you looked like >> an ass last time we went around ;-)? pa> Bryan, pa> pa> I was hoping your mom had tought you basic manners. My impression is that Bryan has been quite polite in the face of some unnecessarily pointed comments. And, BTW, ad hominem attacks like the manners comment about are always unappreciated. --- Vladimir -- Vladimir G. Ivanovic Palo Alto, CA 94306 +1 650 678 8014 From maurice at harddata.com Wed Dec 7 19:48:38 2005 From: maurice at harddata.com (Maurice Hilarius) Date: Wed, 07 Dec 2005 12:48:38 -0700 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: References: Message-ID: <43973C96.9050301@harddata.com> Mark Hahn wrote: >>As does my sidebar on "Near-line Enterprise Disk" here: >> http://www.samag.com/documents/sam0509a/0509a_s1.htm >> >> > >can you cite your source for the statement that MTBF is .4 Mhours? >I know that was a common figure for desktop drives from 8-10 years ago, >but I do also remember seeing a number of ATA disks that claimed MTBF's >around 1Mhour. > >I'm guessing you just figured that > 1.4 Mhour * (10*5)/(24*7) = .417 Mhour > >this is not an unreasonable guess, based on the assumption that a vendor >would try to carefully minimize the MTBF given the current 3-5 year >warranty periods, which are the same as "enterprise" products, but assuming >a different duty cycle. I expect return rates are higher than this >calculation would lead you to expect though, and therefore vendors would >either try to get away with lower warranty periods or actually bump up >the non-enterprise MTBF to reduce RMA costs. > >but it would be nice to have even a hint of actual, factual MTBF's. > > > Just to wade in here, the new S-ATA RAID drives from WD ( 2500Yd and 4000YR) have claimed MTBF of 1.2 million hours. Seagate also have very long MTBFs on their NL 35 models -- With our best regards, Maurice W. Hilarius Telephone: 01-780-456-9771 Hard Data Ltd. FAX: 01-780-456-9772 11060 - 166 Avenue email:maurice at harddata.com Edmonton, AB, Canada http://www.harddata.com/ T5X 1Y3 -------------- next part -------------- An HTML attachment was scrubbed... URL: From maurice at harddata.com Wed Dec 7 19:53:30 2005 From: maurice at harddata.com (Maurice Hilarius) Date: Wed, 07 Dec 2005 12:53:30 -0700 Subject: Opteron Vs. Athlon X2 In-Reply-To: <4394B3C6.5050009@speakeasy.net> References: <4394B3C6.5050009@speakeasy.net> Message-ID: <43973DBA.3040907@harddata.com> Robert L Cochran wrote: > I'm Christmas shopping and in need of advice. > > I'm currently using a machine that has an Athlon 64 3500+ (single > core) processor with an MSI K8N Neo2 Platinum motherboard. I'm using 2 > Gb of Adata memory which is actually running on 2T timings. The > motherboard was flashed with the v1.9 BIOS which upgrades the board to > processor E6 CPU (I'm not familiar with the "E6" terminology.) > > Now I'm on the Monarch Computer website looking at Athlon X2 and > Opteron processors, and I'm wondering which one would be a really good > step up for me for the next 2 years -- preferably without having to > get a new motherboard. > > Does an Opteron dual-core give me any advantage over the Athlon X2 > besides a bigger L2 cache? > ECC RAM capability, assuming you have a board and BIOS that also support that. > Can I run Fedora Core 4 (and later on, Fedora Core 5) on the processor? > I think you can safely assume FC "anything" will run on AMD CPUs. > Can I take an Opteron processor and drop it into my existing MSI > motherboard, or should I expect to buy a new motherboard also? If so, > what motherboard? > Doubtful > What would be better for a developer -- the Athlon X2 or the Opteron? Depends on your needs. Opterons are meant for Workstations/small servers, hence the ECC support. Athlon64 are meant for commodity desktops Sempron64 are meant for econo desktops If you are interested in overclocking word is that the Opteron S939 chips seem more capable for overclocking. -- With our best regards, Maurice W. Hilarius Telephone: 01-780-456-9771 Hard Data Ltd. FAX: 01-780-456-9772 11060 - 166 Avenue email:maurice at harddata.com Edmonton, AB, Canada http://www.harddata.com/ T5X 1Y3 From maurice at harddata.com Wed Dec 7 19:57:59 2005 From: maurice at harddata.com (Maurice Hilarius) Date: Wed, 07 Dec 2005 12:57:59 -0700 Subject: Personal insult battles In-Reply-To: <1133759176.5010.266.camel@bert64.oviedo.smithconcepts.com> References: <200512042005.20597.loony@loonybin.org> <1133749433.5010.257.camel@bert64.oviedo.smithconcepts.com> <200512042212.02043.loony@loonybin.org> <1133759176.5010.266.camel@bert64.oviedo.smithconcepts.com> Message-ID: <43973EC7.9020602@harddata.com> BRYAN, PETER, PERHAPS THIS DISCUSSION SHOULD BE TAKEN OFF LIST?? Yes, I AM SHOUTING! I KNOW!! -- With our best regards, Maurice W. Hilarius Telephone: 01-780-456-9771 Hard Data Ltd. FAX: 01-780-456-9772 11060 - 166 Avenue email:maurice at harddata.com Edmonton, AB, Canada http://www.harddata.com/ T5X 1Y3 From bill at cse.ucdavis.edu Wed Dec 7 20:12:39 2005 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Wed, 7 Dec 2005 12:12:39 -0800 Subject: Opteron Vs. Athlon X2 In-Reply-To: <43973DBA.3040907@harddata.com> References: <4394B3C6.5050009@speakeasy.net> <43973DBA.3040907@harddata.com> Message-ID: <20051207201238.GD12271@cse.ucdavis.edu> On Wed, Dec 07, 2005 at 12:53:30PM -0700, Maurice Hilarius wrote: > > Does an Opteron dual-core give me any advantage over the Athlon X2 > > besides a bigger L2 cache? Some of the X2s have 1/2 the cache of the opteron, some have the same (1MB per core). > ECC RAM capability, assuming you have a board and BIOS that also support > that. The Athlon 64 X2 supports ECC, at least the "AMD Athlon 64 X2 Dual Core Product Sheet" (pub # 33425) claims it does. Specifically double bit detect and single bit correct. > > Can I take an Opteron processor and drop it into my existing MSI > > motherboard, or should I expect to buy a new motherboard also? If so, > > what motherboard? > > > Doubtful Dunno. Keep in mind most opterons are 940 pin, only the newer 1xx's are 939 pins. I've heard that sun was part of the influence. The differences between 939 pin opterons and amd64's are from what I can tell only clock speed and cache. > Depends on your needs. > Opterons are meant for Workstations/small servers, hence the ECC support. Careful here. S940 opterons require ECC registered memory. S939s do not. In general you are much more likely to get full ECC (motherboard, cpu and bios) compatibility with an opteron then you are in a amd64. But that isn't the CPUs fault. S939 opterons and S939 amd64's use non-registered memory. They support ECC, but many motherboards seem not to support it. On the ones I've checked the manual nor bios mentioned ECC. Additionally in other manuals I've seen that ECC "works" but doesn't provide ECC functionality. -- Bill Broadley Computational Science and Engineering UC Davis From b.j.smith at ieee.org Wed Dec 7 20:31:06 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Wed, 7 Dec 2005 12:31:06 -0800 (PST) Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- providing technical information In-Reply-To: <200512071656.jB7Gupoi003448@bach.leonora.org> Message-ID: <20051207203106.29753.qmail@web34102.mail.mud.yahoo.com> "Vladimir G. Ivanovic" wrote: > My impression is that Bryan has been quite polite in the > face of some unnecessarily pointed comments. > And, BTW, ad hominem attacks like the manners comment about > are always unappreciated. I don't have a problem with anyone's demeanor here. So don't worry about that. I provide a _lot_ of technical information to various groups, and really try to "do my homework." That includes providing references as best as I can, when I can find them, and trying to explain why you may never find exactly what you want with some details. I don't say things lightly, and if I'm saying something, it's from experience. Yours _may_ vary, and I'll willing to accept that -- especially if the context/application is quite different (e.g., web servers using software RAID v. file servers using hardware RAID). I'm more than ready and willing to be proven wrong. The _only_ problem I have is when I get berrated for providing information, but no one else provides anything that contradicts. We're all here to help each other as we're all looking for information. I don't know why Peter insists on picking apart anything I say, but for people like him, I have my standard disclaimer -- please find out so we _all_ can benefit. You won't see my asking for anyone else to "prove" their viewpoint. I leave it up to others to challenge themselves, just as I challenge myself. So ... please don't ask me for "proof" when you don't have any either -- let alone continue to not accept the references I provide, then nit-pick apart little details outside the context I give them. E.g., vendors are marketing these commodity disk capacities that have been tested to exceptional tolerances as "enterprise," but they are very much not enterprise capacities and quality. I've shown specification sheets of drives with commodity capacities (100-400GB) at 1.0M hours MTBF like the Seagate NL35 and Western Digital Caviar RE, which match the exact capacity/performance specifications as the Seagate Barracuda 7200.8 and Western Digital Caviar SE, respectively. Anyone can do further research on the enterprise capacities (18-146GB) and find 1.4M hours MTBF. But what you won't find is OEM/desktop grade MTBF anymore, and some vendors like Hitachi have cut out MTBF ratings altogether. But if the "enterprise" versions of commodity capacity (100-400GB) drives are only 1.0M hours MTBF, you can be rest assured the OEM/desktop grades are well below that. Going back to the 50,000 restarts and 8 hours/day, maybe 14 hours/day (0.7M hours MTBF), maximum. Lastly, getting away from end-users, but more to "service level agreements" (SLA) with integrators (hey, most of here support RHEL and sell some with SLAs, right? ;-), I know _no_ HD manufacturer/vendor that will warranty such OEM/desktop grade drives when they know they will go in a 24x7 system. That's the reason for these 1.0M hour MTBF "enterprise" versions like the Seagate NL35 and WD Caviar RE. Please take this information as I have provided it. If you wish not to, again, I have no problem leaving you with the belief that I acquire everything from my rectum. I know it takes years to trust what comes from someone, and I'm willing to spend years earning that trust. ;-> -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From hahn at physics.mcmaster.ca Wed Dec 7 20:52:14 2005 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Wed, 7 Dec 2005 15:52:14 -0500 (EST) Subject: Opteron Vs. Athlon X2 In-Reply-To: <43973DBA.3040907@harddata.com> Message-ID: > > Does an Opteron dual-core give me any advantage over the Athlon X2 > > besides a bigger L2 cache? > > > ECC RAM capability, assuming you have a board and BIOS that also support > that. the ath64 X2 datasheet says that it supports SECDED. > > Can I take an Opteron processor and drop it into my existing MSI > > motherboard, or should I expect to buy a new motherboard also? If so, > > what motherboard? > > Doubtful I would expect any s939 chip to work in a lot of recent/updated/etc s939 boards, including the s939 1xx opterons. in fact, I'm not sure what distinguishes an athlon64 4000+ (2.4 GHz, 1M cache) from an opteron 150. perhaps not surprisingly, their prices are quite similar ($330 or so on pricewatch). interestingly, the ath64 x2 version (4800+, 2.4, 2x1M) is around $790 - a 20% premium. if your choice is 2xSC vs 1xDC, (and you don't mind less memory bandwidth), you can probably save more than the $130 premium because single-socket boards are cheaper... regards, mark hahn. From jmorris at beau.org Wed Dec 7 21:15:16 2005 From: jmorris at beau.org (John Morris) Date: Wed, 07 Dec 2005 15:15:16 -0600 Subject: Asus A8V Deluxe Can't see full 4GB In-Reply-To: <200512061431.18180.tweeks@rackspace.com> References: <200512061431.18180.tweeks@rackspace.com> Message-ID: <1133990116.2970.11.camel@mjolnir> On Tue, 2005-12-06 at 14:31, tweeks wrote: > I've got a guy running an Asus A8V Deluxe w/Ath64 3000+ and 4 sticks of 1GB > and his setup (with either RH-EL3 64bit or RH-EL4 64bit) seems to be > exhibiting the classic x86/32bit problem of not being able to map the memory > around the 512MB PCI memory/chipset mapping. I'm not sure about the 2.4 > kernel, but I would think that the 2.6 64bit kernel (or the motherboard) > could map that memory high (beyond the 32 bit range) to access all 4GB, but > nothing stands out on how I should go about doing this (BIOS seems void of > any such "settings", and I know of no OS/kernel (sysctl) parameters that can > do this). Look in your manual, my version 3 copy says, on page 4-17, that it has an option "DRAM Over 4G Remapping" that defaults to disabled. My machine only has a pair of 1GB sticks in it currently so I can't test it. Would like to know if I'd get 4GB instead of 3.5GB so could ya report back? -- John M. http://www.beau.org/~jmorris This post is 100% M$Free! Geekcode 3.1:GCS C+++ UL++++$ P++ L+++ W++ w--- Y++ b++ 5+++ R tv- e* r From hahn at physics.mcmaster.ca Wed Dec 7 21:48:25 2005 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Wed, 7 Dec 2005 16:48:25 -0500 (EST) Subject: Opteron Vs. Athlon X2 In-Reply-To: <20051207201238.GD12271@cse.ucdavis.edu> Message-ID: > The differences between 939 pin opterons and amd64's are from what > I can tell only clock speed and cache. there are some which apparently have the same pinout, clock, cache. I didn't check to see whether they have the same rev, though (sse3?) > S939 opterons and S939 amd64's use non-registered memory. They support > ECC, but many motherboards seem not to support it. On the ones I've > checked the manual nor bios mentioned ECC. Additionally in other manuals > I've seen that ECC "works" but doesn't provide ECC functionality. the tyan S2866 appears to be a fairly normal s939 which http://www.tyan.com/products/html/tomcatk8esli.html the manual definitely presents ECC options and even shows 'enabled' as the reccomended/default. this board looks pretty nice - anyone have experience with it? the S2866G3NR model even has integrated video, and avoids the 3-stack of audio connectors that prevent some boards fitting 1U. even has IPMI support, though some people might shed a tears about multiple nics on a 32x33 PCI and on weekends, you can use NV SLI on it ;) actually, now that I look, the supermicro H8SSL-i also definitely supports ECC (and might have a slightly nicer LAN config.) for sheer obsessive completeness, I also checked the manual for the asus a8n5x - unquestionably a desktop-oriented board. it also appears to have a working ECC config option, though it defaults off. regards, mark hahn. From bill at cse.ucdavis.edu Wed Dec 7 22:08:09 2005 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Wed, 7 Dec 2005 14:08:09 -0800 Subject: Opteron Vs. Athlon X2 In-Reply-To: References: <20051207201238.GD12271@cse.ucdavis.edu> Message-ID: <20051207220809.GF12271@cse.ucdavis.edu> My desktop at home is a MSI Neo4, I think it's the platinum. 2 4 channel SATA controllers. 2 GigE 16x PCI-e, 1x PCI-e 10 USB 2.0 2 IEEE 1394 ports At: http://www.xyzcomputing.com/index.php?option=content&task=view&id=492 It says: ? Supports dual channel DDR 266/333/400, using four 184-pin DDR DIMMs. ? Supports a maximum memory size up to 4GB without ECC. ? Supports 2.5v DDR SDRAM DIMM. I downloaded the manual and acrobat says not a single mention of ECC. So what does "without" mean? That only 4GB is not allowed with ECC? Or that ECC doesn't work at all. Or that it works but you don't actually get the error correction? I've seen wording in amd64 manuals (of the older generation) that I read to mean that the dimms would work (the CPU would be able to read/write to memory) but not provide ECC. Things seemed really murky for the athlon, athlon XP, and earlier AMD64 motherboards I looked at. I was glad that the opteron was rather clear on this. From Mark's post it looks like several of the motherboard vendors are clarifying this for AMD64 boards as well. -- Bill Broadley Computational Science and Engineering UC Davis From b.j.smith at ieee.org Wed Dec 7 22:28:18 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Wed, 7 Dec 2005 14:28:18 -0800 (PST) Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- JMR SATAStor 6x2.5" in 1x5.25" array In-Reply-To: <43973C96.9050301@harddata.com> Message-ID: <20051207222818.92460.qmail@web34101.mail.mud.yahoo.com> Maurice Hilarius wrote: > Just to wade in here, the new S-ATA RAID drives from WD ( > 2500Yd and 4000YR) have claimed MTBF of 1.2 million hours. Yes, the Caviar RE2 series. I mentioned exactly that in my previous post. The RE2 are 1.2M, the RE are 1.0M. ;-> > Seagate also have very long MTBFs on their NL 35 models Also what I've been saying repeatedly. ;-> If these are the "enterprise" versions of their "commodity capacity" drives, and offering 1.0+M, then the standard OEM/desktop rated models of the exact same capacity/technical specifications that come off the same line aren't over 1.0M. That has been my continuous point. ;-> [ Again, the current NL35 are enterprise-rated Barracuda 7200.8, the Caviar RE are enterprise-rated Caviar SE. The RE2 are the 16MB Caviar SE versions. ] -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From hahn at physics.mcmaster.ca Wed Dec 7 22:34:40 2005 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Wed, 7 Dec 2005 17:34:40 -0500 (EST) Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- providing technical information In-Reply-To: <20051207203106.29753.qmail@web34102.mail.mud.yahoo.com> Message-ID: > I've shown specification sheets of drives with commodity > capacities (100-400GB) at 1.0M hours MTBF like the Seagate > NL35 and Western Digital Caviar RE, which match the exact > capacity/performance specifications as the Seagate Barracuda > 7200.8 and Western Digital Caviar SE, respectively. Anyone arrg! this is the sort of thing that gets my goat - you claim that if the capacity and performance numbers match, then the only difference between the drives must be some sort of "enterprise" testing. but without hard facts from the vendor, this is just speculation on your part. sure, it could make sense, and might even make Occam happy. but not QED, still speculation. > can do further research on the enterprise capacities > (18-146GB) and find 1.4M hours MTBF. translation: some disks have MTBF of X and capacity of C. this other disk has capacity C, therefore it has MTBF of X. > altogether. But if the "enterprise" versions of commodity > capacity (100-400GB) drives are only 1.0M hours MTBF, you can > be rest assured the OEM/desktop grades are well below that. why does "rest assured" always mean "I say"? > Going back to the 50,000 restarts and 8 hours/day, maybe 14 > hours/day (0.7M hours MTBF), maximum. 8, 14, who's counting? do you have any real backup for the claim (afaikt) that reliability is strictly a function of power cycles? what if you put a "desktop" drive into a nice rack-mounted, UPS-protected, machineroom at a constant 20C with great airflow? I've got a bunch of servers (netboot but swap and /tmp on local disks) that have been in use for ~3 years and smartctl tells me the disks have seen ~100 power cycles. > support RHEL and sell some with SLAs, right? ;-), I know _no_ > HD manufacturer/vendor that will warranty such OEM/desktop > grade drives when they know they will go in a 24x7 system. warranties typically just say something weaselish like "does not cover abuse". how would a vendor refuse warranty replacement because of 24x7? would they just run SMART and divide POH by power cycle count? oh, no! I've abused node2's disk by averaging 289 hours per cycle! (interestingly, current temp is 26C, lifetime min/max is 13/36.) > belief that I acquire everything from my rectum. I know it > takes years to trust what comes from someone, and I'm willing > to spend years earning that trust. ;-> so far, I'm not sure you've done anything to earn anyone's trust on, for instance, the 50,000 * 8 hours => 400Khour MTBF assertion. repeating it for years won't help... From b.j.smith at ieee.org Wed Dec 7 22:40:03 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Wed, 7 Dec 2005 14:40:03 -0800 (PST) Subject: Personal insult battles In-Reply-To: <43973EC7.9020602@harddata.com> Message-ID: <20051207224003.8868.qmail@web34110.mail.mud.yahoo.com> Maurice Hilarius wrote: > BRYAN, PETER, PERHAPS THIS DISCUSSION SHOULD BE TAKEN OFF > LIST?? > Yes, I AM SHOUTING! I KNOW!! ??? -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From cochranb at speakeasy.net Wed Dec 7 22:47:10 2005 From: cochranb at speakeasy.net (Robert L Cochran) Date: Wed, 07 Dec 2005 17:47:10 -0500 Subject: Opteron Vs. Athlon X2 In-Reply-To: <200512060036.59505.loony@loonybin.org> References: <4394B3C6.5050009@speakeasy.net> <200512052341.14169.loony@loonybin.org> <43951FEB.9050008@speakeasy.net> <200512060036.59505.loony@loonybin.org> Message-ID: <4397666E.5030900@speakeasy.net> Thanks everyone for your thoughts. I've told my wife (with help from a vendor's web-based wish list) that I would like the Athlon 64 X2 4400 suggested by Peter. If that shows up under the tree, I'll go ahead and get the Asrock board. I will see what develops. I think I might look for some sort of integrated SD/Compact Flash/MMC/Memory stick card reader that can slide into a drive bay and probably a dual layer DVD+RW drive while I'm at it. Meanwhile, I want to thank everyone for the helpful comments. And yes, I really would like to cut down the compile times. Bob Cochran Peter Arremann wrote: >On Tuesday 06 December 2005 00:21, Robert L Cochran wrote: > > >>Thanks Peter, Bryan, and Bill for your thoughts. >> >>I would like to keep to a budget of about USD $600-700 for a CPU >>upgrade. I want to both develop and use open source software, which >>means a lot of code-compile-test cycles. I want the compiles to finish >>quickly. For example, PHP 6.0 (from snaps.php.net) takes about 4-5 >>minutes to compile on my single core Athlon 64 3500+, and I'd like to >>cut the compile time in half. I also want to do web development with PHP >>and databases. I want to be able to keep up with the current CPUs and >>get exposure to them. >> >> >you're unhappy with 5 minutes? Now I'm suddenly happy with the users at work - >they are happy with the 30 or more minutes it takes for our apps to build :-) > > > > >>With these goals in mind what hardware will give me what I want and fit >>inside that $700? What do you think will work for me? I want to make use >>of my existing power supply, memory, and drives as much as possible. If >>I have to replace my motherboard, I'll consider it. >> >>So -- and I say this with humor! -- what can I ask my wife to give me >>for Christmas without generating heavy expense but still be good enough >>for me, a computer programmer who does a lot of development? >> >> >For me, reuse of components is the most important thing usually. >The situation you're personally would go the following route - I know its not >the greatest technical nor performance wise - but the price/performance for >this upgrade can't be beat (it is also what I'm running, so I can tell you >its rock solid) > >Start with the 939Dual board from Asrock (lowend asus). Its based around a SIS >chipset and has the huge advantage that you can not only keep your memory but >also the graphics card. Almost all boards that support dual core are pci >express. This board also has a 16x PCI-Express slot, so you can later upgrade >to such a card (or like me, run a dual head setup with a 8xAGP and a >16xPCI-Express card). In addition to that, the board is dirt cheap - less >than $70. >http://www.asrock.com/support/CPU_Support/show.asp?Model=939Dual-SATA2 >It runs Centos4U1, FC3,4,5T1 without any issue. Centos 3.5 and Solaris 10 both >didn't like the network card - but for a developers workstation you can just >buy any old pci ethernet card without much thinking about it. > >Then take a Athlon 64 - X2 4400. Its less than $500 boxed. The next step up >would be the X4600 but with only 200Mhz more and half the cache I doubt it >will be worth the extra $130 you pay for it. > >That CPU/board combo should give you a nice performance boost since you got >the same clock per core but dual core and twice the cache per core... You >won't see half the compile time though because one of the slowest thing these >days is linking - and that's always done single threaded. > >Peter. > > > From PerSteinar.Iversen at hio.no Wed Dec 7 22:53:57 2005 From: PerSteinar.Iversen at hio.no (Per Steinar Iversen) Date: Wed, 7 Dec 2005 23:53:57 +0100 (CET) Subject: Asus A8V Deluxe Can't see full 4GB In-Reply-To: <1133990116.2970.11.camel@mjolnir> References: <200512061431.18180.tweeks@rackspace.com> <1133990116.2970.11.camel@mjolnir> Message-ID: On Wed, 7 Dec 2005, John Morris wrote: > On Tue, 2005-12-06 at 14:31, tweeks wrote: > >> I've got a guy running an Asus A8V Deluxe w/Ath64 3000+ and 4 sticks of 1GB >> and his setup (with either RH-EL3 64bit or RH-EL4 64bit) seems to be >> exhibiting the classic x86/32bit problem of not being able to map the memory >> around the 512MB PCI memory/chipset mapping. I'm not sure about the 2.4 >> kernel, but I would think that the 2.6 64bit kernel (or the motherboard) >> could map that memory high (beyond the 32 bit range) to access all 4GB, but >> nothing stands out on how I should go about doing this (BIOS seems void of >> any such "settings", and I know of no OS/kernel (sysctl) parameters that can >> do this). > > Look in your manual, my version 3 copy says, on page 4-17, that it has > an option "DRAM Over 4G Remapping" that defaults to disabled. My > machine only has a pair of 1GB sticks in it currently so I can't test > it. Would like to know if I'd get 4GB instead of 3.5GB so could ya > report back? I got one of these with 4GB RAM. If the remapping option is turned on nearly everything works, apart from the ethernet. Depending on the kernel the network either just does not work or the kernel emits large numbers of errors on the console, complaining about some IRQ problem (it is hard to read the messages as they scroll very rapidly). The network card on my machine is: 00:0a.0 Ethernet controller: Marvell Technology Group Ltd. 88E8001 Gigabit Ethernet Controller (rev 13) It is not clear to me if this is a driver problem or intrinsic to this design. When running with less memory available the machine is very pleasant and stable. -psi From hahn at physics.mcmaster.ca Wed Dec 7 22:59:54 2005 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Wed, 7 Dec 2005 17:59:54 -0500 (EST) Subject: Opteron Vs. Athlon X2 In-Reply-To: <20051207220809.GF12271@cse.ucdavis.edu> Message-ID: > So what does "without" mean? That only 4GB is not allowed with ECC? > Or that ECC doesn't work at all. Or that it works but you don't > actually get the error correction? my understanding is that unbuffered ECC dimms simply provide 72b wide storage rather than 64. if the bios doesn't configure the k8's memory controller to do the ECC dance, the bits just evaporate (or maybe they all get pulled high/low.) putting 64/72 in the dimms SPD would make sense. > I've seen wording in amd64 manuals (of the older generation) that > I read to mean that the dimms would work (the CPU would be able > to read/write to memory) but not provide ECC. I suspect that ECC works in all K8's, but that the bios leaves ECC unconfigured if it sees, say, a sempron CPUID. even the s754 pinout DOES contain the memcheck[7:0] bits for ECC, though. AMD might leave those pins unconnected, but I suspect they wouldn't bother. > Things seemed really murky for the athlon, athlon XP, and earlier > AMD64 motherboards I looked at. I was glad that the opteron was the big change is onboard memory controller, and so the question is where the different model become distinct - die, packaging, cpuid? if I were AMD, I'd just flash/etch the CPUID right before shipping, and expect the bios to respect my product positioning... regards, mark hahn. From b.j.smith at ieee.org Wed Dec 7 23:15:00 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Wed, 7 Dec 2005 15:15:00 -0800 (PST) Subject: Asus A8V Deluxe Can't see full 4GB In-Reply-To: <1133990116.2970.11.camel@mjolnir> Message-ID: <20051207231500.35172.qmail@web34109.mail.mud.yahoo.com> John Morris wrote: > Look in your manual, my version 3 copy says, on page 4-17, > that it has an option "DRAM Over 4G Remapping" that defaults > to disabled. Not having an A8V, that's _exactly_ what I was talking about with the remapping. BTW, I wouldn't be surprised if the reason why it is disable is because the NT kernel (non 2K[3] Advanced Server) takes a real issue with it. > My machine only has a pair of 1GB sticks in it currently so I > can't test it. Would like to know if I'd get 4GB instead of > 3.5GB so could ya report back? He should. The last 512MiB will be mapped at 4-4.5GiB, at least with a x86-64 kernel. -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From b.j.smith at ieee.org Thu Dec 8 00:01:37 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Wed, 7 Dec 2005 16:01:37 -0800 (PST) Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- providing technical information In-Reply-To: Message-ID: <20051208000138.44732.qmail@web34114.mail.mud.yahoo.com> Mark Hahn wrote: > arrg! this is the sort of thing that gets my goat - you > claim that if the capacity and performance numbers match, > then the only difference between the drives must be some > sort of "enterprise" testing. but without hard facts from > the vendor, this is just speculation on your part. sure, > it could make sense, and might even make Occam happy. but > not QED, still speculation. It first started almost 3 years ago I read an article in StorageReview (IIRC) about the coming "near-line" enterprise disk. They interviewed the [former] Quantum product line manager and talked about the process in which they test their commodity capacities for tolerances. They then set aside the units for different warranty and packaging. At first, they just did it for integrators and VARs -- non-desktop OEM uses. He was talking about the "critical mass" of this, as more and more vendors were not happy with what the enterprise capacities were delivering, and that was driving the new model. Since then I've talked to product managers at Seagate and Hitachi. They both told me how all of the products roll of the same mechanics line, except for the traditional enterprise capacities, which are clearly fabbed with far greater precision and better materials. In talking to the Seagate rep about the new 60C tolerance process, he explicitly told me that their forthcoming (at the time) NL35 line was rolling off the same line as those destined for the 7200.7 and, forthcoming, 7200.8. Hitachi has made similar statements with regards to their product lines. Most people don't know that Hitachi fabs many of Western Digital's drives, which was previously done by IBM before they sold to Hitachi GST. I'm not sure who Western Digital is using for some of their newer models though. > translation: some disks have MTBF of X and capacity of C. > this other disk has capacity C, therefore it has MTBF of X. My point _continues_ to be that if an "enterprise rated" drive of commodity 100-400GB capacity is 1.0M hours MTBF, then there's _no_way_ a standard OEM/desktop version could be even 1.0M hours MTBF. That's _all_ my point was. It is very safe to assume it's still 0.4M hours (50,000 restarts @ 8 hours/day) MTBF and remains unchanged. In the "best case" you could even say maybe 0.6-0.7M hours MTBF. Do *NOT* expect >1.0M hours MTBF with these drives. Lastly, "real world" experience in just the last year at 2 major Fortune 100 companies with thousands of desktops has shown that OEM/desktop products have failure rates _much_, _much_ higher! ;-> > why does "rest assured" always mean "I say"? If you have anything to contribute other than to be negative and dismiss the references and logical presentation of all this, then please do. I'm sorry I can't give you the absolute exact answer you want, but I have presented technical information with exacting vendor documents that do as best as I can, and everything else is logical presentation of those facts. If you are wanting me to point to some vendor's documentation that says, "yes, our drives suck -- we're not even making our MTBFs on OEM/desktop units anymore, so we don't even bother publishing them" -- sorry, not going to obviously happen. ;-> > 8, 14, who's counting? do you have any real backup for the > claim (afaikt) that reliability is strictly a function of > power cycles? After IBM put 5 platters in its infamous 75GXP (something Hitachi recently did in its new 500GB version -- stupid IMHO), there was a number of storage articles on all this. IBM went so far as to putting a "hard" warranty limitation on usage over 14x5. Now I seriously doubt that IBM could enforce that on end-users, but it surely did on OEMs -- especially integrators. That's really the "apex" of all this, almost economics at work. At 75GB, commodity capacity clearly overtook enterprise significantly. Furthermore, with the massive sales of more and more desktops than total in servers, and the emergence of ATA as an option more and more took -- even the low-cost, low-margin of commodity disk offered a volume that was too great to ignore. So, again, as more and more server OEMs and integrators looked to commodity capacities for 24x7 network-managed use, the manufacturers started offering the options. You can't get the capacities you want in a truly enterprise-quality fabricated disk -- the mechanics are far more exacting, the materials far more costly. > what if you put a "desktop" drive into a nice rack-mounted, > UPS-protected, machineroom at a constant 20C with great > airflow? I've got a bunch of servers (netboot but swap and > /tmp on local disks) that have been in use for ~3 years and > smartctl tells me the disks have seen ~100 power cycles. It's more than just heat. The new materials now raise the ambient tolerance of commodity disks to 60C from the traditional 40C -- far more in-line with traditional 55C enterprise tolerances. Vibration of the higher capacity platters is a serious issue. Something that vendors have to be more exacting on when they deal with 10+Krpm disks, but not so much on commodity 7200rpm. And then there's the materials -- from bearings to lube choice. From my understanding, that's the big cost factor there. One that really makes the cost prohibitive for those looking for the GB/$, whereas those who want it are willing to pay more ($) for less (GB). > warranties typically just say something weaselish like > "does not cover abuse". how would a vendor refuse warranty > replacement because of 24x7? would they just run SMART and > divide POH by power cycle count? How many times do I have to say this. I agreed that manufacturers/vendors can't do much about end-users. If you want to go with the Western Digital Caviar SE instead of the RE, or the Seagate Barracuda 7200.x instead of the Seagate NL35, then go for it! If the same "enterprise" versions of these capacities are rated for 1.0M hours MTBF, then I don't know what you expect out of the "regular" versions. *BUT* if a OEM or integrator is selling a 24x7 server or 24x7 network-managed product with its drives, then they can hold them to such. They can _void_ warranty because of the application which the drive is not warrantied for -- quickly citeable in the OEM/integrators application specificatons. That's the issue, and the reason for products such as the Seagate NL35, Western Digital Caviar RE, etc... > oh, no! I've abused node2's disk by averaging 289 hours > per cycle! (interestingly, current temp is 26C, lifetime > min/max is 13/36.) And that's your choice. I'm just stating what you want, how many hours MTBF you can expect to get out of the drives you are using. > so far, I'm not sure you've done anything to earn anyone's > trust on, for instance, the 50,000 * 8 hours => 400Khour MTBF > assertion. repeating it for years won't help... I sincerely hope you can take _all_ of the technical information and insight I have provided at my time and literally years of research and interviews with industry employees and appreciate it in the context it was presented. Otherwise, I don't know what to tell you. -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From loony at loonybin.org Thu Dec 8 00:26:44 2005 From: loony at loonybin.org (Peter Arremann) Date: Wed, 7 Dec 2005 19:26:44 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- providing technical information In-Reply-To: <20051208000138.44732.qmail@web34114.mail.mud.yahoo.com> References: <20051208000138.44732.qmail@web34114.mail.mud.yahoo.com> Message-ID: <200512071926.44685.loony@loonybin.org> On Wednesday 07 December 2005 19:01, Bryan J. Smith wrote: > Most people don't know that Hitachi fabs many of Western > Digital's drives, which was previously done by IBM before > they sold to Hitachi GST. I'm not sure who Western Digital > is using for some of their newer models though. Yeah - HGST makes the WD drives except possibly the Raptors. There are a few reports that say the Raptor line is made by Seagate but no one seems to know for sure. > After IBM put 5 platters in its infamous 75GXP (something > Hitachi recently did in its new 500GB version -- stupid > IMHO), there was a number of storage articles on all this. > IBM went so far as to putting a "hard" warranty limitation on > usage over 14x5. Now I seriously doubt that IBM could > enforce that on end-users, but it surely did on OEMs -- > especially integrators. There is actually a class action suite out there against IBM over the 75GB disks. Most people say it was because of too little space between the platters and therefore the little tollerance to shock/heat... Do you know if the 500GB Hitachi drives have the same reliability issues? OK - I've one more question... Does the reliability of a single disk really matter much in your environment anymore? We run a few hundret servers with anything from ancient 1GB SCSI disks to newer 400GB sata drives. Raid in one form or another. There were several instances where we lost data. One time we had a dead director and when it died the EMC somehow ended up writing bad data to the drive. Another time, we lost data because of a backplane in a E450 - it fried the circuit boards on several drives at once. Just recently we lost data on several SUN 3510. Seagate OEM fibre 15K rpm disks. I can however not remember a single time over the past several years where we lost data because of enough disks in a raid going bad at the same time. If anything happened, the hot spare always kicked in and the rebuilt went fine and everyone was happy. You throw out the bad disk and pop in a new one. Peter. From b.j.smith at ieee.org Thu Dec 8 00:45:37 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Wed, 7 Dec 2005 16:45:37 -0800 (PST) Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- providing technical information In-Reply-To: <200512071926.44685.loony@loonybin.org> Message-ID: <20051208004537.79873.qmail@web34108.mail.mud.yahoo.com> Peter Arremann wrote: > Yeah - HGST makes the WD drives except possibly the > Raptors. Actually, I've all but first-hand confirmed the WD Raptors roll off the same line as the Hitachi 10K Ultrastars. > There are a few reports that say the Raptor line is made by > Seagate but no one seems to know for sure. If this is true, I'd love to confirm it. WD shops around as they really don't have any fab production of their own anymore. > There is actually a class action suite out there against > IBM over the 75GB disks. Most people say it was because of > too little space between the platters and therefore the > little tollerance to shock/heat... Do you know if the 500GB > Hitachi drives have the same reliability issues? I don't know, I just know _no_one_ has put 5 platters in at 7200rpm since ... until now. With the increase in thermal tolerances to 60C ambient, they might be able to get away with it. Seagate uses 4x 133GB platters in their new 500GB 7200.9. > OK - I've one more question... Does the reliability of a > single disk really matter much in your environment anymore? > We run a few hundret servers with anything from ancient 1GB > SCSI disks to newer 400GB sata drives. Raid in one > form or another. There were several instances where we lost > data. One time we had a dead director and when it died the > EMC somehow ended up writing bad data to the drive. One of the key issues with hardware RAID solutions is keeping the firmware, kernel driver and user monitoring software in-sync. That bit me a long time ago on an ICP-Vortex and I've never repeated it. > Another time, we lost data because of a backplane in a E450 > - it fried the circuit boards on several drives at once. Hmmm, that's a good one. Never heard of a backplane frying. > Just recently we lost data on several SUN 3510. Seagate OEM > fibre 15K rpm disks. I can however not remember a single time > over the past several years where we lost data because of > enough disks in a raid going bad at the same time. I have. I've had a 2nd ATA disk fail on an 8-channel 3Ware card while it was in the middle of a RAID-5 rebuild. These were before the new crop of enterprise, commodity capacity disks. RAID-6 is starting to appear to combat this exact scenario. I've typically preferred RAID-10, in addition to the performance benefits, because it gave me almost 50% chance that a 2nd disk failure wouldn't be the other part of the mirror. > If anything happened, the hot spare always kicked in and the > rebuilt went fine and everyone was happy. You throw out the > bad disk and pop in a new one. As I put it in my engineering statistics class best long ago, "Sigma has a way of catching up with you over time." And damn if I didn't predict several risk scenarios that were ignored at a few clients. ;-> I pay the extra 10-20% for a Seagate NL35 or Western Digital Caviar RE for 24x7 systems that have RAID. -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From loony at loonybin.org Thu Dec 8 00:59:36 2005 From: loony at loonybin.org (Peter Arremann) Date: Wed, 7 Dec 2005 19:59:36 -0500 Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- providing technical information In-Reply-To: <20051208004537.79873.qmail@web34108.mail.mud.yahoo.com> References: <20051208004537.79873.qmail@web34108.mail.mud.yahoo.com> Message-ID: <200512071959.36478.loony@loonybin.org> On Wednesday 07 December 2005 19:45, Bryan J. Smith wrote: > Peter Arremann wrote: > > Yeah - HGST makes the WD drives except possibly the > > Raptors. > > Actually, I've all but first-hand confirmed the WD Raptors > roll off the same line as the Hitachi 10K Ultrastars. > > > There are a few reports that say the Raptor line is made by > > Seagate but no one seems to know for sure. > > If this is true, I'd love to confirm it. WD shops around as > they really don't have any fab production of their own > anymore. Please let me know if you figure it out. I used to know a bunch of guys at seagate but just like everyone else Seagate has outsources a bit in the last 2 years... > I have. I've had a 2nd ATA disk fail on an 8-channel 3Ware > card while it was in the middle of a RAID-5 rebuild. These > were before the new crop of enterprise, commodity capacity > disks. Did you have a hot spare? We usually run 0+1 or 10 and have a hot spare. > RAID-6 is starting to appear to combat this exact scenario. Yes - but performance on a rebuild is an issue there... > I've typically preferred RAID-10, in addition to the > performance benefits, because it gave me almost 50% chance > that a 2nd disk failure wouldn't be the other part of the > mirror. If I can I go 10 or 0+1 as well. But we use raid 5 where we don't have enough capacity and even there we had no issues. > > If anything happened, the hot spare always kicked in and > > the > > > rebuilt went fine and everyone was happy. You throw out the > > bad disk and pop in a new one. > > As I put it in my engineering statistics class best long ago, > "Sigma has a way of catching up with you over time." :-) > And damn if I didn't predict several risk scenarios that were > ignored at a few clients. ;-> > > I pay the extra 10-20% for a Seagate NL35 or Western Digital > Caviar RE for 24x7 systems that have RAID. *nods* We do that too now. We had problems RMAing regular pata/sata disks that we ran 24x7. Peter. From b.j.smith at ieee.org Thu Dec 8 01:03:43 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Wed, 7 Dec 2005 17:03:43 -0800 (PST) Subject: SuperMicro H8SSL-i (ServerWorks HT1000) -- providing technical information In-Reply-To: <200512071959.36478.loony@loonybin.org> Message-ID: <20051208010343.61257.qmail@web34110.mail.mud.yahoo.com> Peter Arremann wrote: > Did you have a hot spare? We usually run 0+1 or 10 and have > a hot spare. Yes. Unfortunately, it was on an older Escalade 7000 series and RAID-5 rebuilds take much longer (and run much slower when deprecated) than a 3Ware 9000 series. > Yes - but performance on a rebuild is an issue there... Not with a PowerPC or XScale on-board. > If I can I go 10 or 0+1 as well. But we use raid 5 where we > don't have enough capacity and even there we had no issues. The chance is slim, I agree. But as I said before, sigma has a way of catching up to you. > *nods* We do that too now. We had problems RMAing regular > pata/sata disks that we ran 24x7. Hey, I just want to give people options and I try to do my best to explain things -- such as what the more enterprise options are. -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From maurice at harddata.com Thu Dec 8 20:26:10 2005 From: maurice at harddata.com (Maurice Hilarius) Date: Thu, 08 Dec 2005 13:26:10 -0700 Subject: Opteron Vs. Athlon X2 In-Reply-To: <20051206035848.GP27230@cse.ucdavis.edu> References: <4394B3C6.5050009@speakeasy.net> <200512051935.34685.loony@loonybin.org> <1133840534.5114.7.camel@bert64.oviedo.smithconcepts.com> <20051206035848.GP27230@cse.ucdavis.edu> Message-ID: <439896E2.7050103@harddata.com> Bill Broadley wrote: >So I find it hard to believe where any workstation would need PCI-X >unless there's some strange pci-x only card required. 8 SATA ports >are also relatively easy to come by on the motherboard. > > > RAID cards, at least until March 2006 -- With our best regards, Maurice W. Hilarius Telephone: 01-780-456-9771 Hard Data Ltd. FAX: 01-780-456-9772 11060 - 166 Avenue email:maurice at harddata.com Edmonton, AB, Canada http://www.harddata.com/ T5X 1Y3 From bill at cse.ucdavis.edu Fri Dec 9 00:04:47 2005 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Thu, 8 Dec 2005 16:04:47 -0800 Subject: Opteron Vs. Athlon X2 In-Reply-To: <439896E2.7050103@harddata.com> References: <4394B3C6.5050009@speakeasy.net> <200512051935.34685.loony@loonybin.org> <1133840534.5114.7.camel@bert64.oviedo.smithconcepts.com> <20051206035848.GP27230@cse.ucdavis.edu> <439896E2.7050103@harddata.com> Message-ID: <20051209000447.GH2948@cse.ucdavis.edu> On Thu, Dec 08, 2005 at 01:26:10PM -0700, Maurice Hilarius wrote: > Bill Broadley wrote: > > >So I find it hard to believe where any workstation would need PCI-X > >unless there's some strange pci-x only card required. 8 SATA ports > >are also relatively easy to come by on the motherboard. > > > > > > > RAID cards, at least until March 2006 I just talked to Maurice offline. There are various pci-e solutions (JBOD, FRAID, and real RAID) that are (and have been) available. I've been building 16 disk servers out of the Areca-1260. The 1260 has: * x8 pci-e bus (2GB/sec + 2GB/sec) * 256MB dimm * Intel RAID 6 engine (forget the part number) * Optional battery backup * Network interface for management * Can manage i2c enclosures I'm pretty happy with it, I have 2 16 * 400GB disk servers, and a few other folks I know have them (at least one 16*500GB). One feature I particularly like is you can upgrade the BIOS from a webbrowser. The driver has been in the linux kernel for awhile, some helpful folks even run a yum repository with the areca driver support. -- Bill Broadley Computational Science and Engineering UC Davis From b.j.smith at ieee.org Fri Dec 9 01:20:47 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 8 Dec 2005 17:20:47 -0800 (PST) Subject: Opteron Vs. Athlon X2 In-Reply-To: <20051209000447.GH2948@cse.ucdavis.edu> Message-ID: <20051209012047.87222.qmail@web34115.mail.mud.yahoo.com> Bill Broadley wrote: > I just talked to Maurice offline. There are various pci-e > solutions (JBOD, FRAID, and real RAID) that are (and have been) > available. I've been building 16 disk servers out of the > Areca-1260. > The 1260 has: > * x8 pci-e bus (2GB/sec + 2GB/sec) FYI, I was under the impression that each x1 channel is 0.125GBps bi-directional (0.25GBps effective), meaning an x8 is 1.0GBps bi-directional (2.0GBps effective). > * 256MB dimm > * Intel RAID 6 engine (forget the part number) IOP332 or IOP333 -- 400+MHz X-Scale (superscalar ARM). They kick serious @$$ and prevent all that redundant load going back and forth from I/O to memory to CPU back to memory and down I/O again, just to do a XOR calculation (the XOR itself is nothing, the CPU can do them quickly). If you're building a system that is a dedicated storage device, that's one thing, and you use software RAID -- but if you're building a server that is servicing clients, it's nice to keep the storage processing from hogging I/O that could be used for network and other services. Intel is planning to start putting the XScale logic right in the southbridge chips on its server solutions, so then OS-based RAID (leveraging the southbridge) would be far more viable. > * Optional battery backup > * Network interface for management > * Can manage i2c enclosures > I'm pretty happy with it, I have 2 16 * 400GB disk servers, > and a few other folks I know have them (at least one 16*500GB). How are the Linux drivers and user-space support? I've never used the Areca so I'm very interested. My use of 3Ware 7000/8000 series is more about the proven drivers and user-space support. I'd be happy to find a PCIe solution that is equal to the 3Ware for PCI-64/X. > One feature I particularly like is you can upgrade the BIOS > from a webbrowser. > The driver has been in the linux kernel for awhile, some > helpful folks even run a yum repository with the areca driver > support. Good to know. But how is the user-space support? -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From bill at cse.ucdavis.edu Fri Dec 9 01:49:40 2005 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Thu, 8 Dec 2005 17:49:40 -0800 Subject: Opteron Vs. Athlon X2 In-Reply-To: <20051209012047.87222.qmail@web34115.mail.mud.yahoo.com> References: <20051209000447.GH2948@cse.ucdavis.edu> <20051209012047.87222.qmail@web34115.mail.mud.yahoo.com> Message-ID: <20051209014940.GN2948@cse.ucdavis.edu> On Thu, Dec 08, 2005 at 05:20:47PM -0800, Bryan J. Smith wrote: > FYI, I was under the impression that each x1 channel is > 0.125GBps bi-directional (0.25GBps effective), meaning an x8 > is 1.0GBps bi-directional (2.0GBps effective). Your impression is wrong. For a pretty reasonable explanation check: http://en.wikipedia.org/wiki/Pci-e A few quotes: First-generation PCIe is often quoted to support a data-rate of 250 MB/s in each direction, per lane. This figure is a calculation from the physical signalling-rate (2500 Mbaud) divided by the encoding overhead (10bits/byte.) ... Note that the datarate already includes the overhead (10 bits per byte). As for real world practical bandwidth: Like other high-speed serial interconnect systems, PCIe has significant protocol and processing overhead. Long continuous unidirectional transfers (such as those typical in high-performance storage controllers) can approach >95% of PCIe's raw (channel) data-rate. Because the channel is full duplex it's easier to get a higher percentage of the full bandwidth (unlike PCI-x.) > They kick serious @$$ and prevent all that redundant load > going back and forth from I/O to memory to CPU back to memory > and down I/O again, just to do a XOR calculation (the XOR > itself is nothing, the CPU can do them quickly). If you're > building a system that is a dedicated storage device, that's > one thing, and you use software RAID -- but if you're > building a server that is servicing clients, it's nice to > keep the storage processing from hogging I/O that could be > used for network and other services. We have had this discussion before. Not sure how a few hundred MB/sec of I/O is supposed to eat up 4GB/sec of Pci-e bandwidth. Keep in mind that PCI-e isn't shared. To talk to for instance another PCI-e device you are using seperate lines (again unlike pci-x). So PCI-e is a point to point duplex connection, reads and writes from multiple devices do not compete for bandwidth, nor do they have to negotiate for the bus. To avoid a repeat of previous arguments please post ACTUAL numbers showing the superiority of hardware RAID. I don't deny it's possible, but without real numbers the rest is hand waving. I've sustained well over a GB/sec of I/O with an opteron system, I've not experienced the "hogging I/O" problem. > How are the Linux drivers and user-space support? I've never The linux drivers seem fine, I've not played with the user-space tools, so far just the web interface. I know they are out there but I've not used them. Can anyone else on the list comment? > used the Areca so I'm very interested. My use of 3Ware > 7000/8000 series is more about the proven drivers and > user-space support. I'd be happy to find a PCIe solution > that is equal to the 3Ware for PCI-64/X. With software RAID I can't tell the difference between 3ware and areca, I've don't have any extensive production use of hardware RAID on either. Not since I lost a few filesystems to a buggy 3ware hardware raid driver back in the 6800 days. Of course 3ware has gotten much better since then. > Good to know. But how is the user-space support? Sorry, no experience. -- Bill Broadley Computational Science and Engineering UC Davis From b.j.smith at ieee.org Fri Dec 9 02:56:01 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 8 Dec 2005 18:56:01 -0800 (PST) Subject: Opteron Vs. Athlon X2 In-Reply-To: <20051209014940.GN2948@cse.ucdavis.edu> Message-ID: <20051209025601.33536.qmail@web34103.mail.mud.yahoo.com> Bill Broadley wrote: > Your impression is wrong. For a pretty reasonable > explanation check: > http://en.wikipedia.org/wiki/Pci-e Actually, I don't always trust Wikipedia, so I hit the PCI Standards Group (PCISG). Sure enough, you were correct: http://www.pcisig.com/news_room/faqs/faq_express/ 250MBps in each direction. So PCIe x8 is, indeed, 2.0GBps + 2.0GBps. So I stand corrected. But, FYI, the Areca 1200 series uses the Intel IOP332 which actually uses a PCI-X to PCIe bridge internally, with the hardware on the PCI-X side (http://developer.intel.com/design/iio/iop332.htm). So I'm sure they are getting no where near that. But it's quite enough to handle any stream of data, regardless. > Because the channel is full duplex it's easier to get a > higher percentage of the full bandwidth (unlike PCI-x.) Yes, I've noted a few benchmarks of the Areca 1100 series (PCI-X 1.0) versus the 1200 series (PCIe x8) showing the latter to be 5-10% faster. > We have had this discussion before. Not sure how a few > hundred MB/sec of I/O is supposed to eat up 4GB/sec of > Pci-e bandwidth. Because you're not writing several, additional and redundant copies to the disk in the case of RAID-1 or 10, or pushing every single bit from memory to CPU back to memory just to calculate parity. It may seem like 200-300MBps doesn't make a dent on I/O or CPU interconnect that is in the GBps, but when you're moving enough redundant copies around for just mirroring or XORs, it does detract from what else your system could be doing. I agree it's getting to the point that hardware RAID is less and less of a factor, as the I/O and CPU-memory interconnects can cope much better. But before PCI-X (let alone PCIe or Opteron/HyperTransport) and multi-GBps memory channels it was pretty much a major factor. > Keep in mind that PCI-e isn't shared. To talk to for instance > another PCI-e device you are using seperate lines (again unlike > pci-x). I know, that's why I want to see more PCIe hardware RAID. The Areca's are very nice, since Intel offers the IOP332 (and IOP333) with a built-in PCI-X to PCIe bridge (although it would be nice if they were PCIe native), which they use . Supposedly the 3Ware/AMCC 9550SX PowerPC controller is also capable of PCIe, like > So PCI-e is a point to point duplex connection, reads and > writes from multiple devices do not compete for bandwidth, > nor do they have to negotiate for the bus. Yes, I know, that's why PCIe is a great option short of having dual (or even quad, let alone 6) PCI-X channels. Any Opteron solution with the AMD8131 has 2 PCI-X channels. The HP DL585 has 4, the Sun Sunfire v40z has 6 (4 slots at the full 133MHz PCI-X. There is also the AMD8132 which is PCI-X 2.0 (266MHz), although it's adoption is surely to be limited by the growing adoption of PCIe. Once PCIe x4 and x8 cards become commonplace (which is almost true), there will be little need for PCI-X other than legacy support. I'm still hoping to see a low-cost PCIe x4 SATA RAID card, 2 or 4 channel, because the current entry point seems to be $400+. > To avoid a repeat of previous arguments please post ACTUAL > numbers showing the superiority of hardware RAID. I don't > deny it's possible, but without real numbers the rest is hand > waving. I've sustained well over a GB/sec of I/O with an > opteron system, I've not experienced the "hogging I/O" problem. Most of the reviews I've seen have not done so. At the most the CPU utilization and RAID-5 performance is where software RAID can't keep up, because they are trying to push down redundant copies (RAID-1 and 10) or deal with pushing a lot of data through the CPU just for XOR, and can't meet the Areca 1100/1200. There were some reviews at GamePC.COM in the summer/fall, but they leave much to be desired (especially since their 3Ware 9500S tests were running with the old, buggy firmware). I'm still waiting for someone to show off a quality, multi-GbE setup with dozens of NFS clients, and some sustained, multi-client bonnie benchmarks. I don't deny that a quality dual-Opteron with these new PCIe x4/x8 cards are making software more and more viable versus hardware. I would really like to see a serious set of benchmarks myself to show how much the gap has widened (if not been virtually eliminated). > The linux drivers seem fine, I've not played with the > user-space tools, so far just the web interface. How is the web interface? What do you have to install on the host to get to it? > I know they are out there but I've not used them. Can > anyone else on the list comment? > With software RAID I can't tell the difference between > 3ware and areca, You mean you're using the Areca for software RAID? You're throwing away all it's XScale power? If so, why don't you just use a cheaper card with 4 or 8 SATA channels? I noticed HighPoint now has one for PCIe, and Broadcom should as well (or at least shortly). It would probably be better performing because the XScale won't be in the way. Same deal on PCI-64/X when anyone uses 3Ware with software RAID, you'd be far better off going with a Broadcom RAIDCore if you're doing software RAID. The Broadcom doesn't have an ASIC/microcontroller between the bus and channels. E.g., the most common reason I see people say they went 3Ware was for hot-swap support. But hot-swap doesn't work well _unless_ you use its hardware RAID, hiding the individual disks from the OS behind it's hardware ASIC. That seems to be a repeat issue. > I've don't have any extensive production use of hardware > RAID on either. Not since I lost a few filesystems to a > buggy 3ware hardware raid driver back in the 6800 days. If you mean the RAID-5 support, 3Ware was _stupid_ to update the 6000 series to support RAID-5. It was _only_ designed for RAID-0, 1 and 10 -- hence why the new 7000 series was quickly introduced. But they did it to appease users, not smart IMHO. > Of course 3ware has gotten much better since then. I don't think they've ever been "bad," but they've done some _stupid_ (technically speaking ;-) things at times. Even I _never_ adopt a new 3Ware firmware release until I've seen it "well received." E.g., the 3Ware 9.2 firmware had a bug that killed write performance if you had more than 1 array. -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From bill at cse.ucdavis.edu Fri Dec 9 04:45:35 2005 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Thu, 8 Dec 2005 20:45:35 -0800 Subject: Opteron Vs. Athlon X2 In-Reply-To: <20051209025601.33536.qmail@web34103.mail.mud.yahoo.com> References: <20051209014940.GN2948@cse.ucdavis.edu> <20051209025601.33536.qmail@web34103.mail.mud.yahoo.com> Message-ID: <20051209044535.GT2948@cse.ucdavis.edu> On Thu, Dec 08, 2005 at 06:56:01PM -0800, Bryan J. Smith wrote: > But, FYI, the Areca 1200 series uses the Intel IOP332 which > actually uses a PCI-X to PCIe bridge internally, with the > hardware on the PCI-X side > (http://developer.intel.com/design/iio/iop332.htm). So I'm > sure they are getting no where near that. But it's quite > enough to handle any stream of data, regardless. That does not necessarily mean that it's only 1.0GB/sec (total for reads and writes). Since it's inside the controller and is of a known configuration (the bus has exactly 2 loads and is extremely short) it could easily be faster. I know the early AGP bridges for AGP native GPUs on pci-e ran at double speed to provide 4GB/sec instead of 2. After all why would intel design for an 8x slot if they only needed a 4x? In either case with only 16 disks even 1.0 GB/sec isn't going to be the limit. > Because you're not writing several, additional and redundant > copies to the disk in the case of RAID-1 or 10, or pushing > every single bit from memory to CPU back to memory just to > calculate parity. It may seem like 200-300MBps doesn't make > a dent on I/O or CPU interconnect that is in the GBps, but > when you're moving enough redundant copies around for just > mirroring or XORs, it does detract from what else your system > could be doing. Well with the memory system being 6.4GB/sec and the hypertransport connections being 8.0GB/sec. Say your writing 100MB to a 5 disk RAID-5. In the hardware RAID case: 100MB read from ram -> CPU copies it to the I/O space of the controller -> controller calculated raid-5 checksums -> 125 MB is written to the disks. Software RAID: 100MB read from ram -> cpu copies and checksums 125 MB to the controller -> controller writes 125 MB to the disks. So yes your 4GB/sec I/O bus sees another 25MB/sec so it's 1.25% more busy. Is that a big deal? Say it was a mirror, it's 5% more busy. So? I just checked my fileserver, it can RAID-5 checksum at 7.5GB/sec. So yes one cpu would be slightly more busy, just a few %. > I agree it's getting to the point that hardware RAID is less > and less of a factor, as the I/O and CPU-memory interconnects > can cope much better. But before PCI-X (let alone PCIe or > Opteron/HyperTransport) and multi-GBps memory channels it was > pretty much a major factor. I never noticed, I've done many comparisons between hardware and software RAID. Even a small fraction of an opteron or p4 is a substantially larger resource then the controllers which even today are only 32 bit and a few 100 MHz. > I know, that's why I want to see more PCIe hardware RAID. > The Areca's are very nice, since Intel offers the IOP332 (and > IOP333) with a built-in PCI-X to PCIe bridge (although it > would be nice if they were PCIe native), which they use . I'd like native as well, asthetically. It won't necessarily make a performance difference. Video cards deal with higher bandwidths then disks and the only difference I noticed in the benchmarks and reviews is that the pci-e native take a few less watts than the pci-e -> bridge -> agp. The 6600GT for instance requires a power connector for AGP and none for pci-e. > Supposedly the 3Ware/AMCC 9550SX PowerPC controller is also > capable of PCIe, like Strange, they don't mentioned anything like that on the 9550sx page. In any case the more the merrier, more competition is a good thing for the consumer. > Any Opteron solution with the AMD8131 has 2 PCI-X channels. > The HP DL585 has 4, the Sun Sunfire v40z has 6 (4 slots at > the full 133MHz PCI-X. There is also the AMD8132 which is > PCI-X 2.0 (266MHz), although it's adoption is surely to be > limited by the growing adoption of PCIe. Indeed, I'm puzzled by suns move to pci-x 2.0. Especially when it seems like the market has already moved to pci-e. HP, IBM, Tyan, Supermicro, Myrinet and related are all supporting Pci-e. It's great for consumers, for relatively cheaply you can get TWO 8GB/sec I/O slots on a motherboard. Sure you could buy a HP quad or a Sun quad... but I certainly can't afford one for home. > Once PCIe x4 and x8 cards become commonplace (which is almost > true), there will be little need for PCI-X other than legacy > support. I'm still hoping to see a low-cost PCIe x4 SATA > RAID card, 2 or 4 channel, because the current entry point > seems to be $400+. If you want cheap I'd switch to software RAID. I've seen pci-e 2 channel controllers for $60 or so. Or just get a new motherboard getting 8 ports on the motherboard is fairly easy on a $100-$150 motherboard. > Most of the reviews I've seen have not done so. At the most > the CPU utilization and RAID-5 performance is where software > RAID can't keep up, because they are trying to push down My experience is exactly the opposite, I've been shocked how many hardware RAIDs couldn't manage a sustained 50MB/sec for writes. > > The linux drivers seem fine, I've not played with the > > user-space tools, so far just the web interface. > > How is the web interface? Nice. It allows creating/destroying RAIDs, shows temperatures, allows you to blink a drives activity light to find it. Supports at least RAID 0,1,5, and 6. I didn't really check for 10. Both JBOD and passthru work well ;-). > What do you have to install on the host to get to it? Nothing, just connect a network cable to the RAID card. > You mean you're using the Areca for software RAID? You're > throwing away all it's XScale power? Heh, all what 333 MHz of 32 bit power roaring inside my RAID card? I want it to be managed just like all my other RAIDs, I want it to be able to migrate it to any of my other servers, and I want the monitoring to be as easy as crontab entry: diff /var/mdstat /proc/mdstat So that I get an email if there is any change. Sure I could install the Areca tools, use hardware RAID, setup some areca specific documentation, disaster recovery plan, monitoring, and management. Then forever tie that data and those disks to a specific type of controller. Of course then I'd have to do it for my, er, dunno 5 other kinds of hardware RAID cards. Or I could treat it just like every other software RAID I manage. In the case of disaster I can easily troubleshoot with any hardware that has enough SATA ports. > If so, why don't you just use a cheaper card with 4 or 8 SATA > channels? I noticed HighPoint now has one for PCIe, and > Broadcom should as well (or at least shortly). At the time for 16 ports the areca seemed easiest (single card), I'm certainly watching for cheaper solutions. > It would probably be better performing because the XScale > won't be in the way. Same deal on PCI-64/X when anyone uses > 3Ware with software RAID, you'd be far better off going with I've benchmark many 3ware hardware RAIDs that were slower than software raid on the same hardware. I've not benchmarked the new 9550sx though. If they bring out a pci-e version maybe I'll get the chance. > a Broadcom RAIDCore if you're doing software RAID. The > Broadcom doesn't have an ASIC/microcontroller between the bus > and channels. I don't see any reason why the XScale should slow anything down, all it has to do is copy data from PCIe to the sata interface. Certainly the rate it can do that should be higher than the rate it can do RAID-5 calculations. > E.g., the most common reason I see people say they went 3Ware > was for hot-swap support. But hot-swap doesn't work well > _unless_ you use its hardware RAID, hiding the individual > disks from the OS behind it's hardware ASIC. That seems to > be a repeat issue. Agreed, I'm not hot-swapping. I just have 3 5 disk RAIDs and one global spare. So during a failure I rebuild on the 16th disk and then do the swaps during the next maintenance window. Even 3ware's hotswap isn't perfect, I've seen disks get confused enough to hang the controller. Not that 3ware doesn't do it better than most of the non-RAID sata drivers controllers. > If you mean the RAID-5 support, 3Ware was _stupid_ to update > the 6000 series to support RAID-5. It was _only_ designed > for RAID-0, 1 and 10 -- hence why the new 7000 series was > quickly introduced. But they did it to appease users, not > smart IMHO. Heh, I got bitten by that one, after a fair amount of load testing I put them in production use. Lost a filesystem. 3ware claimed a fix would work. A month later the same thing. I've heard of other various BIOS/driver bugs since then. Software RAID on the other hand seems pretty bulletproof, it's widely tested, and very robust. I regularly have 400-500 day uptimes on busy production servers. > > Of course 3ware has gotten much better since then. > > I don't think they've ever been "bad," but they've done some > _stupid_ (technically speaking ;-) things at times. Even I I consider selling an expensive RAID card that is completely broken and writes corrupt data to the drive bad. I trusted them and was betrayed. Certainly hindsight is 20/20, shouldn't have done that. > _never_ adopt a new 3Ware firmware release until I've seen it > "well received." E.g., the 3Ware 9.2 firmware had a bug that > killed write performance if you had more than 1 array. Caution is warranted, more caution still (IMO) is to use software RAID. Hopefully 3ware can read/write a block without errors, and the rest I trust to software RAID. I also find the mdadm functionality quite desirable when compared to most hardware raid interfaces. Even if I found the exact functionality equivalent, it would still be specific to that controller. -- Bill Broadley Computational Science and Engineering UC Davis From b.j.smith at ieee.org Fri Dec 9 08:08:23 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Fri, 9 Dec 2005 00:08:23 -0800 (PST) Subject: Opteron Vs. Athlon X2 In-Reply-To: <20051209044535.GT2948@cse.ucdavis.edu> Message-ID: <20051209080823.62593.qmail@web34112.mail.mud.yahoo.com> Bill Broadley wrote: > After all why would intel design for an 8x slot if they > only needed a 4x? Don't know. Commonality? Most storage controllers are x8. BTW, there are two (2) PCI-X busses bridged in the IOP332 (and IOP333) to the PCIe x8 channels. The specs very much do say 133MHz PCI-X. ;-> > In either case with only 16 disks even 1.0 GB/sec isn't > going to be the limit. Agreed. > In the hardware RAID case: 100MB read from ram -> CPU > copies it to the I/O space of the controller -> > controller calculated raid-5 checksums -> 125 MB is written > to the disks. No, the CPU is virtually not involved other than to command/queue -- _no_ programmed I/O (PIO), only Direct Memory Access (DMA). 100MB is read from RAM and written directly as 100MB to memory mapped I/O (which is the block device) by the PCI-X or PCIe DMA controller. The on-board controller handles any mirroring/parity. > Software RAID: > 100MB read from ram -> cpu copies and checksums 125 MB to > the controller -> controller writes 125 MB to the disks. For mirroring, it's straight-forward (at least still DMA, just a redundant write): 100MB is read from RAM and written to two different memory mapped I/O (which is the block device) by the PCI-X or PCIe DMA controller. For RAID-5, it's a little more interesting, it's PIO: The CPU reads in 100MB from RAM and calculates XOR, writing the XOR parity calculated to memory -- e.g., 25MB for a 5 disk RAID-5. Then the 125MB is read from RAM and written directly as 125MB to memory mapped I/O by the PCI-X or PCIe DMA controller. ANAL NOTE: The software RAID-5 would commit a fraction of the data as a fraction of the XOR is calculated and stored in memory, and wouldn't wait until all XORs have been calculated and stored in memory. But still, the XOR operation is programmed I/O, requiring that parity not be committed until it has been calculated by the CPU. > So yes your 4GB/sec I/O bus sees another 25MB/sec so it's > 1.25% more busy. Is that a big deal? > Say it was a mirror, it's 5% more busy. So? Re-read the above analysis. For mirror, you push 2x over the interconnect, but at least it's still 100% DMA (no CPU overhead). For RAID-5, you only push 1/(N-1) over the interconnect (e.g., 1/(5-1) = 1/4th = 25% for a 5-disc RAID-5), but you push the _entire_ amount of data through the CPU for that extra write. > I just checked my fileserver, it can RAID-5 checksum at > 7.5GB/sec. So yes one cpu would be slightly more busy, > just a few %. I'm sorry, but the 3-issue ALU of the Opteron can_not_ do 7.5 billion LOAD (the slowest part), FETCH-DECODE-XOR (the most simplistic part) and STOR operations per second! It's much faster and far more efficient to do XOR with a dedicated ASIC or ASIC peripheral on a superscale I/O processor that is in-line and far closer to the actual storage channels. > I never noticed, I've done many comparisons between > hardware and software RAID. Even a small fraction of an > opteron or p4 is a substantially larger resource then the > controllers which even today are only 32 bit and a few 100 > MHz. When I started deploying some of my first ServerWorks ServerSet III chipset mainboards about 5-6 years ago for P3/Xeon, I saw significant gains with 3Ware cards as well as StrongARM-based SCSI RAID cards at RAID-10 over software RAID to ATA or SCSI channels. > Video cards deal with higher bandwidths then disks Video is an even better example of why CPU processing sucks for specialized operations! And video cards have dedicated Graphics Processor Unit (GPU) processors that manipulate data far better than any vector processing on a CPU. Same deal with storage I/O when it comes to ASICs and/or I/O Processors (IOP) -- they are built for handling data streams and real-time XORs than CPUs with traditional LOAD-FETCH/DECODE/XOR-STOR. Why do you think Intel is putting its XScale logic in forthcoming bridges? Intel learned long ago that processing with local memory closer to the end-device is going to be far higher performing because of no redundant copying/processing, reduced latency, etc... > Strange, they don't mentioned anything like that on the > 9550sx page. Why would engineering specifications for AMCC's PowerPC products it be on the 9550SX product page?!?!?! That's like Areca putting the engineering specifications of the Intel's XScale IOP products on their ARC 1100/1200 product pages!!! You go to developer.intel.com, _not_ areca.com! Go look up AMCC's PowerPC product lines -- that's what the GamePC.COM reviewers did! Their bus arbitrator and RAID engine micontrollers has PCI-X and PCIe support. Not strange at all! In fact, most of the time, end-user product vendors hide things, and it takes me awhile to track down the information and get to the actual engineering specification sheets of the original microelectronics vendor. In the case of Areca, that's Intel. In AMCC's case, they _are_ the lead developer of the PowerPC 400 series now (IBM remains as the foundary), so it's themselves -- but still a different set of pages (because they are _engineering_-level, _not_ product level). I'm sure AMCC wanted to get the 9550SX products out first in PCI-X before they tackled PCIe. Just like Areca's 1100 series line came out well in advance of their 1200 series. And 3Ware/AMCC has a history of _never_ announcing products in advance. > Indeed, I'm puzzled by suns move to pci-x 2.0. I'm sure their consumers want it -- at least a number. But Sun _does_ use the nForce Pro 2200/2050 chips on some of their newer models. > It's great for consumers, for relatively cheaply you can > get TWO 8GB/sec I/O slots on a motherboard. I agree, the traces are drastically reduced, which is the big cost with PCI-X. > If you want cheap I'd switch to software RAID. I've seen > pci-e 2 channel controllers for $60 or so. Or just get a new > motherboard getting 8 ports on the motherboard is fairly easy > on a $100-$150 motherboard. PCIe x1? No thanx, not for a server. > My experience is exactly the opposite, I've been shocked > how many hardware RAIDs couldn't manage a sustained > 50MB/sec for writes. That's because they use old i960 IOP30x controllers that weren't even viable 5 years ago. Many Tier-1 PC OEMs _still_ ship controllers with those -- quite sad. > Nice. It allows creating/destroying RAIDs, shows > temperatures, allows you to blink a drives activity light to > find it. Supports at least RAID 0,1,5, and 6. I didn't really > check for 10. Both JBOD and passthru work well ;-). > Nothing, just connect a network cable to the RAID card. Now that's sweet! Card-based web administration! Very nice. > Heh, all what 333 MHz of 32 bit power roaring inside my > RAID card? You're _already_ using the XScale -- even though you're using it in JBOD mode doing software RAID, _all_ data is going through it! You're better off using standard SATA channels on a Broadcom, HighPoint or some other card that does _not_ have intelligence in between the bus and the SATA channels. Why? Did you think that if you use JBOD, the XScale isn't used? Don't you know how these cards work?!?! BTW, with regards to the 333MHz, no offense, but you're what us semiconductor design engineers call a "MHz whore." Maybe it's because I've spend several years of my career designing memory and bus controllers at the layout level, but there is a _huge_ difference between a CPU and a microcontroller with ASICs designed specifically for something. In the case of the IOP33x superscalar ARM XScales, they are very much designed to efficiently put a data stream to many disks. > So that I get an email if there is any change. Sure I > could install the Areca tools, use hardware RAID, setup some > areca specific documentation, disaster recovery plan, > monitoring, and management. Then forever tie > that data and those disks to a specific type of controller. Sigh, now you're just argumentative. I've heard this argument over and over, and yet, it has never stuck. E.g., Several vendors drivers work with the stock kernel facilities and support many user-space utilities (e.g., smartd). The facilities for e-mail notification can be used to trap kernel messages with regards to various cards. Many vendors are conscience of Linux kernel and user-space integration -- 3Ware, Broadcom and LSI Logic have always been. You can choose the vendor utilities, or you can stick with the Linux utilities you know. Everytime I bring up 3Ware's smartd support, or the fact that kernel messages match to the utilities catch them, etc... -- I just get _ignored_. Why? Because 3Ware offers another option, and that's bad--*BAD* they tell me! ;-> It's like slamming nVidia for 3D drivers, when they actively support open source 2D drivers more than just about any other vendor. Don't knock nVidia when they offer more open source than most anyone else in the video card space, just because they offer additional that is not. > At the time for 16 ports the areca seemed easiest (single > card), I'm certainly watching for cheaper solutions. There's something you continue to miss here, on a _real_ hardware RAID card, the ASIC/microcontroller is "in the way" of the SATA channels. You would be better off going with a Broadcom, HighPoint or other solution -- I know HighPoint now has an 8-channel SATA card for PCIe. > I've benchmark many 3ware hardware RAIDs that were slower > than software raid on the same hardware. I've not benchmarked > the new 9550sx though. The 9500S had some nasty performance bugs in the 9.2 firmware if you had more than 1 array. If you had some JBODs on the same card, I'm sure you saw it. > I don't see any reason why the XScale should slow anything > down, all it has to do is copy data from PCIe to the sata > interface. But you're not sending data directly from the bus to the SATA channels. You're sending data to be buffered in the DRAM on the card, then the XScale puts it to the channels -- kinda like "Store'n Forward." In essence, you're getting the _same_performance_ as if the XScale was enabled for hardware RAID! But now you're putting the overhead on your CPU-memory interconnect, instead of on the XScale and its DRAM, which it's built for. This is what I'm talking about -- you're using hardware RAID how it is _not_ efficient. It's not really "slowing things down" -- but you're basically removing any reason for having the XScale. The XScale is going to handle the RAID operations in real-time, that's what it's ASIC peripherals around its core were designed for. Your CPU was designed to be generic -- and it excels at many different things in moderation. It sucks at doing one thing a lot -- especially wasteful is something as simple as XOR, and the whole LOAD-FETCH/DECODE/XOR-STOR cycle. > Certainly the rate it can do that should be higher than the > rate it can do RAID-5 calculations. Umm ... NOT! That's the problem, you don't understand how these IOPs work! They are *NOT* like traditional processors! They are superscalar microcontrollers with specialized ASIC peripherals that work on data in-line with their transfer. Remember that old RC5 crack contest and the EFF's entry that cracked it in just over a day? They build a specialized system with specialized, microcontroller-based ASICs that not only crunched the keys faster, but it was designed to fed them in more efficiently too. > Even 3ware's hotswap isn't perfect, I've seen disks get > confused enough to hang the controller. Because you have it in JBOD mode! That's really an OS issue, something 3Ware hides from the OS when you use its hardware RAID facilities. This is the stuff you software RAID advocates just don't get. You complain about 3Ware's hot-swap, but that's because you're using it in a way that it was NOT designed! And the problem with the hot-swap is because the OS can see the "individual disk" -- something that is _only_ when you use JBOD. It's a problem with the OS, _not_ the 3Ware card! ;-> > Not that 3ware doesn't do it better than most of the > non-RAID sata drivers controllers. Again, this is where I want to take a baseball bat and knock all you software RAID advocates over because you don't know the first thing about how hot-swap works on a 3Ware card. The only reason 3Ware is better in JBOD is because the 3Ware SATA controller buffers commands/reads/writes, whereas "dumb block" SATA controllers can't. So while the "dumb block" SATA controller says "um, I can't read/write" immediately back to the kernel, the 3Ware controller is going, "yeah, I've got that command queued, hold on" while the disk isn't available. But that doesn't mean that the Linux kernel will go "where the fsck did my disk go?" if you don't swap it very quickly when you have it in JBOD mode. 3Ware can do _nothing_ about that -- that's a 100% kernel issue! The 3Ware card basically just "keeps the kernel busy" queuing commands as long as it can. Hot-plug is supposed to address this, but it only goes so far. 3Ware _only_ offers hot-swap _when_ you use non-JBOD modes! > Software RAID on the other hand seems pretty bulletproof, > it's widely tested, and very robust. I regularly > have 400-500 day uptimes on busy production servers. Excuse me? MD has changed several times between 2.2, 2.4 and yet again with 2.6. LVM2 is a major problem, with massive race conditions. I've also had to spend a _lot_ of _manual_ time dealing with MD not finding the appropriate disks for a volume set. I have to heavily document things for _each_ system so we're not spending 15+ minutes trying to get a volume remounted/rebuilt. Especially on legacy BIOS/DOS disk labels, and LVM/LVM2 disk labels are not always safe (especially not LVM2). Stuff I _never_ worry about with hardware RAID and true hot-swap. > I consider selling an expensive RAID card that is > completely broken and writes corrupt data to the drive bad. > I trusted them and was betrayed. Certainly hindsight is 20/20, > shouldn't have done that. No offense, but if I had a dime for everytime I saw someone on the MD or LVM support lists say "this should work" and then they had to come back and say, "yeah, you'll have to re-create that" I'd be a very, very rich man. > Caution is warranted, more caution still (IMO) is to use > software RAID. Again, I've had so much toasted data thanx to software RAID I refuse to touch it. Everytime I try it, it's something new. Now mdadm has helped _drastically_ when it comes to managing MD, but it's still far from perfect. > Hopefully 3ware can read/write a block without errors, and > the rest I trust to software RAID. I also find the mdadm > functionality quite desirable when compared to most hardware > raid interfaces. I disagree. mdadm is much, much better than the old mdtools, but it's still not as nice as some of the hardware RAID products I trust. > Even if I found the exact functionality equivalent, it would > still be specific to that controller. Not an issue when you _only_ use controllers from 1-2 vendors that have excellent support histories. 3Ware has been superb on upward compatibility. And their 7000/8000 series were pretty flawless in my book. I've avoided the 9500S and the 9550SX is too new for me to consider. But for high-performance RAID-10, the 7000/8000 are just absolutely dreamy -- and have been for almost 5 years. -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From b.j.smith at ieee.org Fri Dec 9 09:09:58 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Fri, 9 Dec 2005 01:09:58 -0800 (PST) Subject: Microcontrollers/ASICs v. General Purpose Microprocessors -- WAS: Opteron Vs. Athlon X2 In-Reply-To: <20051209080823.62593.qmail@web34112.mail.mud.yahoo.com> Message-ID: <20051209090958.90946.qmail@web34104.mail.mud.yahoo.com> "Bryan J. Smith" wrote: > I'm sorry, but the 3-issue ALU of the Opteron can_not_ do > 7.5 billion LOAD (the slowest part), FETCH-DECODE-XOR (the > most simplistic part) and STOR operations per second! > ... > BTW, with regards to the 333MHz, no offense, but you're > ... there is a _huge_ difference between a CPU and a > microcontroller with ASICs designed specifically for > something. Why use a layer-3 switch? I mean, a layer-2 switch with a router on one port would be cheaper? Why put in a HBA for FC-AL or iSCSI? I mean, with a fast enough system, GbE is only about 100MBps, so why can't a system handle it? After all, TCP segmentation is only like addresses of windows into packets of a larger blocks -- much like sectors are to the windows of partitions on a storage array? Especially when these powerful, costly layer-3 switches only have a 200, 300 ... maybe 400MHz MIPS or ARM/XScale at their cores? So why would anyone consider an Intel IXP for networking equipment anymore than they would an Intel IOP for their storage equipment? I mean, why wouldn't Intel just sell 1 type of XScale, why do they make different versions for communications/networking (IXP) and system/storage (IOP)? The system can handle it, they don't need anything, ASICs and dedicated peripherals and their specialized instructions are useless. Heck, why aren't I using my server as a layer-3/4 router?! Throw in some 4-port GbE cards and I can do it better and cheaper than a layer-3 switch! So why not? -- Bryan P.S. There is a reason why 3Ware calls the Escalade 7000/8000 a "Storage Switch" -- it's design is what you can expect in a layer-3 router. A microcontroller core with ASIC peripherals and SRAM cache (just like network ASICs around a microcontroller core) designed for one thing, queue, move and replicate data. Now I would _not_ use a 7000/8000 series with only 1-4MB of SRAM for RAID-5 anymore than I would/could use a layer-3 Ethernet switch that only has 1-4MB of SRAM for NAT/PAT and network filtering duties. But there are more advanced cards, just like their are more advanced networking equipment -- all often powered by little more than one RISC/microcontroller core of a few hundred MHz. -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From ssy at hpcnc.cpe.ku.ac.th Fri Dec 9 09:52:56 2005 From: ssy at hpcnc.cpe.ku.ac.th (Somsak Sriprayoonsakul) Date: Fri, 09 Dec 2005 16:52:56 +0700 Subject: Using Intel compiler for EM64T on x86_64 Message-ID: <439953F8.5090908@hpcnc.cpe.ku.ac.th> Hello, Does anyone in the list ever use Intel Compiler, which has support for EM64T, on Opteron or Athlon64 machine? Does the generated binary totally compatible with Opteron? Also, how is the optimization being done? Does the program run faster compared with GCC or PGI? Sorry if the question seems too stupid. We're having a cluster machine with some Scientific applications which require Fortran 90 compiler, also it accepts only PGI Compiler or Intel Compiler so we can't use G95. Regards -- ----------------------------------------------------------------------------------- Somsak Sriprayoonsakul Scalable Computing Lab High Performance Computing and Networking Center Kasetsart University ssy at hpcnc.cpe.ku.ac.th ----------------------------------------------------------------------------------- From eugen at leitl.org Fri Dec 9 09:55:48 2005 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 9 Dec 2005 10:55:48 +0100 Subject: Microcontrollers/ASICs v. General Purpose Microprocessors -- WAS: Opteron Vs. Athlon X2 In-Reply-To: <20051209090958.90946.qmail@web34104.mail.mud.yahoo.com> References: <20051209080823.62593.qmail@web34112.mail.mud.yahoo.com> <20051209090958.90946.qmail@web34104.mail.mud.yahoo.com> Message-ID: <20051209095548.GO2249@leitl.org> On Fri, Dec 09, 2005 at 01:09:58AM -0800, Bryan J. Smith wrote: > Why use a layer-3 switch? I use one for VLAN isolation for each host, and because it has native intelligence so it can run diagnostics and be remotely administered. For Fast Ethernet, Layer 3 switches are not really expensive. > I mean, a layer-2 switch with a router on one port would be > cheaper? For storage (AoE) a dumb switch would be more than sufficient. > Why put in a HBA for FC-AL or iSCSI? I mean, with a fast > enough system, GbE is only about 100MBps, so why can't a > system handle it? You could certainly roll a pretty performant AoE or a Lustre or PVFS system over a rack of 4-disk servers, especially with a 10 GBit Ethernet uplink. > After all, TCP segmentation is only like addresses of windows > into packets of a larger blocks -- much like sectors are to > the windows of partitions on a storage array? Why would one want to use TCP/IP on a LAN storage network when naked Ethernet frames would do nicely? > Especially when these powerful, costly layer-3 switches only > have a 200, 300 ... maybe 400MHz MIPS or ARM/XScale at their > cores? They're only there to control the ASIC and to do remote administration (ssh, web, Java UI). > Heck, why aren't I using my server as a layer-3/4 router?! > Throw in some 4-port GbE cards and I can do it better and > cheaper than a layer-3 switch! So why not? But for the failure rate (requiring failover) and power requirements there are no reasons I'm aware of. Modern TCP/IP stacks can achieve wire speed on 10 GBit Ethernet -- assuming you don't need the CPU for much else. > P.S. There is a reason why 3Ware calls the Escalade > 7000/8000 a "Storage Switch" -- it's design is what you can > expect in a layer-3 router. A microcontroller core with ASIC > peripherals and SRAM cache (just like network ASICs around a > microcontroller core) designed for one thing, queue, move and > replicate data. Now I would _not_ use a 7000/8000 series > with only 1-4MB of SRAM for RAID-5 anymore than I would/could > use a layer-3 Ethernet switch that only has 1-4MB of SRAM for > NAT/PAT and network filtering duties. But there are more > advanced cards, just like their are more advanced networking > equipment -- all often powered by little more than one > RISC/microcontroller core of a few hundred MHz. Most reasons why software RAID works so well is because cheap CPU and large memory are commodity, and a spare is a lot cheaper and easier to find. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From b.j.smith at ieee.org Fri Dec 9 11:19:14 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Fri, 09 Dec 2005 06:19:14 -0500 Subject: Microcontrollers/ASICs v. General Purpose Microprocessors -- WAS: Opteron Vs. Athlon X2 In-Reply-To: <20051209095548.GO2249@leitl.org> References: <20051209080823.62593.qmail@web34112.mail.mud.yahoo.com> <20051209090958.90946.qmail@web34104.mail.mud.yahoo.com> <20051209095548.GO2249@leitl.org> Message-ID: <1134127154.4899.10.camel@bert64.oviedo.smithconcepts.com> On Fri, 2005-12-09 at 10:55 +0100, Eugen Leitl wrote: > I use one for VLAN isolation for each host, and because > it has native intelligence so it can run diagnostics > and be remotely administered. For Fast Ethernet, Layer 3 > switches are not really expensive. But you can create VLANs on most layer-2 switches too, you'll just (typically) need a router for them outside the switch. [ NOTE: I'm purposely using this as an example of something I do _not_ agree with -- I agree with you. ;-] > Why would one want to use TCP/IP on a LAN storage network when > naked Ethernet frames would do nicely? Oh no, not the CoRAID marketing again (sigh). @-p When CoRAID's AoE stack is feature complete and matching against typical, enterprise SAN, please come back. ;-> > But for the failure rate (requiring failover) and power requirements > there are no reasons I'm aware of. Modern TCP/IP stacks can achieve > wire speed on 10 GBit Ethernet -- assuming you don't need the CPU > for much else. Really? I was not aware at all. I didn't realize you could push 6,000,000 frames (1,000,000 jumbo frames) and the CPU-interconnect could handle it. > Most reasons why software RAID works so well is because cheap > CPU and large memory are commodity, and a spare is a lot cheaper > and easier to find. What's another $100-300 when you're talking a server costing $1,500+ already? Especially when you have many systems, so a few spares are available? -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From eugen at leitl.org Fri Dec 9 11:31:50 2005 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 9 Dec 2005 12:31:50 +0100 Subject: Microcontrollers/ASICs v. General Purpose Microprocessors -- WAS: Opteron Vs. Athlon X2 In-Reply-To: <1134127154.4899.10.camel@bert64.oviedo.smithconcepts.com> References: <20051209080823.62593.qmail@web34112.mail.mud.yahoo.com> <20051209090958.90946.qmail@web34104.mail.mud.yahoo.com> <20051209095548.GO2249@leitl.org> <1134127154.4899.10.camel@bert64.oviedo.smithconcepts.com> Message-ID: <20051209113150.GW2249@leitl.org> On Fri, Dec 09, 2005 at 06:19:14AM -0500, Bryan J. Smith wrote: > Oh no, not the CoRAID marketing again (sigh). @-p > > When CoRAID's AoE stack is feature complete and matching against > typical, enterprise SAN, please come back. ;-> Nono, not CoRAID. I understand they're making their money on disk shelves. Just a bunch of Linux rackmounts with frontal hotpluggable SATA drives (preferrably, four in 1U, so you can stripe over RAID 1 or RAID 5 with a hot spare). I've heard some nasty stories about data corruption in RAID 5, though haven't seen anything in person, yet. > Really? I was not aware at all. I didn't realize you could push > 6,000,000 frames (1,000,000 jumbo frames) and the CPU-interconnect could > handle it. I'm personally not running a cluster, but if you look into the Beowulf list archives, people are citing impressive numbers. > > Most reasons why software RAID works so well is because cheap > > CPU and large memory are commodity, and a spare is a lot cheaper > > and easier to find. > > What's another $100-300 when you're talking a server costing $1,500+ > already? Especially when you have many systems, so a few spares are > available? SoftRAID easily outperforms these $100-300 controllers. And $1000-1500 buys you a couple of servers, or a large pile of SATA disks. As a semi-hobbyist, it's a pretty simple decision to make. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From b.j.smith at ieee.org Fri Dec 9 16:27:03 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Fri, 09 Dec 2005 11:27:03 -0500 Subject: Microcontrollers/ASICs v. General Purpose Microprocessors -- WAS: Opteron Vs. Athlon X2 In-Reply-To: <20051209113150.GW2249@leitl.org> References: <20051209080823.62593.qmail@web34112.mail.mud.yahoo.com> <20051209090958.90946.qmail@web34104.mail.mud.yahoo.com> <20051209095548.GO2249@leitl.org> <1134127154.4899.10.camel@bert64.oviedo.smithconcepts.com> <20051209113150.GW2249@leitl.org> Message-ID: <1134145623.4899.23.camel@bert64.oviedo.smithconcepts.com> On Fri, 2005-12-09 at 12:31 +0100, Eugen Leitl wrote: > I'm personally not running a cluster, but if you look into the > Beowulf list archives, people are citing impressive numbers. Beowulf performance is _inversely_proportional_ to communication load. Applications scale far more linearly when the slower 10GbE are not used than the local, HyperTransport interconnects (assuming Opteron). From what I've seen, no one is getting anywhere close to 1GBps over layer-2 communication with 10GbE for clustering. I'm talking about when you're pushing enough service data around that you need local interconnect, because 1GBps won't cut it. Beowulf is not even applicable! Furthermore, the new, preferred solution is to use HTX (HyperTransport eXtension) Infinibind cards, which will give you about 1.8GBps real- world interconnect. Infiniband's protocol is far more efficient, plus (in raw transfer terms) it's faster than GbE. But even then, you don't use Beowulf for applications where you're taxiing the interconnect (database, file, etc...). > SoftRAID easily outperforms these $100-300 controllers. And $1000-1500 > buys you a couple of servers, or a large pile of SATA disks. > As a semi-hobbyist, it's a pretty simple decision to make. But not both -- I'm talking about when you are spending about $1,500 on the combination of server and disk, adding another $300 is a simple decision to make. If I just want a 2-disc mirror on a $750-1,000 server, then $100 is also a simple decision to make. A $1,500-2,000+, $300 is nothing. Potent I/O interconnect in a single-socket server -- with server-class 1GbE NIC(s), and a PCI-X or PCIe x4 or x8 slot for storage -- runs you about $750. Using a desktop nForce4's PCI by hijacking it's video PCIe x16 or using its extra PCIe x1 is not server-class -- especially not its NIC. Adding a brainless 2-channel 3Ware Escalade 8506-2 adds maybe $125. Less than 20%. If I'm looking at server-class dual-socket, then I'm looking more $1,500 for the system, maybe $2,000+ after disks. Adding a $300-400 storage controller is also going to be around 20% increase. Well worth it in my experience, especially after MD screw up after MD screw up, especially when it comes to clients supporting themselves. -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From b.j.smith at ieee.org Fri Dec 9 18:31:34 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Fri, 09 Dec 2005 13:31:34 -0500 Subject: Using Intel compiler for EM64T on x86_64 In-Reply-To: <439953F8.5090908@hpcnc.cpe.ku.ac.th> References: <439953F8.5090908@hpcnc.cpe.ku.ac.th> Message-ID: <1134153096.4899.30.camel@bert64.oviedo.smithconcepts.com> On Fri, 2005-12-09 at 16:52 +0700, Somsak Sriprayoonsakul wrote: > Hello, > Does anyone in the list ever use Intel Compiler, which has support > for EM64T, on Opteron or Athlon64 machine? Does the generated binary > totally compatible with Opteron? Also, how is the optimization being > done? Does the program run faster compared with GCC or PGI? > Sorry if the question seems too stupid. We're having a cluster > machine with some Scientific applications which require Fortran 90 > compiler, also it accepts only PGI Compiler or Intel Compiler so we > can't use G95. All your questions answered: http://www.swallowtail.org/naughty-intel.html It's not a matter of compatibility, Opteron passes with flying colors, even SSE3. But it's a matter of intentionally detecting the Opteron, and disabling functions -- even in version 8.0. 7.x was far worse. -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From hahn at physics.mcmaster.ca Sat Dec 10 05:08:34 2005 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Sat, 10 Dec 2005 00:08:34 -0500 (EST) Subject: Microcontrollers/ASICs v. General Purpose Microprocessors -- WAS: Opteron Vs. Athlon X2 In-Reply-To: <1134145623.4899.23.camel@bert64.oviedo.smithconcepts.com> Message-ID: > > I'm personally not running a cluster, but if you look into the > > Beowulf list archives, people are citing impressive numbers. > > Beowulf performance is _inversely_proportional_ to communication load. that's either tautological or wrong. > Applications scale far more linearly when the slower 10GbE are not used > than the local, HyperTransport interconnects (assuming Opteron). From oranges are easier to peel than apples. what was the question? 10GE is a different category of thing than HT. > what I've seen, no one is getting anywhere close to 1GBps over layer-2 > communication with 10GbE for clustering. I'm talking about when you're myri 10G does. > because 1GBps won't cut it. Beowulf is not even applicable! you have some weird ideas of what Beowulf is. > Furthermore, the new, preferred solution is to use HTX (HyperTransport > eXtension) Infinibind cards, which will give you about 1.8GBps real- are you trying to spell "InfiniPath"? > (in raw transfer terms) it's faster than GbE. But even then, you don't > use Beowulf for applications where you're taxiing the interconnect > (database, file, etc...). afaikt you're claiming that beowulf is low-bandwidth. that's just weird! From hahn at physics.mcmaster.ca Sat Dec 10 06:09:34 2005 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Sat, 10 Dec 2005 01:09:34 -0500 (EST) Subject: Opteron Vs. Athlon X2 In-Reply-To: <20051209080823.62593.qmail@web34112.mail.mud.yahoo.com> Message-ID: > > In the hardware RAID case: 100MB read from ram -> CPU > > copies it to the I/O space of the controller -> actually, the CPU PIO's a command packet to the controller which gives the command, size and pointer(s) to the data. the CPU never touches the actual data. IDE, net and scsi controllers are all broadly along these lines. > > controller calculated raid-5 checksums -> 125 MB is written > > to the disks. > > No, the CPU is virtually not involved other than to > command/queue -- _no_ programmed I/O (PIO), only Direct > Memory Access (DMA). 100MB is read from RAM and written > directly as 100MB to memory mapped I/O (which is the block the driver just says "hey, write this blob of 128K at offset X". > > Software RAID: > > 100MB read from ram -> cpu copies and checksums 125 MB to > > the controller -> controller writes 125 MB to the disks. > > For mirroring, it's straight-forward (at least still DMA, > just a redundant write): > > 100MB is read from RAM and written to two different memory > mapped I/O (which is the block device) by the PCI-X or PCIe > DMA controller. just two "hey..." commands. the source data (in host ram) is not replicated. > For RAID-5, it's a little more interesting, it's PIO: no, not really. > The CPU reads in 100MB from RAM and calculates XOR, writing well, the default would be 4x64k (and would consume <100 us assuming no cache hits.) > the XOR parity calculated to memory -- e.g., 25MB for a 5 > disk RAID-5. Then the 125MB is read from RAM and written > directly as 125MB to memory mapped I/O by the PCI-X or PCIe > DMA controller. again, probably smaller pieces, each with a separate command packet to the controller(s). don't forget the reads necessary for sub-stripe or non-stripe-aligned writes. > ANAL NOTE: The software RAID-5 would commit a fraction of > the data as a fraction of the XOR is calculated and stored in > memory, and wouldn't wait until all XORs have been calculated > and stored in memory. But still, the XOR operation is > programmed I/O, requiring that parity not be committed until > it has been calculated by the CPU. which is a trivial concern, since on any commodity system, the host is faster and has more available memory bandwidth than the IO system can manage in the first place. we're talking <100 us for the 5-disk, 64K chunk MD. > For mirror, you push 2x over the interconnect, but at least > it's still 100% DMA (no CPU overhead). > > For RAID-5, you only push 1/(N-1) over the interconnect > (e.g., 1/(5-1) = 1/4th = 25% for a 5-disc RAID-5), but you I think that was garbled - using HW raid saves 1/(n-1) or 20% of the bus bandwidth. which is not a scarce commodity any more. > push the _entire_ amount of data through the CPU for that > extra write. for the xor. which takes negligable time, even assuming no cache hits. > > I just checked my fileserver, it can RAID-5 checksum at > > 7.5GB/sec. So yes one cpu would be slightly more busy, > > just a few %. > > I'm sorry, but the 3-issue ALU of the Opteron can_not_ do 7.5 > billion LOAD (the slowest part), FETCH-DECODE-XOR (the most > simplistic part) and STOR operations per second! for a normal dual-opteron server, sure it can. r5 parity is actually easier than Stream's triad, which a single opteron can run at >4.5 GB/s. stream's reporting ignores write-allocate traffic, but MD does its writes through the cache, therefore can hit 6 GB/s. > It's much faster and far more efficient to do XOR with a > dedicated ASIC or ASIC peripheral on a superscale I/O > processor that is in-line and far closer to the actual > storage channels. it _could_be_ much faster. just as there's no question that GPU's _can_ do 3d graphics transforms faster than the host. whether it makes sense is a very different question, mainly cost. why should I spend 20% extra on my fileservers to leave cycles waste? especially since I could spend that money on more capacity, ram, slightly more "enterpris-y" disks, etc. > When I started deploying some of my first ServerWorks > ServerSet III chipset mainboards about 5-6 years ago for > P3/Xeon, I saw significant gains with 3Ware cards as well as > StrongARM-based SCSI RAID cards at RAID-10 over software RAID you bet. the p3 was in .2-.8 GB/s range, and disks of that era were around 30 MB/s apiece. now the host's memory bandwidth is 12x higher, but disks are only about 2x. > And video cards have dedicated Graphics Processor Unit (GPU) > processors that manipulate data far better than any vector > processing on a CPU. GPUs make sense if you're using your GPU to its limit a lot. certainly if you're a gamer, this is true. does a high-end GPU make a lot of sense for a generic desktop? no, actually, it doesn't - transparent windows is not a great justification for a $500 GPU. > LOAD-FETCH/DECODE/XOR-STOR. Why do you think Intel is > putting its XScale logic in forthcoming bridges? so they have something to point at to justify higher prices. but the main point is that the price will be only a very little bit higher, since transistors are practically free these days (bridges are probably pad-limited anyway). no where near your 20% increase to system price. > Intel learned long ago that processing with local memory > closer to the end-device is going to be far higher performing > because of no redundant copying/processing, reduced latency, > etc... if it can be had for near-zero marginal cost, sure. > > If you want cheap I'd switch to software RAID. I've seen > > pci-e 2 channel controllers for $60 or so. Or just get a > new > > motherboard getting 8 ports on the motherboard is fairly > easy > > on a $100-$150 motherboard. > > PCIe x1? No thanx, not for a server. oh yeah right: "if it's a server, it's needs to be gold plated." but for a 2ch controller, 250+250 really is plenty of bandwidth. > BTW, with regards to the 333MHz, no offense, but you're what > us semiconductor design engineers call a "MHz whore." I'm a "price/measured performance whore". that's why I like MD and dumb SATA controllers. > Maybe it's because I've spend several years of my career > designing memory and bus controllers at the layout level, but > there is a _huge_ difference between a CPU and a > microcontroller with ASICs designed specifically for > something. In the case of the IOP33x superscalar ARM > XScales, they are very much designed to efficiently put a > data stream to many disks. efficient? that's the beauty of commoditized hardware: you get 6 GB/s per opteron whether you need it or not. it's certainly tidier to have the controller perform the XOR, but since the host's CPU is faster and will otherwise sit idle, well... > Excuse me? MD has changed several times between 2.2, 2.4 and > yet again with 2.6. LVM2 is a major problem, with massive MD has had excellent on-disk compatibility (afaikr, but only two superblock versions). LVM is irrelevant. > No offense, but if I had a dime for everytime I saw someone > on the MD or LVM support lists say "this should work" and > then they had to come back and say, "yeah, you'll have to > re-create that" I'd be a very, very rich man. hmm, imagine seeing support traffic on a list devoted to support issues. > I've avoided the 9500S and the 9550SX is too new for me to > consider. But for high-performance RAID-10, the 7000/8000 > are just absolutely dreamy -- and have been for almost 5 > years. interesting - 3ware people say that until the 95xx's, their designs were crippled by lack of onboard memory bandwidth. From b.j.smith at ieee.org Sat Dec 10 12:46:18 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Sat, 10 Dec 2005 07:46:18 -0500 Subject: Opteron Vs. Athlon X2 In-Reply-To: References: Message-ID: <1134218778.5853.105.camel@bert64.oviedo.smithconcepts.com> On Sat, 2005-12-10 at 01:09 -0500, Mark Hahn wrote: > actually, the CPU PIO's a command packet to the controller > which gives the command, size and pointer(s) to the data. > the CPU never touches the actual data. IDE, net and scsi > controllers are all broadly along these lines. Yes, that's DMA. Yes, the CPU sends a PIO command. I didn't realize you were getting so anal on this. I meant that the CPU is _not_ doing PIO for the data transfer. When you do IDE/EIDE, as well as the ATA PIO modes 0-5, you _are_ doing PIO for the data transfer. Only when you are doing ATA DMA is it not. When you use software RAID-3/4/5/6 (anything with an XOR), it very much _is_ doing PIO because _every_ single byte goes through the CPU. > the driver just says "hey, write this blob of 128K at offset X". Yes, when it does the DMA transfer, I don't disagree. > just two "hey..." commands. the source data (in host ram) > is not replicated. But *2x* the load on your interconnect. Can you see me on that one? > no, not really. Not really? 100% of the data is slammed from memory into the CPU -- every single byte is worked on by the CPU _before_ the parity can be calculated. That is Programmed I/O, I don't know how much more it could be PIO than that! That's why I said consider software RAID XORs to be PIO! ;-> > well, the default would be 4x64k (and would consume <100 us > assuming no cache hits.) In any case, you're _not_ slamming 6.4GBps through that CPU for XORs. ;-> If you think so, you obviously don't know the first thing how general purpose, superscalar microprocessors work. > again, probably smaller pieces, each with a separate command packet > to the controller(s). don't forget the reads necessary for > sub-stripe or non-stripe-aligned writes. He he, I was trying to give you the "best case scenario." But yes, if you have to read all the way back through I/O from disk, it gets 10+ times worse. ;-> Thanx for being honest and forthcoming in that regard. ;-> > which is a trivial concern, since on any commodity system, the host > is faster and has more available memory bandwidth than the IO system > can manage in the first place. we're talking <100 us for the 5-disk, > 64K chunk MD. Once again, you're _not_ going to be able to even remotely slam 6.4GBps for XORs through that CPU. ;-> > I think that was garbled - using HW raid saves 1/(n-1) or 20% > of the bus bandwidth. which is not a scarce commodity any more. You just totally ignored my entire discussion on what you have to push through the CPU! A stream of data through the ALU of the CPU -- something _not_ designed with an ASIC peripheral that does that one thing and does it well! Something that you _would_ have on a hardware RAID controller. Even at it's "measly" 300-400MHz would do a heck of a lot more efficiently, without tying up the system. That's why Intel is putting its IOP-ASIC XScale processors into the southbridge of its future server designs. Because there's a lot to be gained by off-loading very simple, specialized operations away from the general purpose microprocessor that handles them far more inefficiently. > for the xor. which takes negligable time, even assuming no cache hits. The XOR -- FETCH/DECODE/EXECUTE -- yes, to a point. The LOAD/STORE? I'm sorry, I think not. Microprocessors are not designed to work on I/O efficiently. Microcontrollers with specific peripheral ASICs are. They can easily best a general microprocessor 10:1 or better, MHz for MHz. That's why Intel has several lines of XScale processors, not just one "general" one. > for a normal dual-opteron server, sure it can. r5 parity is actually > easier than Stream's triad, which a single opteron can run at >4.5 GB/s. > stream's reporting ignores write-allocate traffic, but MD does its > writes through the cache, therefore can hit 6 GB/s. Assuming it's in the L1 cache, _maybe_. At 2.6GHz with a 3-issue ALU and that the XOR operations can be (effectively) completed once per clock cycle in a SIMD operation, that's wishful thinking. > it _could_be_ much faster. just as there's no question that GPU's > _can_ do 3d graphics transforms faster than the host. whether it makes > sense is a very different question, mainly cost. why should I spend > 20% extra on my fileservers to leave cycles waste? But what if you're not wasting cycles? Not wasting precious interconnect? That's the main point I've been making -- what if you're slamming so much higher layer network traffic for services that your CPU and interconnect are already very busy? I don't disagree with you if you have a web server or something. But on a database or file server, no, I'm putting in an IOP (RAID-5) or custom ASIC (RAID-10). > especially since I could spend that money on more capacity, ram, > slightly more "enterpris-y" disks, etc. Nope, I see a balance for my database and file server applications. > you bet. the p3 was in .2-.8 GB/s range, and disks of that era > were around 30 MB/s apiece. now the host's memory bandwidth is 12x > higher, but disks are only about 2x. All the more reason to commit to disk ASAP, get it in its battery-backed DRAM, or capacitor backed SRAM, in case of a system failure. > GPUs make sense if you're using your GPU to its limit a lot. > certainly if you're a gamer, this is true. does a high-end GPU make a lot > of sense for a generic desktop? no, actually, it doesn't - transparent windows > is not a great justification for a $500 GPU. But we're not talking about transparent windows, we're talking about 3D, even if simple 3D! That's what you have when you start doing RAID-5. > so they have something to point at to justify higher prices. Sigh, that's argumentative and I'm not that dumb. > but the main point is that the price will be only a very little bit higher, > since transistors are practically free these days (bridges are > probably pad-limited anyway). no where near your 20% increase to system price. And that's a good thing, it brings the cost down by making it commodity. But that's also a sign that it's useful in the first place. ;-> > if it can be had for near-zero marginal cost, sure. In some applications where your CPU and interconnect aren't fully used, I agree, software RAID is fine from a performance standpoint. But when 20% gains you a 30+% improvement in database and file server performance -- not just a "disk I/O" benchmark, but how quickly I can serve 100+ clients -- I'm going to spend the dough! > oh yeah right: "if it's a server, it's needs to be gold plated." > but for a 2ch controller, 250+250 really is plenty of bandwidth. At 250+250, you're slowing down much of your interconnect just to burst the data through. Remember, just because you have 6.4GBps system interconnect does _not_ mean that it will only less than 5% to send down 250MBps. ;-> Or are you not familiar with how HT-to-PCIe bridges work? ;-> > I'm a "price/measured performance whore". that's why I like MD > and dumb SATA controllers. And if you're doing software RAID, I 100% agree! You don't want to put in a hardware RAID controller where the ASIC or IOP is "in the way." > efficient? that's the beauty of commoditized hardware: you get > 6 GB/s per opteron whether you need it or not. You get 6.4GBps *IDEAL* burst. You do *NOT* slam 6.4GBps of XORs through the CPU. Nor does a burst to a 250MBps downstream I/O bridge take less than 5% of your interconnect's time. ;-> Com'mon, I assumed you were smarter than that. ;-> > it's certainly > tidier to have the controller perform the XOR, but since the host's > CPU is faster and will otherwise sit idle, well... But what if your CPU and interconnect aren't? Again, I'm talking about heavy data manipulation here. > MD has had excellent on-disk compatibility (afaikr, but only two > superblock versions). LVM is irrelevant. So you just use MD on "raw," legacy BIOS/DOS partitions? That's just reciepie for disaster when someone removes a disk. > hmm, imagine seeing support traffic on a list devoted to support issues. I've been on the MD lists a long, long time. > interesting - 3ware people say that until the 95xx's, their designs > were crippled by lack of onboard memory bandwidth. No, it had _nothing_ to do with memory bandwidth. It had to do with memory _size_! Now you're just showing your ignorance. That's the problem with 2nd hand hearsay. I'm used to it with software RAID advocates. ;-> 3Ware 7000/8000 series cards have 1-4MB of SRAM, _not_ DRAM. Do you know the first thing about how SRAM differs from DRAM? It's the same reason why your layer-2 Ethernet switch can resolve MAC addresses at wire-speeds, _unlike_ a PC that does bridging. Same deal with 3Ware's Escalade 7000/8000. It's how it can replicate and resolve striping/mirroring at full volume set speed. That's why they are called "storage switches." But it sucks at XOR buffering, because it's only 1-4MB of SRAM. That's why the 9500S and, now, the 9550SX adds 128+MB of DRAM. Just like other "buffering" RAID cards. At this point, I'm not responding any further. It's clear that you don't want to see my points, even though I see many of yours. And you're making arguments that are not founded on accurate information, or "GHz/GBps whoring" everything. -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From b.j.smith at ieee.org Sat Dec 10 12:51:48 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Sat, 10 Dec 2005 07:51:48 -0500 Subject: Microcontrollers/ASICs v. General Purpose Microprocessors -- WAS: Opteron Vs. Athlon X2 In-Reply-To: References: Message-ID: <1134219108.5853.113.camel@bert64.oviedo.smithconcepts.com> On Sat, 2005-12-10 at 00:08 -0500, Mark Hahn wrote: > oranges are easier to peel than apples. what was the question? > 10GE is a different category of thing than HT. Exactly! ;-> > myri 10G does. I'm sorry, I meant 802.3 layer-2, not layer-2 in general (which is abstract). > you have some weird ideas of what Beowulf is. Apparently, because I'm used to using it to distribute apps that scale linearly because they are CPU bound, not I/O bound. > are you trying to spell "InfiniPath"? Actually, I meant InfiniBand technology (I guess I just finished a DNS discussion on another list -- DOH!), including all instances of it. > afaikt you're claiming that beowulf is low-bandwidth. that's just weird! Sigh. I meant it's _lower_ than having a direct HyperTransport interconnect between CPUs -- or some other, scalable interconnect. Beowulf is about cost versus such solutions when you need CPU more than I/O -- or at least at a much lower rate than local interconnects. -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From eugen at leitl.org Sat Dec 10 17:39:17 2005 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 10 Dec 2005 18:39:17 +0100 Subject: bonnie++ Sun Fire X2100 on Solaris 10 and CentOS Message-ID: <20051210173917.GQ2249@leitl.org> Here's a bonnie++ benchmark on a 2 GHz Sun Fire X2100 with Solaris 10 (Solaris Express B24) and Hitachi T7K250 (250 GByte, two 133 GByte platters) drives: Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP x2100 3G 53846 66 46587 13 6351 2 65372 85 94427 17 263.4 0 ------Sequential Create------ --------Random Create-------- -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 16 2881 10 +++++ +++ 22421 68 25269 77 +++++ +++ 3011 13 x2100,3G,53846,66,46587,13,6351,2,65372,85,94427,17,263.4,0,16,2881,10,+++++,+++,22421,68,25269,77,+++++,+++,3011,13 Here's ditto on CentOS 2.6.9-22.EL #1 Sat Oct 8 21:08:40 BST 2005 x86_64 x86_64 x86_64 GNU/Linux ersion 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP localhost.locald 3G 51481 93 53631 17 23445 5 39816 66 63635 7 198.3 0 ------Sequential Create------ --------Random Create-------- -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ localhost.localdomain,3G,51481,93,53631,17,23445,5,39816,66,63635,7,198.3,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++ Debian AMD64 forthcoming. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From hahn at physics.mcmaster.ca Sat Dec 10 22:59:37 2005 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Sat, 10 Dec 2005 17:59:37 -0500 (EST) Subject: Opteron Vs. Athlon X2 In-Reply-To: <1134218778.5853.105.camel@bert64.oviedo.smithconcepts.com> Message-ID: > > actually, the CPU PIO's a command packet to the controller > > which gives the command, size and pointer(s) to the data. > > the CPU never touches the actual data. IDE, net and scsi > > controllers are all broadly along these lines. > > Yes, that's DMA. Yes, the CPU sends a PIO command. I didn't realize > you were getting so anal on this. it's only because you're incredibly sloppy, and some hapless reader might be confused by your stuff. "for instance, "sending a PIO command" makes it sound like the CPU is telling an IO device to PIO somthing. that's not the case. the CPU PIOs a small packet containing a command (a DMA command, by any normal terminology - telling the controller where to directly access main memory.) > I meant that the CPU is _not_ doing PIO for the data transfer. When you > do IDE/EIDE, as well as the ATA PIO modes 0-5, you _are_ doing PIO for > the data transfer. Only when you are doing ATA DMA is it not. "only"? PIO went out 10+ years ago. > When you use software RAID-3/4/5/6 (anything with an XOR), it very much > _is_ doing PIO because _every_ single byte goes through the CPU. you can abuse colloquial usage this way if you want, but you're speaking your own language. when the CPU computes XOR on a block, it's not called PIO, since, at the very least, it's not IO. I suppose you'd also call memcpy a PIO operation. > > just two "hey..." commands. the source data (in host ram) > > is not replicated. > > But *2x* the load on your interconnect. Can you see me on that one? sure, but so what? the interconnect is not the bottleneck. > > no, not really. > > Not really? 100% of the data is slammed from memory into the CPU -- > every single byte is worked on by the CPU _before_ the parity can be > calculated. That is Programmed I/O, I don't know how much more it could > be PIO than that! it's a block memory operation, no different from memset or memcpy. and, (HERE'S THE POINT), it's as fast, namely saturating memory at ~6 GB/s. > That's why I said consider software RAID XORs to be PIO! ;-> OK, you have your own language. > > well, the default would be 4x64k (and would consume <100 us > > assuming no cache hits.) > > In any case, you're _not_ slamming 6.4GBps through that CPU for > XORs. ;-> If you think so, you obviously don't know the first thing > how general purpose, superscalar microprocessors work. don't be a jerk. if you don't think a basic OTC opteron can xor a block at around 6 GB/s, prove me wrong. for an easier task, show how stream can do c[i] = a[i] + k * b[i] at 6 GB/s and you think c[i] = a[i] ^ b[i] is somehow harder or dramatically slower. > Thanx for being honest and forthcoming in that regard. ;-> you only show yourself as a jerk when you pretend that your partner in dialog is being deceptive. > Once again, you're _not_ going to be able to even remotely slam 6.4GBps > for XORs through that CPU. ;-> I said 6.0, actually, but you are clearly wrong. not so much wrong, just haven't noticed that a commodity opteron system (and even some intel-based systems) are dramatically better than your old PIII. > > I think that was garbled - using HW raid saves 1/(n-1) or 20% > > of the bus bandwidth. which is not a scarce commodity any more. > > You just totally ignored my entire discussion on what you have to push > through the CPU! A stream of data through the ALU of the CPU -- > something _not_ designed with an ASIC peripheral that does that one > thing and does it well! Something that you _would_ have on a hardware > RAID controller. I addressed it precisely. a commodity processor can do the parity calculation at ~6 GB/s, therefore it's a nonissue. similarly, the extra bandwidth consumed by SW raid is also a nonissue. this prevalence of nonissues is why SW raid is so very attractive on fileservers where the CPU would sit idle if you offload all the work to a HW raid card. > Even at it's "measly" 300-400MHz would do a heck of a lot more > efficiently, without tying up the system. it's a frigging fileserver. one that has 6-12 GB/s memory bandwidth. > > for the xor. which takes negligable time, even assuming no cache hits. > > The XOR -- FETCH/DECODE/EXECUTE -- yes, to a point. > The LOAD/STORE? I'm sorry, I think not. why can't you see that it's common for machines to have 6 GB/s available these days? > They can easily best a general microprocessor 10:1 or better, MHz for > MHz. That's why Intel has several lines of XScale processors, not just > one "general" one. if the specialized board adds 20% to the system cost, but winds up slower, what's the point? > > for a normal dual-opteron server, sure it can. r5 parity is actually > > easier than Stream's triad, which a single opteron can run at >4.5 GB/s. > > stream's reporting ignores write-allocate traffic, but MD does its > > writes through the cache, therefore can hit 6 GB/s. > > Assuming it's in the L1 cache, _maybe_. At 2.6GHz with a 3-issue ALU > and that the XOR operations can be (effectively) completed once per > clock cycle in a SIMD operation, that's wishful thinking. google for the stream benchmark, check my numbers. it doesn't even take a 2.6 GB/s cpu to drive 6 GB/s through today's memory systems. From hahn at physics.mcmaster.ca Sat Dec 10 23:02:37 2005 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Sat, 10 Dec 2005 18:02:37 -0500 (EST) Subject: Microcontrollers/ASICs v. General Purpose Microprocessors -- WAS: Opteron Vs. Athlon X2 In-Reply-To: <1134219108.5853.113.camel@bert64.oviedo.smithconcepts.com> Message-ID: > > myri 10G does. > > I'm sorry, I meant 802.3 layer-2, not layer-2 in general (which is > abstract). Myri does 803.3ae standard 10G ethernet. From b.j.smith at ieee.org Sun Dec 11 01:15:29 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Sat, 10 Dec 2005 20:15:29 -0500 Subject: Opteron Vs. Athlon X2 In-Reply-To: References: Message-ID: <1134263729.5853.137.camel@bert64.oviedo.smithconcepts.com> On Sat, 2005-12-10 at 17:59 -0500, Mark Hahn wrote: > "only"? PIO went out 10+ years ago. PIO was still in widespread use for until the last 5 years. Heck, a number of ATAPI devices still use it. Only ATA's DMA modes are not PIO. ESDI, IDE and EIDE are PIO. > you can abuse colloquial usage this way if you want, but you're > speaking your own language. when the CPU computes XOR on a block, > it's not called PIO, since, at the very least, it's not IO. I suppose > you'd also call memcpy a PIO operation. Depends. Understand that I'm not abusing any terminology. > sure, but so what? the interconnect is not the bottleneck. Depends on how much the delay is before it gets to the bottleneck. ;-> > it's a block memory operation, no different from memset or memcpy. and, > (HERE'S THE POINT), it's as fast, namely saturating memory at ~6 GB/s. Ahhh, no. You really need to do some research on how microprocessors work, and what is involved when it comes to pushing something from DRAM into cache and up through the instruction cycle, including the all- important LOAD and STOR operations, or maybe the latencies involved in a SIMD operation. > OK, you have your own language. That's why I use the term "consider" -- because it's very equivalent. > don't be a jerk. if you don't think a basic OTC opteron can xor > a block at around 6 GB/s, prove me wrong. I'm not a jerk, I'm someone who doesn't have a GHz/GBps whore view of a microprocessor. I know I can't push 6.4GBps through the Opteron. There is a reason we use ASIC hardware instead of host software to do MAC layer filtering in Ethernet hardware. Exact same concept when it comes to storage switching too. Now at this point, there is no sense in bothering to even attempt to explain this further. Sometimes I really hate that I have experience in semiconductor design, because my views get repeatedly dismissed by IT professionals. E.g., another common one is the the folly of writing optimized code in assembler on modern, superscalar microprocessors. You will _not_ push 6.4GBps through the Opteron -- not even at doing a simple XOR operation. However, you _can_ get 1+GBps out of a common storage IOP, or specialized HBA for TCP/IP because of the way ASIC peripherals are designed on such specialized microcontrollers. End of thread. You can now assume I'm full of crap, I really don't care. -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From b.j.smith at ieee.org Sun Dec 11 01:19:50 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Sat, 10 Dec 2005 20:19:50 -0500 Subject: Microcontrollers/ASICs v. General Purpose Microprocessors -- WAS: Opteron Vs. Athlon X2 In-Reply-To: References: Message-ID: <1134263990.5853.141.camel@bert64.oviedo.smithconcepts.com> On Sat, 2005-12-10 at 18:02 -0500, Mark Hahn wrote: > Myri does 803.3ae standard 10G ethernet. Sigh. Myrinet can _emulate_ 802.3 Ethernet, with all its associated overhead. But Myrinet has its own, native frame protocol as well. Neither are as efficient as a local HyperTransport mesh in a PCB. -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From b.j.smith at ieee.org Sun Dec 11 02:42:01 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Sat, 10 Dec 2005 21:42:01 -0500 Subject: bonnie++ Sun Fire X2100 on Solaris 10 and CentOS -- 5-year old system (my bonnie++) In-Reply-To: <20051210173917.GQ2249@leitl.org> References: <20051210173917.GQ2249@leitl.org> Message-ID: <1134268921.5853.197.camel@bert64.oviedo.smithconcepts.com> On Sat, 2005-12-10 at 18:39 +0100, Eugen Leitl wrote: > Here's a bonnie++ benchmark ... Run how? Did you run it on the local system? Over a NFS mount from 1 client? Or over a NFS mounts from hundreds of clients? If you not seeing a repeat theme here, it's that I've continually put forth the point that once you start killing your system with other traffic from numerous systems, you need to reduce all the CPU- interconnection contention you can. ;-> A local bonnie benchmark nets you little. But I will see your call with a 5 year old system with 5 year old hard drives and an almost 5 year old 3Ware Escalade 7800. ;-> P3 850MHz with 512MiB Reg ECC PC100, ServerWorks IIILE chipset, 3Ware Escalade 7800 (64-bit at 33MHz = only a measly 0.25GBps) with a 6-disc RAID-10 on _old_ 5400rpm Ultra66 80GB (20GB/platter) Maxtor disks: Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP dilbert.ovied 3000M 15090 97 38337 33 18800 12 16303 87 56843 16 96.0 0 Now look at those benchmarks. First off, the "Per Chr" the performance is _dismal_ because of the P3 architecture and interconnect. As you can regularly see, the CPU utilization is pegged near max, meaning even this almost 5 year old 3Ware Escalade 7800 card would be far more capable on the Opteron! Secondly, now look at the block operations. Not bad CPU utilization rates on the block modes in comparison to yours -- a system over 4 years newer, with almost an order of magnitude greater memory and CPU interconnect. Now here's the kicker ... My more "real world" rewrite block data transfer rate (DTR) is basically 3 times yours!!! How is that? My 3Ware Escalade is a commanding/ queuing storage controller. Your nVidia MCP-04 SATA is not even NCQ, so it's "dumb" ATA block I/O. But even NCQ might not be enough -- especially _not_ when it comes to software RAID, because NCQ is for individual disks, not the cohesive volume. You could chalk it up to the fact that I have 6 discs in RAID-10, so the stripe is effectively 3x (3 discs, then mirrored totally 6) as many. But these are disks almost 5 years old! Over 1/6th the density of your drives! It's the command queuing of the card. It's that damn 64-bit ASIC on the 3Ware that is just off-loading so much from the CPU -- look, only 12% CPU utilization! I know you only had 2%, but this is an _old_ P3 850MHz! And I'm rewriting 3x as much as you are! Now I could reconfigure the server with newer Seagate 7200.8 200GB disks in RAID-10. I have 6 right here -- and I'm planning to put in the system soon. Until then, these are with almost 5 year old 5400rpm, Ultra66 80GB (20GB/platter) Maxtor drives. In reality, I'm getting only 15MBps/disc, which sounds about right for the technology period of these disks. -- Bryan P.S. Of course, I'd love to show off an 8-disc RAID-10 volume on a 3Ware Escalade 8506-8 or 12 in an Opteron platform that I typically deploy for clients. But I don't have one in my house, nor at my current place of work (as of 2 months ago). I now work for someone else, and it's all small-form factor embedded work at a small company. -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From b.j.smith at ieee.org Sun Dec 11 02:55:32 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Sat, 10 Dec 2005 21:55:32 -0500 Subject: bonnie++ Sun Fire X2100 on Solaris 10 and CentOS -- 5-year old system (my bonnie++) In-Reply-To: <1134268921.5853.197.camel@bert64.oviedo.smithconcepts.com> References: <20051210173917.GQ2249@leitl.org> <1134268921.5853.197.camel@bert64.oviedo.smithconcepts.com> Message-ID: <1134269732.5853.208.camel@bert64.oviedo.smithconcepts.com> On Sat, 2005-12-10 at 21:42 -0500, Bryan J. Smith wrote: > P3 850MHz with 512MiB Reg ECC PC100, ServerWorks IIILE chipset, 3Ware > Escalade 7800 (64-bit at 33MHz = only a measly 0.25GBps) with a 6-disc > RAID-10 on _old_ 5400rpm Ultra66 80GB (20GB/platter) Maxtor disks: > Version 1.03 ------Sequential Output------ --Sequential Input- --Random- > -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- > Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP > dilbert.ovied 3000M 15090 97 38337 33 18800 12 16303 87 56843 16 96.0 0 FYI, here is a colleague with the exact same P3/chipset/memory configuration (850/IIILE/512MB-RegECCPC100), only: 1) He has a 3Ware Escalade 7450 (2MB SRAM, my 7800 only has 1MB SRAM) 2) He has 4x 120GB WD 7200rpm disks (40GB/platter) 3) They are in a RAID-5 (not ideal for the 7000/8000 series) 4) Accessing over the GbE _network_ from one NFSv4 client [ NOTE: These are "bonnie" not "bonnie++" ] Version 1.03 ------Sequential Output------ --Sequential Input- --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks-- Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP nebula 3G:64k 55041 38 17120 15 49181 19 108.6 3 [ He skipped the "Per Chr" operations, just ran the block tests ] Once again, not only sequential 50MBps read/write on an old server, on a card with virtually no buffering for RAID-5 (only 2MB SRAM, built more for RAID-10) and platters not even 1/3rd the density (and DTR) of your drives, but still 3x your rewrite speed! 3Ware 7000/8000 are _not_ ideal for RAID-5, far better at RAID-10. I also have a feeling that his sequential input (read) is more of a fact of NFS. I regularly achieve >100MBps sequential reads with 60-80Gb/platter disk densities on even these old P3 systems, as long as the card is in a 64-bit slot (all but the 7506 series in the 7000 run at 33MHz, not 66MHz). Such should be the case in RAID-5 reads, because it's just like RAID-0 (minus one stripe). -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From hahn at physics.mcmaster.ca Sun Dec 11 05:37:45 2005 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Sun, 11 Dec 2005 00:37:45 -0500 (EST) Subject: Opteron Vs. Athlon X2 In-Reply-To: <1134263729.5853.137.camel@bert64.oviedo.smithconcepts.com> Message-ID: > microprocessor. I know I can't push 6.4GBps through the Opteron. http://www.cs.virginia.edu/stream/stream_mail/2004/0013.html here is independent proof that you can do the stream triad at 4555 MB/s, which after accounting for write-allocate traffic, is 6 GB/s, the number I stated. yes, it is fair to count WA traffic because the XOR checksum can do movntq (through the cache). From eugen at leitl.org Sun Dec 11 09:43:32 2005 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 11 Dec 2005 10:43:32 +0100 Subject: bonnie++ Sun Fire X2100 on Solaris 10 and CentOS -- 5-year old system (my bonnie++) In-Reply-To: <1134268921.5853.197.camel@bert64.oviedo.smithconcepts.com> References: <20051210173917.GQ2249@leitl.org> <1134268921.5853.197.camel@bert64.oviedo.smithconcepts.com> Message-ID: <20051211094332.GH2249@leitl.org> On Sat, Dec 10, 2005 at 09:42:01PM -0500, Bryan J. Smith wrote: > On Sat, 2005-12-10 at 18:39 +0100, Eugen Leitl wrote: > > Here's a bonnie++ benchmark ... > > Run how? Did you run it on the local system? Over a NFS mount from 1 > client? Or over a NFS mounts from hundreds of clients? I posted these numbers (run on the local filesystem of course, since not stated otherwise) because someone might be looking for data for that particular machine, running under different operating systems. > If you not seeing a repeat theme here, it's that I've continually put > forth the point that once you start killing your system with other > traffic from numerous systems, you need to reduce all the CPU- > interconnection contention you can. ;-> Remember, this is designated as a vserver box, attached to the world to a single 100 MBit Ethernet link, which is shared with other machines. It also has a second private GBit Ethernet network, just because the switch was free. > A local bonnie benchmark nets you little. But I will see your call with > a 5 year old system with 5 year old hard drives and an almost 5 year old > 3Ware Escalade 7800. ;-> This is not a competition. I have a particular $$$ budget in 1U and 4 GBytes ECC and energy footprint, and IPMI remote management. This system is good for 400 VServers, and the storage is more than adequate for that. > First off, the "Per Chr" the performance is _dismal_ because of the P3 > architecture and interconnect. As you can regularly see, the CPU > utilization is pegged near max, meaning even this almost 5 year old > 3Ware Escalade 7800 card would be far more capable on the Opteron! I only have one PCI-X slot, and only two drive cages. I wish there was space for four drives, but it would interfere with airflow (the system is a veritable leafblower as it is). Also, more drives would mean more kWh. Rack kWh are expensive. > Secondly, now look at the block operations. Not bad CPU utilization > rates on the block modes in comparison to yours -- a system over 4 years It's a VServer system. The I/O will be largely idle. > newer, with almost an order of magnitude greater memory and CPU > interconnect. Now here's the kicker ... > > My more "real world" rewrite block data transfer rate (DTR) is basically I don't have any rewrites. It's largely a read-only environment, with few VServers active at the same time. > You could chalk it up to the fact that I have 6 discs in RAID-10, so the I don't have space for 6 discs. I don't have the $$$ budget for 6 discs. I don't have the energy budget for 6 disks. > stripe is effectively 3x (3 discs, then mirrored totally 6) as many. > But these are disks almost 5 years old! Over 1/6th the density of your 250 GBytes RAID 1 is scraping the barrel for my requirements. I would have gone for 500 GByte drives -- but I don't have the budget for 500 GByte drives. This is still a hobby. > drives! It's the command queuing of the card. It's that damn 64-bit > ASIC on the 3Ware that is just off-loading so much from the CPU -- look, > only 12% CPU utilization! I know you only had 2%, but this is an _old_ > P3 850MHz! And I'm rewriting 3x as much as you are! If I had to build a database server, I would have chosen a different machine. > Now I could reconfigure the server with newer Seagate 7200.8 200GB disks > in RAID-10. I have 6 right here -- and I'm planning to put in the > system soon. Until then, these are with almost 5 year old 5400rpm, > Ultra66 80GB (20GB/platter) Maxtor drives. In reality, I'm getting only > 15MBps/disc, which sounds about right for the technology period of these > disks. > > -- Bryan > > P.S. Of course, I'd love to show off an 8-disc RAID-10 volume on a > 3Ware Escalade 8506-8 or 12 in an Opteron platform that I typically > deploy for clients. But I don't have one in my house, nor at my current > place of work (as of 2 months ago). I now work for someone else, and > it's all small-form factor embedded work at a small company. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From cochranb at speakeasy.net Tue Dec 13 23:16:01 2005 From: cochranb at speakeasy.net (Robert L Cochran) Date: Tue, 13 Dec 2005 18:16:01 -0500 Subject: Fedora Core 4 For Dual Cores? Message-ID: <439F5631.4020003@speakeasy.net> I'm planning on ordering an AMD Athlon 64 X2 4400 with an Asrock 939Dual-SATA2 board pretty soon. Will Fedora Core 4 x86_64 be able to recognize and take advantage of both cpu cores? Is there some sort of test I can run to see if Fedora detects the 2 cores? Thanks Bob Cochran From cochranb at speakeasy.net Tue Dec 13 23:20:03 2005 From: cochranb at speakeasy.net (Robert L Cochran) Date: Tue, 13 Dec 2005 18:20:03 -0500 Subject: Athlon 64, Memory Pricing Message-ID: <439F5723.8070805@speakeasy.net> I hope this isn't too stupid a question, but are AMD processors (and memory sticks, come to think of it) cheaper to buy after Christmas? Or will pricing remain the same? I assume that among you are experienced cpu snf memory buyers, and I'm always open to practical buying advice. Thanks Bob Cochran From davej at redhat.com Tue Dec 13 23:21:52 2005 From: davej at redhat.com (Dave Jones) Date: Tue, 13 Dec 2005 18:21:52 -0500 Subject: Fedora Core 4 For Dual Cores? In-Reply-To: <439F5631.4020003@speakeasy.net> References: <439F5631.4020003@speakeasy.net> Message-ID: <20051213232152.GB14190@redhat.com> On Tue, Dec 13, 2005 at 06:16:01PM -0500, Robert L Cochran wrote: > I'm planning on ordering an AMD Athlon 64 X2 4400 with an Asrock > 939Dual-SATA2 board pretty soon. Will Fedora Core 4 x86_64 be able to > recognize and take advantage of both cpu cores? Is there some sort of > test I can run to see if Fedora detects the 2 cores? It should work. Though some folks are hitting a kernel bug right now which hopefully I'll have worked out in the next update, which prevents the kernel from seeing all the cores. If it works, you'll see them show up in /proc/cpuinfo as if its a second processor. Dave From lamont at gurulabs.com Tue Dec 13 23:22:25 2005 From: lamont at gurulabs.com (Lamont R. Peterson) Date: Tue, 13 Dec 2005 16:22:25 -0700 Subject: Fedora Core 4 For Dual Cores? In-Reply-To: <439F5631.4020003@speakeasy.net> References: <439F5631.4020003@speakeasy.net> Message-ID: <200512131622.29525.lamont@gurulabs.com> On Tuesday 13 December 2005 04:16pm, Robert L Cochran wrote: > I'm planning on ordering an AMD Athlon 64 X2 4400 with an Asrock > 939Dual-SATA2 board pretty soon. Will Fedora Core 4 x86_64 be able to > recognize and take advantage of both cpu cores? Yes...by running an SMP (Symmetrical MultiProcessing kernel. > Is there some sort of > test I can run to see if Fedora detects the 2 cores? Install it. During installation, anaconda (the install program) will detect that there are multiple processors available and automatically select and install the correct SMP kernel, also making that kernel the default at boot time (unless you tweak the bootloader configuration, of course). -- Lamont R. Peterson Senior Instructor Guru Labs, L.C. [ http://www.GuruLabs.com/ ] -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From loony at loonybin.org Tue Dec 13 23:25:15 2005 From: loony at loonybin.org (Peter Arremann) Date: Tue, 13 Dec 2005 18:25:15 -0500 Subject: Fedora Core 4 For Dual Cores? In-Reply-To: <439F5631.4020003@speakeasy.net> References: <439F5631.4020003@speakeasy.net> Message-ID: <200512131825.16016.loony@loonybin.org> On Tuesday 13 December 2005 18:16, Robert L Cochran wrote: > I'm planning on ordering an AMD Athlon 64 X2 4400 with an Asrock > 939Dual-SATA2 board pretty soon. Will Fedora Core 4 x86_64 be able to > recognize and take advantage of both cpu cores? Is there some sort of > test I can run to see if Fedora detects the 2 cores? I have exactly that same setup. Just cat /proc/cpuinfo and you'll see two cpus there. Peter. From loony at loonybin.org Tue Dec 13 23:37:36 2005 From: loony at loonybin.org (Peter Arremann) Date: Tue, 13 Dec 2005 18:37:36 -0500 Subject: Athlon 64, Memory Pricing In-Reply-To: <439F5723.8070805@speakeasy.net> References: <439F5723.8070805@speakeasy.net> Message-ID: <200512131837.36987.loony@loonybin.org> On Tuesday 13 December 2005 18:20, you wrote: > I hope this isn't too stupid a question, but are AMD processors (and > memory sticks, come to think of it) cheaper to buy after Christmas? Or > will pricing remain the same? I assume that among you are experienced > cpu snf memory buyers, and I'm always open to practical buying advice. AMD lowers their prices usually only when they launch new chips. January will most likely see the FX-60 and 64-5000+ launch. All existing models will be adjusted down. But you're looking at the second half of January with possible delays till mid february. Memory isn't gonna change much either. DDR2 prices will go down a little, DDR1 should pretty much stay the same for quality modules. Low grade no name DDR1 modules will probably even go up a little. In all, I don't think waiting will save you any reasonable amount of money... Peter. From b.j.smith at ieee.org Wed Dec 14 01:15:24 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Tue, 13 Dec 2005 17:15:24 -0800 (PST) Subject: Athlon 64, Memory Pricing In-Reply-To: <439F5723.8070805@speakeasy.net> Message-ID: <20051214011524.57781.qmail@web34109.mail.mud.yahoo.com> Robert L Cochran wrote: > I hope this isn't too stupid a question, but are AMD > processors (and memory sticks, come to think of it) cheaper > to buy after Christmas? Except for at initial product launch (when scarcity drives massive/unethical spikes), immediate demand has very little to do with semiconductor cost. Semiconductor cost is largely a factor of more longer-term trends, especially considering the cost of fabs (billions USD) and sometimes it's cheaper to keep a fab going than to shut it down (long, long story/discussion). > Or will pricing remain the same? I assume that among you are > experienced cpu snf memory buyers, and I'm always open to > practical buying advice. New CPU products drive down other prices. Same deal with GPU. Memory prices fluctuate far more radically. Usually based on raw material supply more than anything. Although fab capacity and longer-term demand are also factors. But virtually _nothing_ is due to short-term demand With the exception of initial product launch. -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From eugen at leitl.org Wed Dec 14 07:39:20 2005 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 14 Dec 2005 08:39:20 +0100 Subject: Athlon 64, Memory Pricing In-Reply-To: <200512131837.36987.loony@loonybin.org> References: <439F5723.8070805@speakeasy.net> <200512131837.36987.loony@loonybin.org> Message-ID: <20051214073920.GW2249@leitl.org> On Tue, Dec 13, 2005 at 06:37:36PM -0500, Peter Arremann wrote: > Memory isn't gonna change much either. DDR2 prices will go down a little, DDR1 > should pretty much stay the same for quality modules. Low grade no name DDR1 > modules will probably even go up a little. Does anyone know when AMD64 with DDR2 interface is going to land? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From c00jsh00 at nchc.org.tw Wed Dec 14 07:47:47 2005 From: c00jsh00 at nchc.org.tw (Jyh-Shyong Ho) Date: Wed, 14 Dec 2005 15:47:47 +0800 Subject: Athlon 64, Memory Pricing In-Reply-To: <20051214073920.GW2249@leitl.org> References: <439F5723.8070805@speakeasy.net> <200512131837.36987.loony@loonybin.org> <20051214073920.GW2249@leitl.org> Message-ID: <439FCE23.3090108@nchc.org.tw> Eugen Leitl wrote: >On Tue, Dec 13, 2005 at 06:37:36PM -0500, Peter Arremann wrote: > > > >>Memory isn't gonna change much either. DDR2 prices will go down a little, DDR1 >>should pretty much stay the same for quality modules. Low grade no name DDR1 >>modules will probably even go up a little. >> >> > >Does anyone know when AMD64 with DDR2 interface is going to land? > > > My impression of the relase date of DDR2 support of AMD64/Opteron is Q3 2006. Jyh-Shyong Ho -------------- next part -------------- An HTML attachment was scrubbed... URL: From loony at loonybin.org Wed Dec 14 13:45:30 2005 From: loony at loonybin.org (Peter Arremann) Date: Wed, 14 Dec 2005 08:45:30 -0500 Subject: Athlon 64, Memory Pricing In-Reply-To: <20051214073920.GW2249@leitl.org> References: <439F5723.8070805@speakeasy.net> <200512131837.36987.loony@loonybin.org> <20051214073920.GW2249@leitl.org> Message-ID: <200512140845.30990.loony@loonybin.org> On Wednesday 14 December 2005 02:39, Eugen Leitl wrote: > On Tue, Dec 13, 2005 at 06:37:36PM -0500, Peter Arremann wrote: > > Memory isn't gonna change much either. DDR2 prices will go down a little, > > DDR1 should pretty much stay the same for quality modules. Low grade no > > name DDR1 modules will probably even go up a little. > > Does anyone know when AMD64 with DDR2 interface is going to land? The answer is all just rumors, so please take it as such. Most people guess for Q2 2006 but recent rumors place it even earlier at the end of Q1. Basically, it seems that the start date is not going to be determined by technology but AMD could do the launch tomorrow if they wanted. There are enough chipsets out there that have been validated for M2 setups (no surprise, after all they are attached to the cpu with hypertransport and have very little to do with memory access). CPUs seem to be available too and mainboards from various manufacturers have been on display for a while. There are 3 popular opinions on why AMD would delay the M2... First they are trying to have only 2 platforms. When they introduce M2, they want to dump 754 and have 939 for the low end and M2 for the high end. That switch takes time and they don't want to discontinue 754 while there is still a lot of money made with that design. The second opinion is that AMD is waiting for the next big Intel anouncement and use the M2 to counter that. The third rumor is that Dell has finally decided to use AMD chips and they want to be the first to sell those new high-end chips. So AMD is simply waiting for Dell to finalize its design and then they will announce that. Wish that was true - but there have been so many Dell going AMD rumors in the past that I'm not holding my breath... Q2 seems like a good bet - as for the why, I doubt we'll ever know for sure. Peter. From loony at loonybin.org Thu Dec 15 00:36:43 2005 From: loony at loonybin.org (Peter Arremann) Date: Wed, 14 Dec 2005 19:36:43 -0500 Subject: Athlon 64, Memory Pricing In-Reply-To: <200512140845.30990.loony@loonybin.org> References: <439F5723.8070805@speakeasy.net> <20051214073920.GW2249@leitl.org> <200512140845.30990.loony@loonybin.org> Message-ID: <200512141936.43484.loony@loonybin.org> On Wednesday 14 December 2005 02:39, Eugen Leitl wrote: > Does anyone know when AMD64 with DDR2 interface is going to land? Hehe - guess my long spiel was pointless :-) April 6th is the official date. http://www.xbitlabs.com/news/cpu/display/20051214062233.html Peter. From yusufg at outblaze.com Thu Dec 15 02:30:36 2005 From: yusufg at outblaze.com (Yusuf Goolamabbas) Date: Thu, 15 Dec 2005 10:30:36 +0800 Subject: process segfault messages on AMD64 Message-ID: <20051215023036.GC19843@outblaze.com> Hi, On a CentOS 4.2 box (AMD64 3000/Socket 754). I was trying to run Pound ver 2.0b4. Whenever I connect to Pound, it segfaults leaving these messages in /var/log/messages Dec 14 19:50:37 124 pound: MONITOR: worker exited on signal 11, restarting... Dec 14 19:51:01 124 kernel: pound[28106]: segfault at 00000000000000d0 rip 0000003feaeacd9f rsp 00000000400bc700 error 6 I will try to see if I can get gdb to help but my experiences with gdb on multi-threaded apps haven't been so good. In any case, does anybody know what 'error 6' mean from the kernel message ? Regards, Yusuf From cochranb at speakeasy.net Wed Dec 28 03:56:59 2005 From: cochranb at speakeasy.net (Robert L Cochran) Date: Tue, 27 Dec 2005 22:56:59 -0500 Subject: Asrock 939Dual-SATA2 Motherboard Strangeness Message-ID: <43B20D0B.1030808@speakeasy.net> This is a followup to my earlier emails in which I've expressed interest in AMD Athlon 64 X2 dual core processors and appropriate motherboards for same. I'm the pleased owner of an X2 4400 now, it is installed on an Asrock 939Dual-SATA2 motherboard. I'm puzzled about 2 issues with this motherboard. #1 -- I also installed a Plextor PX-716SA DVD-RW drive. This is an SATA drive and I can't boot off it with a bootable DVD. The drive is installed on the motherboard's SATA2 connector (this is a plain SATA connector, item #13 on page 2 of the manual. It is not item #10, "Serial SATAII connector", the red one.) There is also a PATA Sony DVD-RW drive plugged into the secondary IDE connector, motherboard item #16. I can boot from this, fortunately. Each time I press the power button, I get a BIOS POST message telling me to reboot and select a new boot drive or press a key to continue. Pressing any one key causes the message to repeat. Rebooting with CTRL-ALT-DEL will then bring up the Grub splash screen, and the system boots. Any comments about this? #2 -- I'm interpreting the PLED pins of the System Panel Header (item #17 on page 2 of the manual) to mean Power LED. On my Lian Li case, the Power LED wires are 2 wires in a 3-pin connector. The middle connector pin is empty. Yet, the PLED pins on the motherboard header are for a 2-pin connector. But wait! Just to the left of it is the Chassis Speaker Header, item #18, and if you look closely, there is text saying something like "PWRLED" or some such and there are 3 wave-soldered holes on the motherboard right next to the Chassis Speaker Header. I'm confused about where to plug in the Power LED connector. Any comments? One final note. If you get this motherboard for yourself and install Fedora Core 4 on it, be sure to have the latest glibc and kernel-smp rpm's and install them right after the Fedora install, so that you will have networking available from the onboard NIC. The system seems very nice -- I'm using it now, restoring backed up data to the new drive. The 4400 processor seems nice and speedy. Thanks Bob Cochran Greenbelt, Maryland, USA From b.j.smith at ieee.org Wed Dec 28 04:13:59 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Tue, 27 Dec 2005 23:13:59 -0500 Subject: Asrock 939Dual-SATA2 Motherboard Strangeness -- nVidia PCI IDs on C51 (GF61x0+nF4x0) In-Reply-To: <43B20D0B.1030808@speakeasy.net> References: <43B20D0B.1030808@speakeasy.net> Message-ID: <1135743239.4712.29.camel@bert64.oviedo.smithconcepts.com> On Tue, 2005-12-27 at 22:56 -0500, Robert L Cochran wrote: > #1 -- I also installed a Plextor PX-716SA DVD-RW drive. This is an SATA > drive and I can't boot off it with a bootable DVD. The drive is > installed on the motherboard's SATA2 connector (this is a plain SATA > connector, item #13 on page 2 of the manual. It is not item #10, "Serial > SATAII connector", the red one.) In Windows, that drive is _only_ guaranteed to work on the SATA channels Intel ICH6 or ICH7 southbridge chips. The reason is simple, there is a _huge_ augmentation of the ATA register/support to support ATA Peripheral Interface (ATAPI) devices such as optical drives. In other words, do _not_ buy SATA ATAPI devices today. The overwhelming majority of SATA controllers can't support ATAPI, and there are some missing industry standards for doing such. In fact, the Plextor is using a simple ATA-to-SATA converter inside the PX-716A, so the driver is -- in fact, just an standard ATAPI drive. > #2 -- I'm interpreting the PLED pins of the System Panel Header (item > #17 on page 2 of the manual) to mean Power LED. On my Lian Li case, the > Power LED wires are 2 wires in a 3-pin connector. The middle connector > pin is empty. Yet, the PLED pins on the motherboard header are for a > 2-pin connector. Move the appropriate GND or +5VDC to the appropriate pin. I do that all-the-time. > But wait! Just to the left of it is the Chassis Speaker Header, item > #18, and if you look closely, there is text saying something like > "PWRLED" or some such and there are 3 wave-soldered holes on the > motherboard right next to the Chassis Speaker Header. > I'm confused about where to plug in the Power LED connector. > Any comments? Some mainboards offer both a 2-pin and a 3-pin LED, yes. So either will probably do. When only a 2-pin is offered, I pull out the appropriate GND or +5VDC and move it to the middle to fit. > One final note. If you get this motherboard for yourself and install > Fedora Core 4 on it, be sure to have the latest glibc and kernel-smp > rpm's and install them right after the Fedora install, so that you will > have networking available from the onboard NIC. Yes, it's clear nVidia has changed the PCI IDs on the GeForce 61x0 + nForce 4x0 ... see my blog entry here ... http://thebs413.blogspot.com/2005/12/linux-on-nvidia-c51nv44-nforce.html So it is _not_ the same as the other chips in nForce4/Pro series. > The system seems very nice -- I'm using it now, restoring backed up data > to the new drive. The 4400 processor seems nice and speedy. You're going to see improved response time because of how Linux handles MP scheduling. The Linux kernel prefers to sacrifice response time for throughput, by minimizing re-entry and the associated context switching. On MP, you can now have 2 re-entries without any additional context switching -- and the response time on a desktop or workstation is noticable from the user-level. -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From cochranb at speakeasy.net Wed Dec 28 04:46:30 2005 From: cochranb at speakeasy.net (Robert L Cochran) Date: Tue, 27 Dec 2005 23:46:30 -0500 Subject: Asrock 939Dual-SATA2 Motherboard Strangeness -- nVidia PCI IDs on C51 (GF61x0+nF4x0) In-Reply-To: <1135743239.4712.29.camel@bert64.oviedo.smithconcepts.com> References: <43B20D0B.1030808@speakeasy.net> <1135743239.4712.29.camel@bert64.oviedo.smithconcepts.com> Message-ID: <43B218A6.70705@speakeasy.net> Thanks Bryan. I guess I shouldn't have bought the Plextor. I've been wanting faster burn times for a long while now. As regards the power LED, the case connector is a unitized 3-pin plastic connector meant for a 3 pin header. I'll have to reseat the +5v and GND pins in a new 2-pin connector for this board, right? I guess I'm not about to de-install the motherboard just so I can solder an alternate 3-pin header on it. Bob Bryan J. Smith wrote: >On Tue, 2005-12-27 at 22:56 -0500, Robert L Cochran wrote: > > >>#1 -- I also installed a Plextor PX-716SA DVD-RW drive. This is an SATA >>drive and I can't boot off it with a bootable DVD. The drive is >>installed on the motherboard's SATA2 connector (this is a plain SATA >>connector, item #13 on page 2 of the manual. It is not item #10, "Serial >>SATAII connector", the red one.) >> >> > >In Windows, that drive is _only_ guaranteed to work on the SATA channels >Intel ICH6 or ICH7 southbridge chips. The reason is simple, there is a >_huge_ augmentation of the ATA register/support to support ATA >Peripheral Interface (ATAPI) devices such as optical drives. > >In other words, do _not_ buy SATA ATAPI devices today. The overwhelming >majority of SATA controllers can't support ATAPI, and there are some >missing industry standards for doing such. In fact, the Plextor is >using a simple ATA-to-SATA converter inside the PX-716A, so the driver >is -- in fact, just an standard ATAPI drive. > > > >>#2 -- I'm interpreting the PLED pins of the System Panel Header (item >>#17 on page 2 of the manual) to mean Power LED. On my Lian Li case, the >>Power LED wires are 2 wires in a 3-pin connector. The middle connector >>pin is empty. Yet, the PLED pins on the motherboard header are for a >>2-pin connector. >> >> > >Move the appropriate GND or +5VDC to the appropriate pin. I do that >all-the-time. > > > >>But wait! Just to the left of it is the Chassis Speaker Header, item >>#18, and if you look closely, there is text saying something like >>"PWRLED" or some such and there are 3 wave-soldered holes on the >>motherboard right next to the Chassis Speaker Header. >>I'm confused about where to plug in the Power LED connector. >>Any comments? >> >> > >Some mainboards offer both a 2-pin and a 3-pin LED, yes. >So either will probably do. > >When only a 2-pin is offered, I pull out the appropriate GND or +5VDC >and move it to the middle to fit. > > > >>One final note. If you get this motherboard for yourself and install >>Fedora Core 4 on it, be sure to have the latest glibc and kernel-smp >>rpm's and install them right after the Fedora install, so that you will >>have networking available from the onboard NIC. >> >> > >Yes, it's clear nVidia has changed the PCI IDs on the GeForce 61x0 + >nForce 4x0 ... see my blog entry here ... >http://thebs413.blogspot.com/2005/12/linux-on-nvidia-c51nv44-nforce.html > >So it is _not_ the same as the other chips in nForce4/Pro series. > > > >>The system seems very nice -- I'm using it now, restoring backed up data >>to the new drive. The 4400 processor seems nice and speedy. >> >> > >You're going to see improved response time because of how Linux handles >MP scheduling. The Linux kernel prefers to sacrifice response time for >throughput, by minimizing re-entry and the associated context switching. >On MP, you can now have 2 re-entries without any additional context >switching -- and the response time on a desktop or workstation is >noticable from the user-level. > > > > From loony at loonybin.org Wed Dec 28 13:34:29 2005 From: loony at loonybin.org (Peter Arremann) Date: Wed, 28 Dec 2005 08:34:29 -0500 Subject: Asrock 939Dual-SATA2 Motherboard Strangeness In-Reply-To: <43B20D0B.1030808@speakeasy.net> References: <43B20D0B.1030808@speakeasy.net> Message-ID: <200512280834.30123.loony@loonybin.org> On Tuesday 27 December 2005 22:56, Robert L Cochran wrote: > I'm confused about where to plug in the Power LED connector. Any comments? Most newer boards have both 2 and 3 pin power led connectors. You use the one that fits your case... Peter. From b.j.smith at ieee.org Wed Dec 28 16:12:45 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Wed, 28 Dec 2005 11:12:45 -0500 Subject: Asrock 939Dual-SATA2 Motherboard Strangeness -- nVidia PCI IDs on C51 (GF61x0+nF4x0) In-Reply-To: <43B218A6.70705@speakeasy.net> References: <43B20D0B.1030808@speakeasy.net> <1135743239.4712.29.camel@bert64.oviedo.smithconcepts.com> <43B218A6.70705@speakeasy.net> Message-ID: <1135786365.4712.40.camel@bert64.oviedo.smithconcepts.com> On Tue, 2005-12-27 at 23:46 -0500, Robert L Cochran wrote: > Thanks Bryan. I guess I shouldn't have bought the Plextor. I've been > wanting faster burn times for a long while now. Considering the actual, electrical parallel ATAPI interface on the device probably runs at only UltraDMA mode 2 (Ultra33) or maybe UltraDMA mode 4 (Ultra66), the SATA only adds overhead. Until I see the SATA controllers -- and their OS drivers -- implemented with the full ATAPI command set/registers, you want to stand clear of SATA ATAPI devices like optical drives. > As regards the power LED, the case connector is a unitized 3-pin plastic > connector meant for a 3 pin header. I'll have to reseat the +5v and GND > pins in a new 2-pin connector for this board, right? I guess I'm not > about to de-install the motherboard just so I can solder an alternate > 3-pin header on it. I just use the same 3-pin connector regularly, except I move the appropriate pin to the middle. I have the exact same ASRock mainboard, only in the Socket-754 flavor, and that's what I did. I've also noted the front panel header is the same on all new ASRock, Foxconn Channel and Gigabyte Socket-754/939 desktop mainboards. Something like ... PWRLED PWRON o o o o o o o o o HDLED RESET The speaker is on its own, and a few mainboards then have a 3-pin PWRLED option separate. -- Bryan J. Smith mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------ Some things (or athletes) money can't buy. For everything else there's "ManningCard." From dsavage at peaknet.net Thu Dec 29 01:05:22 2005 From: dsavage at peaknet.net (Robert G. (Doc) Savage) Date: Wed, 28 Dec 2005 19:05:22 -0600 Subject: OT: K8WE floppy interface question Message-ID: <1135818322.2753.60.camel@lioness.protogeek.org> Would any owner of a Tyan K8WE motherboard please check the floppy interface connector and tell me if it has all its pins? With my motherboard oriented so that the CPU and DIMM sockets are farthest away from me, the pins in positions #2, #3, and #4 missing from the right-hand column? If yours are not, then the reason my floppy disk doesn't work (and I can't update my FlashBIOS) is a manufacturing defect and I have to pursue a warranty replacement. Otherwise... TIA -- Doc Robert G. (Doc) Savage, BSE(EE), CISSP, RHCE | Fairview Heights, IL Fedora Core 4 kernel 2.6.14-1.1653_FC4 on a P-III/M IBM Thinkpad A22p ** Bob Costas for Baseball Commissioner ** From maurice at harddata.com Thu Dec 29 17:10:51 2005 From: maurice at harddata.com (Maurice Hilarius) Date: Thu, 29 Dec 2005 10:10:51 -0700 Subject: Asrock 939Dual-SATA2 Motherboard Strangeness -- nVidia PCI IDs on C51 (GF61x0+nF4x0) In-Reply-To: <43B218A6.70705@speakeasy.net> References: <43B20D0B.1030808@speakeasy.net> <1135743239.4712.29.camel@bert64.oviedo.smithconcepts.com> <43B218A6.70705@speakeasy.net> Message-ID: <43B4189B.6000501@harddata.com> Robert L Cochran wrote: > Thanks Bryan. I guess I shouldn't have bought the Plextor. I've been > wanting faster burn times for a long while now. > I would disagree. It IS a good drive in all other aspects. However it is not a bootable device in the case of how you built your system. Fortunately there are other alternatives available. For people in your case I usually suggest installing a card reader device and using a CF card as a removable boot device. CF is ATA standard bootable, and virtually all modern motherboards support booting off such a device over USB interface. > As regards the power LED, the case connector is a unitized 3-pin > plastic connector meant for a 3 pin header. I'll have to reseat the > +5v and GND pins in a new 2-pin connector for this board, right? I > guess I'm not about to de-install the motherboard just so I can solder > an alternate 3-pin header on it. > No, just move one of the wires that is currently on the "outside" location of the 3 hole connector to the "center" position. A trivial task. Fix the cable, NOT the motherboard! Unfortunately there is no true standard yet, and, as Bryan pointed out, some boards and cases use a 3 pin header, and some use two. If you have the 3 pin header on the wires it is easy to move one wire in the housing to the center position. -- With our best regards, Maurice W. Hilarius Telephone: 01-780-456-9771 Hard Data Ltd. FAX: 01-780-456-9772 11060 - 166 Avenue email:maurice at harddata.com Edmonton, AB, Canada http://www.harddata.com/ T5X 1Y3 From cochranb at speakeasy.net Fri Dec 30 13:38:48 2005 From: cochranb at speakeasy.net (Robert L Cochran) Date: Fri, 30 Dec 2005 08:38:48 -0500 Subject: Asrock 939Dual-SATA2 Motherboard Strangeness -- nVidia PCI IDs on C51 (GF61x0+nF4x0) In-Reply-To: <43B4189B.6000501@harddata.com> References: <43B20D0B.1030808@speakeasy.net> <1135743239.4712.29.camel@bert64.oviedo.smithconcepts.com> <43B218A6.70705@speakeasy.net> <43B4189B.6000501@harddata.com> Message-ID: <43B53868.7040805@speakeasy.net> Maurice Hilarius wrote: >Robert L Cochran wrote: > > > >>Thanks Bryan. I guess I shouldn't have bought the Plextor. I've been >>wanting faster burn times for a long while now. >> >> >> >I would disagree. It IS a good drive in all other aspects. >However it is not a bootable device in the case of how you built your >system. >Fortunately there are other alternatives available. For people in your >case I usually suggest installing a card reader device and using a CF >card as a removable boot device. >CF is ATA standard bootable, and virtually all modern motherboards >support booting off such a device over USB interface. > > Thank you. Plextor may be a good drive brand, but this drive doesn't work in my system. Fedora Core 4 can't recognize either a DVD or CD put in it. I can't boot off a DVD or CD either. The older DVD drive I had wanted to retire does work -- it has an IDE interface. I think at the very least the packaging material included with the drive should have stated the drive only works in a certain range of motherboards. The product description on the website of the vendor I bought it from should have said the same, too. This is dishonest and meant to increase sales. Now I'm in "wait" mode in order to get an RMA. That's $152.00 tied up with no return on investment for me. This has taught me to check further into compatibility for such products. I think SATA optical drives were discussed in earlier threads here. Pity I didn't pay attention to them. Bob Cochran From zleite at mminternet.com Sat Dec 31 19:30:59 2005 From: zleite at mminternet.com (Z) Date: Sat, 31 Dec 2005 11:30:59 -0800 Subject: OT: K8WE floppy interface question In-Reply-To: <1135818322.2753.60.camel@lioness.protogeek.org> References: <1135818322.2753.60.camel@lioness.protogeek.org> Message-ID: <43B6DC73.4010105@mminternet.com> Robert G. (Doc) Savage wrote: >Would any owner of a Tyan K8WE motherboard please check the floppy >interface connector and tell me if it has all its pins? With my >motherboard oriented so that the CPU and DIMM sockets are farthest away >from me, the pins in positions #2, #3, and #4 missing from the >right-hand column? If yours are not, then the reason my floppy disk >doesn't work (and I can't update my FlashBIOS) is a manufacturing defect >and I have to pursue a warranty replacement. Otherwise... > >TIA > >-- Doc >Robert G. (Doc) Savage, BSE(EE), CISSP, RHCE | Fairview Heights, IL >Fedora Core 4 kernel 2.6.14-1.1653_FC4 on a P-III/M IBM Thinkpad A22p > ** Bob Costas for Baseball Commissioner ** > > > I have a K8W and the only pin missing is the usual key pin, # 4 I think. Z