From bfeist at speakeasy.net Thu Aug 4 17:09:15 2005 From: bfeist at speakeasy.net (Bruce Feist) Date: Thu, 04 Aug 2005 13:09:15 -0400 Subject: Fedora 4 DVD install image missing? Message-ID: <42F24BBB.202@speakeasy.net> I'm about to install Fedora 4 (64 bit version) on a new computer. So, I went to find the DVD image on the RH Fedora site, but found only CD images. Then I looked at the mirrors, and found that some of them had the DVD image as well as the CD images. How can this be? I downloaded a DVD image from one of the mirrors, and burned a DVD... Fedora's CD check during installation reported that it's defective. I'm currently trying again, with an image from a different mirror; I'm not optimistic that it'll work; we'll see. If it doesn't work, I'll either download the CD images from RedHat, or install from a 32-bit install image instead (if I get way too lazy). From b.j.smith at ieee.org Thu Aug 4 17:21:31 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 4 Aug 2005 10:21:31 -0700 (PDT) Subject: Fedora 4 DVD install image missing? In-Reply-To: <42F24BBB.202@speakeasy.net> Message-ID: <20050804172131.88022.qmail@web34110.mail.mud.yahoo.com> Bruce Feist wrote: > I'm about to install Fedora 4 (64 bit version) on a new > computer. So, I went to find the DVD image on the RH Fedora > site, but found only CD images. Then I looked at the > mirrors, and found that some of them had the DVD image as > well as the CD images. How can this be? Many HTTP clients are buggy and do not support >2GiB files. Red Hat avoids having to deal with that support issue by only allowing >2GiB downloads via FTP. In those cases, the FTP protocol (a file transfer protocol, not a stream protocol like HTTP) would at least give you an error of some sort. > I downloaded a DVD image from one of the mirrors, and > burned a DVD... Fedora's CD check during installation > reported that it's defective. I'm currently trying again, > with an image from a different mirror; I'm not optimistic > that it'll work; we'll see. If it doesn't work, I'll > either download the CD images from RedHat, or install from > a 32-bit install image instead (if I get way too lazy). Is the size of the .iso only ~2147MB? If so, you do not have an HTTP client that is capable of >2GiB downloads. -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From berryja at gmail.com Thu Aug 4 17:25:41 2005 From: berryja at gmail.com (Jonathan Berry) Date: Thu, 4 Aug 2005 12:25:41 -0500 Subject: Fedora 4 DVD install image missing? In-Reply-To: <42F24BBB.202@speakeasy.net> References: <42F24BBB.202@speakeasy.net> Message-ID: <8767947e05080410253f7ae632@mail.gmail.com> On 8/4/05, Bruce Feist wrote: > I'm about to install Fedora 4 (64 bit version) on a new computer. So, I > went to find the DVD image on the RH Fedora site, but found only CD > images. Then I looked at the mirrors, and found that some of them had > the DVD image as well as the CD images. How can this be? > > I downloaded a DVD image from one of the mirrors, and burned a DVD... > Fedora's CD check during installation reported that it's defective. I'm > currently trying again, with an image from a different mirror; I'm not > optimistic that it'll work; we'll see. If it doesn't work, I'll either > download the CD images from RedHat, or install from a 32-bit install > image instead (if I get way too lazy). I don't think RedHat keeps the DVD isos on their servers. You should use a mirror.
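A quick way to check for the truncation Bryan describes, and to verify the image before burning it, is a couple of shell commands against the downloaded file (a minimal sketch; the iso and SHA1SUM filenames are only examples of what a Fedora Core 4 mirror typically carries):

  ls -l FC4-x86_64-DVD.iso    # a download cut off at the 2 GiB limit sits right around 2147483647 bytes
  sha1sum FC4-x86_64-DVD.iso  # compare by hand against the mirror's SHA1SUM file, or:
  sha1sum -c SHA1SUM          # let sha1sum check every image listed in that file

If the checksum matches and the media check still fails, the problem is the burn (or DMA during the check), not the download.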
How are you downloading the file? Make sure the app can handle large files. If you can, I would suggest that you use BitTorrent instead. That way you will definitely get a good iso. If the media check fails again, try booting with "cdnodma" or "ide=nodma" (I think those are the flags) as there have been some problems with the media check and dma. If it passes with those flags, then reboot and install without them. Otherwise, use sha1sum to check the iso and burn it at a slow speed. Media check failing doesn't imply that the iso is bad, just that the disc is bad. Of course, if the iso is bad then the disc will be bad too. Jonathan From bfeist at speakeasy.net Thu Aug 4 18:02:14 2005 From: bfeist at speakeasy.net (Bruce Feist) Date: Thu, 04 Aug 2005 14:02:14 -0400 Subject: Fedora 4 DVD install image missing? In-Reply-To: <8767947e05080410253f7ae632@mail.gmail.com> References: <42F24BBB.202@speakeasy.net> <8767947e05080410253f7ae632@mail.gmail.com> Message-ID: <42F25826.7090203@speakeasy.net> Jonathan Berry wrote: >On 8/4/05, Bruce Feist wrote: > > >>I went to find the DVD image on the RH Fedora site, but found only CD >>images. >> >If you can, I would suggest that you use Bittorrent instead. > Sadly, I don't currently have a Linux system that I can trust to do it, and I don't know how to use BitTorrent under Windows (if it's possible at all). Thanks for the suggestion, though. From andy at andydeano.co.uk Thu Aug 4 18:08:32 2005 From: andy at andydeano.co.uk (Andy dean) Date: Thu, 4 Aug 2005 19:08:32 +0100 Subject: Fedora 4 DVD install image missing? In-Reply-To: <42F25826.7090203@speakeasy.net> References: <42F24BBB.202@speakeasy.net> <8767947e05080410253f7ae632@mail.gmail.com> <42F25826.7090203@speakeasy.net> Message-ID: <6545EF2B-F7C7-46FF-B08E-E1217CA41EB9@andydeano.co.uk> On 4 Aug 2005, at 19:02, Bruce Feist wrote: > Jonathan Berry wrote: > > >> On 8/4/05, Bruce Feist wrote: >> >> >>> I went to find the DVD image on the RH Fedora site, but found >>> only CD >>> images. >>> >> If you can, I would suggest that you use Bittorrent instead. >> > Sadly, I don't currently have a Linux system that I can trust to do > it, and I don't know how to use Bittorent under Windows (if it's > possible at all). > > Thanks for the suggestion, though. > > -- > amd64-list mailing list > amd64-list at redhat.com > https://www.redhat.com/mailman/listinfo/amd64-list > > > Hi, go over to http://azureus.sourceforge.net/, download and install the Windows version, then open the torrent file and start the download with Azureus. Cheers deano From berryja at gmail.com Thu Aug 4 18:11:04 2005 From: berryja at gmail.com (Jonathan Berry) Date: Thu, 4 Aug 2005 13:11:04 -0500 Subject: Fedora 4 DVD install image missing? In-Reply-To: <42F25826.7090203@speakeasy.net> References: <42F24BBB.202@speakeasy.net> <8767947e05080410253f7ae632@mail.gmail.com> <42F25826.7090203@speakeasy.net> Message-ID: <8767947e0508041111361c8891@mail.gmail.com> On 8/4/05, Bruce Feist wrote: > Jonathan Berry wrote: > >On 8/4/05, Bruce Feist wrote: > >>I went to find the DVD image on the RH Fedora site, but found only CD > >>images. > >> > >If you can, I would suggest that you use Bittorrent instead. > > > Sadly, I don't currently have a Linux system that I can trust to do it, > and I don't know how to use Bittorent under Windows (if it's possible at > all). > > Thanks for the suggestion, though. > It is very possible: http://www.bittorrent.com/ Or you can use the nice Java client, Azureus. http://azureus.sourceforge.net/ Both work just fine with Windows.
In fact, that is how I've gotten all of my Fedora DVD isos. Just get the install package, install it, and open the torrent file off the web. You might have to tweak some firewall settings if you have a router. Though, I think with Azureus it just worked last time I downloaded some isos. Jonathan From lamont at gurulabs.com Thu Aug 4 19:07:06 2005 From: lamont at gurulabs.com (Lamont R. Peterson) Date: Thu, 4 Aug 2005 13:07:06 -0600 Subject: Fedora 4 DVD install image missing? In-Reply-To: <42F24BBB.202@speakeasy.net> References: <42F24BBB.202@speakeasy.net> Message-ID: <200508041307.10716.lamont@gurulabs.com> On Thursday 04 August 2005 11:09am, Bruce Feist wrote: > I'm about to install Fedora 4 (64 bit version) on a new computer. So, I > went to find the DVD image on the RH Fedora site, but found only CD > images. Then I looked at the mirrors, and found that some of them had > the DVD image as well as the CD images. How can this be? > > I downloaded a DVD image from one of the mirrors, and burned a DVD... > Fedora's CD check during installation reported that it's defective. I'm > currently trying again, with an image from a different mirror; I'm not > optimistic that it'll work; we'll see. If it doesn't work, I'll either > download the CD images from RedHat, or install from a 32-bit install > image instead (if I get way too lazy). Here's another option: 1. Download DVD or CD iso(s) 2. Verify sha1sums 3. loopback mount iso(s) 4. copy all the content of the DVD/CDs to a directory on a file server (unfortunately, newer versions of NFS servers will not export any of the content of loopback mounted directories successfully; I used to do that and it worked well, but I have not taken the time to figure out how to get the NFS server to do it. Another option is to use unionfs, which can be successfully exported, though I haven't tested it). 5. Export via NFS (and/or FTP/HTTP). 6. Burn a CD using the boot.iso image in the images/ directory on the first CD or on the DVD. 7. Boot your system(s) using the boot.iso CD and do a network install. I have found this to be a much better way to install systems. I have done upgrade installs this way for my AMD64 box. You only use 1 CD (and it could be a little 50MB biz-card disc, if you want) and the network installation is MUCH faster than swapping discs or using the DVD. Of course, I also burn a DVD-/+RW disc to carry the newest version of Fedora with me. I would like to find a 2-sided DVD-/+RW disc to burn x86 on one side and AMD64 on the other. -- Lamont R. Peterson Senior Instructor Guru Labs, L.C. [ http://www.GuruLabs.com/ ] From gene at czarc.net Thu Aug 4 19:18:18 2005 From: gene at czarc.net (Gene Czarcinski) Date: Thu, 4 Aug 2005 15:18:18 -0400 Subject: x86_64 SMP kernels In-Reply-To: <200507301241.30517.gene@czarc.net> References: <200507301241.30517.gene@czarc.net> Message-ID: <200508041518.18382.gene@czarc.net> On Saturday 30 July 2005 12:41, Gene Czarcinski wrote: > For systems running the x86_64 SMP kernel, you need to look at > https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=164247 > > I have tested the patch and it appears to fix the OOPS which was causing me > a lot of headaches on my Athlon64 X2 4400+ The latest kernel in FC4 Testing fixes the problem ... 2.6.12-1.1411_FC4 I recommend that anyone with a multiprocessor system apply this update.
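For reference, Lamont's loopback-mount and NFS-export steps above translate roughly into the following commands (a sketch only; the mount point, target directory, iso name, and export options are examples, not the exact setup he used):

  mkdir -p /mnt/iso /srv/fc4/x86_64
  mount -o loop,ro FC4-x86_64-DVD.iso /mnt/iso        # step 3: loopback mount the image
  cp -a /mnt/iso/. /srv/fc4/x86_64/                   # step 4: copy the tree onto the file server
  umount /mnt/iso
  echo '/srv/fc4/x86_64 192.168.0.0/24(ro,sync)' >> /etc/exports   # step 5: export it
  exportfs -ra
  # steps 6-7: burn images/boot.iso to a CD, boot it, and point the
  # installer at NFS server:/srv/fc4/x86_64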
Gene From berryja at gmail.com Thu Aug 4 19:20:53 2005 From: berryja at gmail.com (Jonathan Berry) Date: Thu, 4 Aug 2005 14:20:53 -0500 Subject: Fedora 4 DVD install image missing? In-Reply-To: <200508041307.10716.lamont@gurulabs.com> References: <42F24BBB.202@speakeasy.net> <200508041307.10716.lamont@gurulabs.com> Message-ID: <8767947e05080412204b16e71f@mail.gmail.com> On 8/4/05, Lamont R. Peterson wrote: > On Thursday 04 August 2005 11:09am, Bruce Feist wrote: > > I'm about to install Fedora 4 (64 bit version) on a new computer. So, I > > went to find the DVD image on the RH Fedora site, but found only CD > > images. Then I looked at the mirrors, and found that some of them had > > the DVD image as well as the CD images. How can this be? > > > > I downloaded a DVD image from one of the mirrors, and burned a DVD... > > Fedora's CD check during installation reported that it's defective. I'm > > currently trying again, with an image from a different mirror; I'm not > > optimistic that it'll work; we'll see. If it doesn't work, I'll either > > download the CD images from RedHat, or install from a 32-bit install > > image instead (if I get way too lazy). > > Here's another option: > > 1. Download DVD or CD iso(s) > 2. Verify shasums > 3. loopback mount iso(s) > 4. copy all the content of the DVD/CDs to a directory on a file server > (unfortunately, newer versions of NFS servers will not export any of the > content of loopback mounted directories successfully, I used to do that and > it worked well, but I have not taken the time to figure out how to get the > NFS server to do it. Another option is to use unionfs, which can be > successfully exported, though I haven't tested it). Really? I updated one Fedora box that doesn't have a DVD drive (CD only) by loopback mounting the DVD iso on a directory and exporting that directory via NFS from my FC4 x86_64 laptop. Worked great. > 5. Export via NFS (and/or FTP/HTTP. > 6. Burn a CD using the boot.iso image in the images/ directory on the first > CD or on the DVD. > 7. Boot your system(s) using the boot.iso CD and do a network install. > > I have found this to be much better way to install systems. I have done > upgrade installs this way for my AMD64 box. You only use 1 CD (and it could > be a little 50MB biz-card disc, if you want) and the network installation is > MUCH faster than swapping discs or using the DVD. I don't know about it being faster. I don't see there is any advantage if your computer has a DVD drive. It still requires you to get the isos, which seems to be the problem right now. Also the OP said he didn't have a Linux computer that he could trust to do bittorrent. I'm not sure what he meant exactly, but I'd say that means he doesn't have one that could do NFS either. > Of course, I also burn a DVD-/+RW disc to carry the newest version of Fedora > with me. I would like to find a 2-sided DVD-/+RW disc to burn x86 on one side > and AMD64 on the other. That would be cool. I have seen a pressed 2-sided DVD, no writables though, not that I've looked. Jonathan From remberson at edgedynamics.com Tue Aug 16 16:56:39 2005 From: remberson at edgedynamics.com (Richard Emberson) Date: Tue, 16 Aug 2005 09:56:39 -0700 Subject: Fedora SMP dual core, dual AMD 64 processor system Message-ID: <43021AC7.9080508@edgedynamics.com> I am interested in building a Fedora SMP dual core, dual AMD 64 bit processor system with, say, 8 Gb of memory which will serve as my development work station. 
Is there a site that lists such systems/boards known to work with FC4 SMP? And/or does this community have any recommendations? Thanks very much. Richard From jdreese at bucknell.edu Tue Aug 16 19:03:54 2005 From: jdreese at bucknell.edu (Jeremy Dreese) Date: Tue, 16 Aug 2005 15:03:54 -0400 Subject: Defaulting Compilations to 32-bit Message-ID: <4302389A.70703@bucknell.edu> We have an environment with both 64-bit servers and 32-bit desktop systems. Code that is compiled on the servers must be able to run on the desktop systems as well. I know that you can compile in 32-bit mode with the -m32 option, but is there a way to have gcc/g++ compile in 32-bit by default (without specifying the option every time)? Outside of creating some sort of shell alias for all users (e.g. gcc="gcc -m32") I haven't been able to find a way to do this. Any help would be appreciated. Jeremy From whoknows at onlinehome.de Tue Aug 16 20:38:22 2005 From: whoknows at onlinehome.de (Stefan Schroepfer) Date: Tue, 16 Aug 2005 22:38:22 +0200 Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <43021AC7.9080508@edgedynamics.com> References: <43021AC7.9080508@edgedynamics.com> Message-ID: <43024EBE.5040104@onlinehome.de> Richard Emberson wrote: > I am interested in building a Fedora SMP dual core, dual AMD 64 bit > processor system > with, say, 8 Gb of memory which will serve as my development work station. Just to be sure: Are you looking for a Dual-Socket-940 system, equipped with two Dual-Core Opteron CPUs (totalling in four CPU cores)? Or is it a Single-Socket-939 System, equipped with one Dual-Core Athlon64 X2 CPU, what you are looking for? In the latter case, it will be difficult (if not impossible) to acquire appropriate memory modules to reach 8 GB of RAM. Single-Socket-939 Mainboards usually have four DIMM sockets, requiring unregistered memory. So far, I didn't hear about available unregistered (DDR-SDRAM-) DIMM modules greater than 1 GB. Regards, Stefan Schroepfer From b.j.smith at ieee.org Tue Aug 16 21:02:10 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Tue, 16 Aug 2005 14:02:10 -0700 (PDT) Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <43024EBE.5040104@onlinehome.de> Message-ID: <20050816210210.1385.qmail@web34103.mail.mud.yahoo.com> Stefan Schroepfer wrote: > Just to be sure: Are you looking for a Dual-Socket-940 > system, equipped with two Dual-Core Opteron CPUs (totalling > in four CPU cores)? Or is it a Single-Socket-939 System, > equipped with one Dual-Core Athlon64 X2 CPU, > what you are looking for? Don't forget about Socket-940 Opteron 1xx, including new dual-core Opteron 1x5 models. I know the next generation of Opteron 1xx series will be Socket-939 with unregistered DIMM support, but there are a number of Socket-940 Opteron 1xx mainboards and processors now. Such as the Foxconn NFPIK8AA-8EKRS for about $230+: http://www.foxconnchannel.com/products_motherboard_2.cfm?pName=NFPIK8AA-8EKRS No PCI-X, but it is the full nForce Pro 2200 + 2050 combo. There is a full PCIe x16 slot and a PCIe x4 slot. That's room for PCIe x8 storage and a PCIe x4 NIC, good for an entry server (along with 8 SATA channels and 2 GbE ports on-board).
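One hedged answer to Jeremy's question above about defaulting to 32-bit: besides a shell alias, a small wrapper script placed ahead of the real compiler in the PATH makes -m32 the default for every user (a sketch only; it assumes gcc lives in /usr/bin and that /usr/local/bin comes earlier in PATH, and the same idea applies to g++):

  #!/bin/sh
  # /usr/local/bin/gcc -- force 32-bit output by default (editor's sketch)
  exec /usr/bin/gcc -m32 "$@"

Users who really do want a 64-bit build can still call /usr/bin/gcc directly, and a matching /usr/local/bin/g++ wrapper works the same way.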
> In the latter case, it will be difficult > (if not impossible) to acquire appropriate memory modules to reach > 8 GB of RAM. Single-Socket-939 Mainboards usually have > four DIMM sockets, requiring unregistered memory. And JEDEC specs only allow 1 DIMM per DDR400 channel. You have to drop to DDR333 (or DDR266) to use 2 DIMMs per channel. > So far, I didn't hear about available unregistered > (DDR-SDRAM-) DIMM modules greater than 1 GB. Nope. But with the Foxconn NFPIK8AA-8EKRS, you can use even DDR400 DIMMs (Registered) and achieve 8GiB. -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From gene at czarc.net Tue Aug 16 22:00:55 2005 From: gene at czarc.net (Gene Czarcinski) Date: Tue, 16 Aug 2005 18:00:55 -0400 Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <43021AC7.9080508@edgedynamics.com> References: <43021AC7.9080508@edgedynamics.com> Message-ID: <200508161800.56019.gene@czarc.net> On Tuesday 16 August 2005 12:56, Richard Emberson wrote: > I am interested in building a Fedora SMP dual core, dual AMD 64 bit > processor system > with, say, 8 Gb of memory which will serve as my development work station. > > Is there a site that lists such systems/boards known to work with FC4 smp? > and/or > does this community have any recommendations? I purchased an Athlon64 X2 4400+ and an ABIT AN8 SLI motherboard from http://www.monarchcomputer.com/ 1. their prices are not unreasonable 2. they seem to have a good handle on what motherboards work with the X2 processors Gene From b.j.smith at ieee.org Tue Aug 16 23:38:18 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Tue, 16 Aug 2005 16:38:18 -0700 (PDT) Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <200508161800.56019.gene@czarc.net> Message-ID: <20050816233818.12964.qmail@web34106.mail.mud.yahoo.com> Gene Czarcinski wrote: > I purchased a Athlon64 X2 4400+ and an ABIT AN8 SLI > motherboard from http://www.monarchcomputer.com/ > 1. their prices are not unreasonable > 2. they seem to have a good handle on what motherboards > work with the X2 processors Monarch Computer is AMD's #1 Tier-2/Whitebox OEM. They get stuff before other people, and they know how to build boxen well -- at least for what is available in retail mainboards. For servers, I/O is the key. You have to be careful with many mainboards because vendors cut I/O for cost in traces, etc... E.g., http://www.samag.com/documents/sam0411b/0411b_f6.htm [ Full article with 7 diagrams of PC design, including Opteron: http://www.samag.com/documents/sam0411b/ ] The nForce4 chipset, like in the new crop of Socket-939 solutions, is clearly desktop/workstation. The nForce Pro 2200 and, optionally, 2050 (2200+2050) are more workstation/server oriented, and are found even in the single Socket-940 Foxconn mainboard I posted. But even then, all versions of the nForce series lack PCI-X, which is a problem for servers right now. Because if you want server I/O, you want PCI-X right now. There are very few (if any?) mainboards with a single Socket-940 that has an AMD8131/8132 IC for dual-channel PCI-X 1.0/2.0. And even some dual-Socket-940 mainboards lack one. I was hopeful the new Broadcom/ServerWorks HT1000 chipset would take off. It's a low-cost, single IC chipset with a single PCI-X channel -- ideal for delivering a single Socket-940 with decent server I/O for <$200.
Unfortunately, I've only seen it implemented with the HT2000 (HT2000+HT1000), which is its optional bigger brother with PCIe channels on dual Socket-940 for $500+. I might as well go with an nForce Pro 2200+2050 + AMD8131 like the Tyan S2895 instead for the same price. Although PCIe is definitely good for storage and other I/O as well as video, the only "intelligent" RAID storage controller I know of for PCIe is the LSI Logic MegaRAID 320-2E (2-channel U320, PCIe x8 card). It's actually using the IOP332, which is a "hack" of the IOP331 with a PCI-X to PCIe bridge (not ideal). Now there are some PCIe cards "in the works." A new series of RAID cards should show up using the Broadcom BCM8603 soon. It's an 8-channel SAS (8m, 300MBps Serial Attached SCSI, also naturally capable of 1m, 300MBps SATA-IO**) hardware RAID controller that can arbitrate _directly_ to either PCI-X or PCIe x8 (and can even bridge between the two for more embedded solutions) and up to 768MB of DRAM. It's not like Broadcom's current "software" driver RAIDCore PCI-X cards, it's a true, intelligent IC for $60 in quantity (meaning boards should be ~$300+). And its universal SAS/SATA and PCI-X/PCIe support makes it a "universal solution" for all to use. So the question is what I/O do you need now? The Foxconn can definitely handle a lot of I/O, but it's only PCIe. That's good for getting new PCIe x4 server NICs, but the PCIe x8 storage NICs are virtually non-existent right now. I'm hoping that changes soon with the BCM8603 IC being adopted, but I haven't heard a thing yet. Which means that PCI-X is probably your best bet for servers. The good news is that Monarch Computer _does_ have some older dual Socket-940 Tyan mainboards with the AMD8131 for as little as $300. But whether they support the new x2 processors, I don't know, and they probably don't. So you're going to spend a bit more for them -- at least until someone releases a good, low-cost, single Socket-940 with the Broadcom HT1000 (if ever). -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From b.j.smith at ieee.org Tue Aug 16 23:43:37 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Tue, 16 Aug 2005 16:43:37 -0700 (PDT) Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <20050816233818.12964.qmail@web34106.mail.mud.yahoo.com> Message-ID: <20050816234337.4910.qmail@web34104.mail.mud.yahoo.com> "Bryan J. Smith" wrote: > ... Broadcom BCM8603 soon. It's an 8-channel SAS (8m, > 300MBps Serial Attached SCSI, also naturally capable of 1m, > 300MBps SATA-IO**) hardware RAID controller that can > arbitrate _directly_ to either PCI-X or PCIe x8 (and can > even bridge between the two for more embedded solutions) > and up to 768MB of DRAM. Didn't finish a thought there ... **NOTE: SATA-II != SATA-IO = 3GHz (300MBps) SATA-IO is now the _official_ name for 3GHz/300MBps (and, eventually, 600MBps) Serial ATA (SATA). It requires the use of a twisted pair cable and other signaling. SATA-II has become a "marketing name" that does _not_ guarantee 300MBps, and might only be 150MBps. So consider SATA-II much like USB 2.0 -- you have to actually have a SATA-IO capable SATA-II controller to get 3GHz (300MBps), much like you have to have an EHCI capable USB 2.0 controller to get 480Mbps (60MBps). -- Bryan J.
Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From bill at cse.ucdavis.edu Wed Aug 17 00:39:53 2005 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Tue, 16 Aug 2005 17:39:53 -0700 Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <20050816233818.12964.qmail@web34106.mail.mud.yahoo.com> References: <200508161800.56019.gene@czarc.net> <20050816233818.12964.qmail@web34106.mail.mud.yahoo.com> Message-ID: <20050817003953.GC8459@cse.ucdavis.edu> > The nForce4 chipset, like in the new crop of Socket-939 > solutions, are clearly desktop/workstation. The nForce Pro > 2200 and, optional, 2050 (2200+2050) are more > workstation/server designed, and found in even the single > Socket-940 Foxconn mainboard I posted. But even then, all Hrm, I've heard the nforce4 and 2200 are the same silicon, just a few tweaks in the packaging. Much like the opteron/AMD64 difference. > versions of the nForce series lacks PCI-X, which is a problem > for servers right now. Why? > Because if you want server I/O, you want PCI-X right now. > There are very few (if any?) mainboards with a single > Socket-940 that has a AMD8131/8132 IC for dual-channel PCI-X > 1.0/2.0. And even some dual-Socket-940 mainboards lack one. Correct, although how many servers really need more than 1GB/sec? That's quite a bit even with, say, 16 drives hooked up. > Although PCIe is definitely good for storage and other I/O as > well as video, the only "intelligent" RAID storage controller > I know of for PCIe is the LSI Logic MegaRaid 320-2E > (2-channel U320, PCIe x8 card). It's actually using the > IOP332 which is a "hack" of the IOP331 with a PCI-X to PCIe > bridge (not ideal). Sure, using a bridge is not ideal: some extra transistors, and probably a few hundred ns of extra delay. But other than aesthetics, why should anyone care? Intel makes reliable raid controllers and many a raid card uses them to provide what the market wants. Seems rather strange to advocate pci-x just because a popular pci-e solution uses a bridge. The migration is happening: there are both AGP video cards with a pci-e bridge, and pci-e video cards with an agp bridge... who cares? Even with video cards that are much more sensitive to bandwidth and latency there is no practical difference. Keep in mind the IOP has an internal bridge from what I can tell; the video cards I'm familiar with have an external bridge. > Now there are some PCIe cards "in the works." A new series > of RAID cards should show up using the Broadcom BCM8603 soon. > It's an 8-channel SAS (8m, 300MBps Serial Attached SCSI, > also naturally capable of 1m, 300MBps SATA-IO**) hardware > RAID controller that can arbitrate _directly_ to either PCI-X > or PCIe x8 (and can even bridge between the two for more > embedded solutions) and up to 768MB of DRAM. It's not like > Broadcom's current "software" driver RAIDCore PCI-X cards, > it's a true, intelligent IC for $60 in quantity (meaning > boards should be ~$300+). And it's universal SAS/SATA and > PCI-X/PCIe support makes it an "universal solution" for all > to use. Personally I'd rather have JBOD; I've yet to see a RAID controller faster than software RAID. I also like the standardized interface so if I have a few dozen servers I don't have to track the functionality I want via different command line tools, serial ports, web interfaces, front panels, and even custom window interfaces. Not to mention the biggie, what happens if the raid card dies...
being able to migrate to a random collection of hardware can be quite useful in an emergency. > So the question is what I/O do you need now? The Foxconn can > definitely handle a lot of I/O, but it's only PCIe. That's > good for getting new PCIe x4 server NICs, but the PCIe x8 > storage NICs are virtually non-existant right now. I'm I bought one; many vendors sell them. What is the big deal? Megaraid, tekram, lsi-logic, promise, even straight from intel if you want. Of course many more are coming from the likes of ICP-vortex, Adaptec, and just about anyone who wants to relabel and market an adapter in this space. > hoping that changes soon with the BCM8603 IC being adopted, > but I haven't heard a thing yet. > > Which means that PCI-X is probably your best bet for servers. Why? Slower, higher latency, lower scaling, and lower bandwidth. Not to mention the chipset quality has gone up. I.e. even the same chip (pci-x or pci-e with or without a bridge) tends to be faster/lower latency under pci-e than pci-x. From what I can tell it's mostly the quality of the new nvidia chipset that is the difference. Interconnect vendors (again more sensitive to latency and bandwidth) are quite excited to be posting new numbers with the nvidia chipset. Not that it matters that much for a collection of 8-16 disks doing 30-60MB/s on a server. -- Bill Broadley Computational Science and Engineering UC Davis From maurice at harddata.com Wed Aug 17 02:19:56 2005 From: maurice at harddata.com (Maurice Hilarius) Date: Tue, 16 Aug 2005 20:19:56 -0600 Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <20050816233818.12964.qmail@web34106.mail.mud.yahoo.com> References: <20050816233818.12964.qmail@web34106.mail.mud.yahoo.com> Message-ID: <43029ECC.3000509@harddata.com> Bryan J. Smith wrote: > >Monarch Computer is AMD's #1 Tier-2/Whitebox OEM. They get >stuff before other people, and they know how to build boxen >well -- at least for what is available in retail mainboards. > > Actually, AMD is quite fair with all of their "rollout partners" and provides very good support on products and availability. >For servers, I/O is the key. You have to be careful with >many mainboards because vendors cut I/O for cost in traces, >etc... E.g., > http://www.samag.com/documents/sam0411b/0411b_f6.htm >[ Full article with 7 diagrams of PC design, including >Opteron: http://www.samag.com/documents/sam0411b/ ] > >The nForce4 chipset, like in the new crop of Socket-939 >solutions, are clearly desktop/workstation. The nForce Pro >2200 and, optional, 2050 (2200+2050) are more >workstation/server designed, and found in even the single >Socket-940 Foxconn mainboard I posted. But even then, all >versions of the nForce series lacks PCI-X, which is a problem >for servers right now. > > I disagree. For some examples: Tyan S2891, S2892, and S2895 boards, all dual socket 940, nVidia chipset. http://www.tyan.com/products/html/opteron.html Asus K8N-DL, and K8N-DRE http://usa.asus.com/prog/spec.asp?m=K8N-DL&langs=09 K8N-DRE is not yet on the ASUS website, but is shipping to builders next week. Arima SW-300 and SW-310 http://www.amdboard.com/arima_sw310.html http://www.amdboard.com/arima_sw300.html SuperMicro's A+ Opteron motherboard series includes an nForce solution, called the H8DCE http://www.supermicro.com/Aplus/motherboard/ >Because if you want server I/O, you want PCI-X right now. >There are very few (if any?) mainboards with a single >Socket-940 that has a AMD8131/8132 IC for dual-channel PCI-X >1.0/2.0.
And even some dual-Socket-940 mainboards lack one. > > > While this is the case for some types of cards, PCI Express will easily beat it, given availability of boards. That is why the newest Myrinet and Infiniband cards are all PCI-Express. >I was hopeful the new Broadcom/ServerWorks HT1000 chipset >would take off. It's a low-cost, single IC chipset with a >single PCI-X channel -- ideal for delivering a single >Socket-940 with decent server I/O for <$200. Unfortunately, >I've only seen it implemented with the HT2000 >(HT2000+HT1000), which is its optional bigger brother with >PCIe channels on dual Socket-940 for $500+. I might as well >go with a nForce Pro 2200+2050 + AMD8131 like the Tyan S2895 >instead for the same price. > > > The Serverworks boards are just launching now. More detail is not possible at this time due to non-disclosure agreement constraints. Wait a couple of weeks and look again at Arima and Supermicro/A+. >Although PCIe is definitely good for storage and other I/O as >well as video, the only "intelligent" RAID storage controller >I know of for PCIe is the LSI Logic MegaRaid 320-2E >(2-channel U320, PCIe x8 card). It's actually using the >IOP332 which is a "hack" of the IOP331 with a PCI-X to PCIe >bridge (not ideal). > >Now there are some PCIe cards "in the works." A new series >of RAID cards should show up using the Broadcom BCM8603 soon. > It's an 8-channel SAS (8m, 300MBps Serial Attached SCSI, >also naturally capable of 1m, 300MBps SATA-IO**) hardware >RAID controller that can arbitrate _directly_ to either PCI-X >or PCIe x8 (and can even bridge between the two for more >embedded solutions) and up to 768MB of DRAM. It's not like >Broadcom's current "software" driver RAIDCore PCI-X cards, >it's a true, intelligent IC for $60 in quantity (meaning >boards should be ~$300+). And it's universal SAS/SATA and >PCI-X/PCIe support makes it an "universal solution" for all >to use. >... >Which means that PCI-X is probably your best bet for servers. > The good news is that Monarch Computer _does_ have some >older dual Socket-940 Tyan mainboards with the AMD8131 for as >little as $300. But whether they support the new x2 >processors, I don't know, and they probably don't. So you're >going to spend a bit more for them -- at least until someone >releases a good, low-cost, single Socket-940 with the >Broadcom HT1000 (if ever). > > > You work for Monarch or something? Lots of vendors have proper solutions. However, as was earlier mentioned, if you want more than 4GB RAM, one should consider Socket 940 boards. As for PCI-X, yes, that is useful, and so is PCI-Express, so buy a Tyan or SuperMicro or ASUS, or Arima board, and get both! -- With our best regards, Maurice W. Hilarius Telephone: 01-780-456-9771 Hard Data Ltd. FAX: 01-780-456-9772 11060 - 166 Avenue email:maurice at harddata.com Edmonton, AB, Canada http://www.harddata.com/ T5X 1Y3 From remberson at edgedynamics.com Thu Aug 18 14:14:41 2005 From: remberson at edgedynamics.com (Richard Emberson) Date: Thu, 18 Aug 2005 07:14:41 -0700 Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <20050816233818.12964.qmail@web34106.mail.mud.yahoo.com> References: <20050816233818.12964.qmail@web34106.mail.mud.yahoo.com> Message-ID: <430497D1.8030305@edgedynamics.com> Bryan, your article was very interesting.
Before one buys a motherboard one would like to see its the system component interconnect architecture diagram (as appears in your article) just to make sure they are not cutting corners, etc. Can I assume that Figure 7 in the article is the Tyan S2895? The system component interconnect architecture shown in Figure 7 looks like just what I want. Thanks Richard Bryan J. Smith wrote: >Gene Czarcinski wrote: > > >>I purchased a Athlon64 X2 4400+ and an ABIT AN8 SLI >>motherboard from http://www.monarchcomputer.com/ >>1. their prices are not unreasonable >>2. they seem to have a good handle on what motherboards >>work with the X2 processors >> >> > >Monarch Computer is AMD's #1 Tier-2/Whitebox OEM. They get >stuff before other people, and they know how to build boxen >well -- at least for what is available in retail mainboards. > >For servers, I/O is the key. You have to be careful with >many mainboards because vendors cut I/O for cost in traces, >etc... E.g., > http://www.samag.com/documents/sam0411b/0411b_f6.htm >[ Full article with 7 diagrams of PC design, including >Opteron: http://www.samag.com/documents/sam0411b/ ] > >The nForce4 chipset, like in the new crop of Socket-939 >solutions, are clearly desktop/workstation. The nForce Pro >2200 and, optional, 2050 (2200+2050) are more >workstation/server designed, and found in even the single >Socket-940 Foxconn mainboard I posted. But even then, all >versions of the nForce series lacks PCI-X, which is a problem >for servers right now. > >Because if you want server I/O, you want PCI-X right now. >There are very few (if any?) mainboards with a single >Socket-940 that has a AMD8131/8132 IC for dual-channel PCI-X >1.0/2.0. And even some dual-Socket-940 mainboards lack one. > >I was hopeful the new Broadcom/ServerWorks HT1000 chipset >would take off. It's a low-cost, single IC chipset with a >single PCI-X channel -- ideal for delivering a single >Socket-940 with decent server I/O for <$200. Unfortunately, >I've only seen it implemented with the HT2000 >(HT2000+HT1000), which is its optional bigger brother with >PCIe channels on dual Socket-940 for $500+. I might as well >go with a nForce Pro 2200+2050 + AMD8131 like the Tyan S2895 >instead for the same price. > >Although PCIe is definitely good for storage and other I/O as >well as video, the only "intelligent" RAID storage controller >I know of for PCIe is the LSI Logic MegaRaid 320-2E >(2-channel U320, PCIe x8 card). It's actually using the >IOP332 which is a "hack" of the IOP331 with a PCI-X to PCIe >bridge (not ideal). > >Now there are some PCIe cards "in the works." A new series >of RAID cards should show up using the Broadcom BCM8603 soon. > It's an 8-channel SAS (8m, 300MBps Serial Attached SCSI, >also naturally capable of 1m, 300MBps SATA-IO**) hardware >RAID controller that can arbitrate _directly_ to either PCI-X >or PCIe x8 (and can even bridge between the two for more >embedded solutions) and up to 768MB of DRAM. It's not like >Broadcom's current "software" driver RAIDCore PCI-X cards, >it's a true, intelligent IC for $60 in quantity (meaning >boards should be ~$300+). And it's universal SAS/SATA and >PCI-X/PCIe support makes it an "universal solution" for all >to use. > >So the question is what I/O do you need now? The Foxconn can >definitely handle a lot of I/O, but it's only PCIe. That's >good for getting new PCIe x4 server NICs, but the PCIe x8 >storage NICs are virtually non-existant right now. 
I'm >hoping that changes soon with the BCM8603 IC being adopted, >but I haven't heard a thing yet. > >Which means that PCI-X is probably your best bet for servers. > The good news is that Monarch Computer _does_ have some >older dual Socket-940 Tyan mainboards with the AMD8131 for as >little as $300. But whether they support the new x2 >processors, I don't know, and they probably don't. So you're >going to spend a bit more for them -- at least until someone >releases a good, low-cost, single Socket-940 with the >Broadcom HT1000 (if ever). > > > > From b.j.smith at ieee.org Thu Aug 18 16:16:59 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 18 Aug 2005 09:16:59 -0700 (PDT) Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <430497D1.8030305@edgedynamics.com> Message-ID: <20050818161659.49081.qmail@web34105.mail.mud.yahoo.com> Richard Emberson wrote: > Bryan, your article was very interesting ... It's the 2nd (of currently 3) article in my "Dissecting" series. 2004 April was on ATA RAID Options. 2005 September (now) is on Virtual Tape Libraries (VTLs). > Can I assume that Figure 7 in the article is the Tyan > S2895? Actually Figure 7 is of a non-existent design. This is because mainboard vendors don't like to limit I/O if only one CPU is used. The CK04 Pro is the nVidia Pro 2200. There is also the nVidia Pro 2050. And then the AMD8131 (and 8132). My Figure 7 only used the first (CPU#0) and last (CPU #1). The Tyan S2895 uses all 3. On the Tyan S2895, the 2200 and 8131 are attached to CPU#0. The 2050 is attached to CPU#1. As far as 4-way designs go, Figure 5 is the HP DL585. There are two (2) AMD8131 ICs, one attached to half of the Opterons. Some on-board peripherals might be attached to some. The Sun SunFire V40z is actually a bit better (based on another company's design) in the I/O department (the management IC is another story compared to the HP ;-). It uses three (3) AMD8131 ICs, two attached to one Opteron, and one attached to another Opteron. In the case of the two attached to one Opteron -- the resulting four (4) PCI-X channels are single 133MHz PCI-X channels. That's 4 slots of dedicated 1GBps channels of PCI-X I/O on one CPU. The other PCI-X channels are used for on-board peripherals and slower 66MHz slots. -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From b.j.smith at ieee.org Thu Aug 18 16:26:12 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 18 Aug 2005 09:26:12 -0700 (PDT) Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <20050818161659.49081.qmail@web34105.mail.mud.yahoo.com> Message-ID: <20050818162612.81522.qmail@web34114.mail.mud.yahoo.com> "Bryan J. Smith" wrote: > Actually Figure 7 is of a non-existant design. BTW, just some notes ... 1. The article, including figures, was written in 2004 July 2. The article was published in 2004 October (November issue) 3. The nForce Pro 2200 + 2050 were not announced until _after_ the article came out. The design itself was based on the rumored "CK04 Pro" IWill mainboard shown in Taiwan around the time I finished the article.
It only uses the nForce Pro 2200 + AMD8131, but puts them all on CPU#0. The nForce Pro 2200 + AMD8131 on CPU#0 with nForce Pro 2050 on CPU#1, is the common implementation in the Tyan S2895. There is only one other chip/chipset option I know of that has PCI-X slots, the Broadcom/ServerWorks HT1000/2000 series. But I've only seen one implementation of it, and it's always with the HT1000+2000 combination (both on the same CPU? Or different? I actually don't know). Frankly, I'd like to see a $150-200 HT1000 single Socket-940 Opteron 1xx mainboard. That would be a "killer" entry-level server board. -- Bryan J. Smith | Sent from Yahoo Mail mailto:b.j.smith at ieee.org | (please excuse any http://thebs413.blogspot.com/ | missing headers) From b.j.smith at ieee.org Fri Aug 19 01:48:39 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 18 Aug 2005 20:48:39 -0500 Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <43029ECC.3000509@harddata.com> References: <20050816233818.12964.qmail@web34106.mail.mud.yahoo.com> <43029ECC.3000509@harddata.com> Message-ID: <1124416119.6072.33.camel@bert64.mobile.smithconcepts.com> On Tue, 2005-08-16 at 20:19 -0600, Maurice Hilarius wrote: > Actually AMD is quite fair with all of their "rollout partners" and > provide very good support on products and availability. Yes, I know. AMD is actually meeting demand, hence their lawsuit against Intel when they are artificially reducing shipments at HP, IBM, etc... Simply put, HP has a backlog that is _not_ due to availability. ;-> > I disagree. > For some examples: ... cut ... I said the nVidia nForce chipsets themselves _lack_ PCI-X. Yes, some vendors are tunneling in the AMD8131 (or AMD8132). But my point was that _increases_cost_. I wasn't saying vendors who add the AMD8131 are bad. I was just saying the design _increases_cost_. > While this is the case for some types of cards, PCI Express will > easily beat it, given availability of boards. > That is why the newest Myrinet, and Infiniband cards are all PCI- > Express. Yes, but the PCIe storage cards are _no_where_. That's the problem. I like the HT-1000 because it's a single chip, probably low cost, and it gives you _both_ a PCI-X channel _and_ a PCIe x8 in a single chip. That would make for a powerful, but inexpensive, "entry-level" server. Now only if the products were available. ;-> > You work for Monarch or something? No. They are the only ones I have dealt with though. > Lots of vendors have p[roper soilutions. I would never suggest otherwise, but there are also a lot of system assemblers and, even more so, consultants who don't. They are selling non-PCI-X systems, and putting in storage controllers on the legacy PCI bus (yikes!). > However, as was earlier mentioned, if you want more than 4GB RAM, one > should consider Socket 940 boards. > As for PCI-X, yes, that is useful, and so is PCI-Express, so buy a > Tyan or SuperMicro or ASUS, or Arima board, and get both! As someone who has rolled out S2885 and, more recently, S2895 systems to clients, I agree. -- Bryan J. Smith b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------------------------- The best things in life are NOT free - which is why life is easiest if you save all the bills until you can share them with the perfect woman From b.j.smith at ieee.org Fri Aug 19 02:04:34 2005 From: b.j.smith at ieee.org (Bryan J. 
Smith) Date: Thu, 18 Aug 2005 21:04:34 -0500 Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <20050817003953.GC8459@cse.ucdavis.edu> References: <200508161800.56019.gene@czarc.net> <20050816233818.12964.qmail@web34106.mail.mud.yahoo.com> <20050817003953.GC8459@cse.ucdavis.edu> Message-ID: <1124417074.6072.50.camel@bert64.mobile.smithconcepts.com> On Tue, 2005-08-16 at 17:39 -0700, Bill Broadley wrote: > Why? Please point out a _true_, intelligent RAID card for PCIe. The only one I know of is the LSI Logic MegaRAID 320-3E. And even it is using the Intel IOP332, which is really an IOP331 with an internal bridge from PCI-X to PCIe. Until the Broadcom BCM8603 and other microcontroller/ASIC solutions appear, there are virtually _no_ intelligent RAID cards out there for PCIe. > Correct, although how many servers really need more than 1GB/sec, > thats quite a bit even with say 16 drives hooked up. That's if you _have_ PCI-X (or 0.5GBps out of 64-bit PCI 66MHz). If you don't, all you get is 133MBps of shared, legacy 32-bit PCI. I have seen too many small system assemblers and consultants putting in PCI64/PCI-X storage controllers on the shared, legacy 32-bit PCI. That wasn't even good 6 years ago! ;-> > Intel makes reliable raid controllers and many a raid card uses them to > provide what the market wants. I think you mean Intel makes reliable RAID microcontrollers. Yes, the newer XScale (IOP32x/33x) are great. I wouldn't say otherwise. [ BTW, the older i960 (IOP30x) are rather lackluster today though. But people still buy products based on them, and wonder why they can't break 50-60MBps. ] > Seems rather strange to advocate pci-x just because a popular pci-e > solution uses a bridge. No, what I said is that it's the _only_ solution on the market. There are _0_ others. My comments on its being a hack were just an added comment, not my main argument. > The migration is happening there are both AGP video cards with a pci-e > bridge, and pci-e video cards with an agp bridge... who cares? I'm not talking about video cards, I'm talking about _server_ I/O. And, BTW, there _are_ performance/reliability issues with the AGP-PCIe bridges -- just FYI. On a desktop, that's not an issue. On a server, that can be. > Even with video cards that are much more sensitive > to bandwidth and latency there is no practical difference. Again, I don't care about the reliability of a desktop. I'm talking about a server. There is a _dearth_ of PCIe storage controllers, and the only one I know of isn't even native. I'm _not_ saying the LSI Logic MegaRAID 320-2E is a "bad" solution. I said it's the _only_ solution available, and it's not even native. > Keep in mind the IOP has an internal bridge from what I can tell, the > video cards I'm familiar with have an external bridge. Please stop talking about video cards; they have *0* to do with what I'm talking about. Again, there is virtually _no_ RAID solution for PCIe other than the LSI Logic MegaRAID 320-2E right now. That means either you have to pick it, or you need to get a mainboard with PCI-X slots. > Personally I'd rather have JBOD, I've yet to see a Raid controller > faster than software RAID. I'm not going there, but I disagree. I don't like duplicating RAID-1 streams over the interconnect, and I definitely don't like cramming all my disk I/O through my CPU interconnect to do RAID-5 (it's not the XOR calculations that cost you performance, it's the tie-up of your interconnect).
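For readers following the software-vs-hardware RAID argument: the "standardized interface" Bill refers to below is the kernel md layer driven by mdadm, which looks the same no matter which controller the disks hang off (a minimal sketch with example device names, not a recommendation either way):

  mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[abcd]1   # build a 4-disk RAID-5
  cat /proc/mdstat                    # same status interface on every box
  mdadm --detail /dev/md0
  mdadm --assemble --scan             # reassemble after moving the disks to different hardware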
> I also like the standardized interface so if I have a few dozen > servers I don't have to track the functionality I want via different > command line tools, serial ports, web interfaces, front panels, and > even custom window interfaces. ??? What solutions have you been using ??? I've been using 3Ware Escalade ASIC-based and StrongARM/X-Scale based MegaRAID intelligent RAID solutions. I _avoid_ external subsystems (except for FC-AL), and I avoid vendors who don't support Linux. 3Ware and Mylex/LSI have had excellent track records with Linux support, as well as volume compatibility with newer cards. > Not to mention the biggie, what happens if the raid card dies... being > able to migrate to a random collection of hardware can be quite useful > in an emergency. Well, if you have standardized on 3Ware for your ATA, it's a no brainer, you use another card. Mylex/LSI have also been quite good on the SCSI side. I find this argument insignificant _if_ you choose a good vendor with a solid history of volume upgrade path. It's far better than dealing with all the MD/LVM issues in Linux, no offense, but damn that's biting a big chunk of my ass off too many times, and I just won't trust it. ;-> > I bought one, many vendors sell them, what is the big deal? Megaraid, > tekram, lsi-logic, promise, even straight from intel if you want. > Of course many more are coming from the likes of ICP-vortex, Adaptec, > and just about anyone who wants to relabel and market an adapter in > this space, What PCIe RAID cards (and I'm not talking software driver RAID) don't I know about? The only one I know of is the LSI Logic MegaRAID 320-2E. So far, all you've talked about is video cards. > Why? Slower, higher latency, lower scaling, and lower bandwidth. Whoa! Higher latency? Not true at all. Serial can increase latency over parallel, but it makes up for it in dedicated throughput. Don't mix concepts, there are some trade-offs with serial (but they are typically overcome with better designs). > Not to mention the chipset quality has gone up. I.e. even the same chip (pci-x > or pci-e with or without a bridge) tends to be faster/lower latency > under pci-e than pci-x. From what I can tell it's mostly the quality > of the new nvidia chipset that is the difference. Interconnect vendors > (again more sensitive to latency and bandwidth) are quite excited to be > posting new numbers with the nvidia chipset. > Not that it matters that much for a collection of 8-16 30-60MB disks > on a server. Again, what PCIe storage cards don't I know about? -- Bryan J. Smith b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------------------------- The best things in life are NOT free - which is why life is easiest if you save all the bills until you can share them with the perfect woman From b.j.smith at ieee.org Fri Aug 19 02:24:03 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 18 Aug 2005 21:24:03 -0500 Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <20050816210210.1385.qmail@web34103.mail.mud.yahoo.com> References: <20050816210210.1385.qmail@web34103.mail.mud.yahoo.com> Message-ID: <1124418243.6072.69.camel@bert64.mobile.smithconcepts.com> On Tue, 2005-08-16 at 14:02 -0700, Bryan J. Smith wrote: > Don't forget about Socket-940 Opteron 1xx, including new > dual-core Opteron 1x5 models. 
I know the next generation of > Opteron 1xx series will be Socket-939 with unregistered DIMM > support, but there are a number of Socket-940 Opteron 1xx > mainboards and processors now. BTW, here's more on the AMD Opteron 100 series for Socket-939 using unregistered ECC memory: http://www.amd.com/us-en/Processors/SellAMDProducts/0,,30_177_861_8806~85257,00.html On Tue, 2005-08-16 at 16:38 -0700, Bryan J. Smith wrote: > I was hopeful the new Broadcom/ServerWorks HT1000 chipset > would take off. It's a low-cost, single IC chipset with a > single PCI-X channel -- ideal for delivering a single > Socket-940 with decent server I/O for <$200. This includes possibly the HT1000 for Socket-939 Opteron 100 (as well as Socket-940). -- Bryan J. Smith b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------------------------- The best things in life are NOT free - which is why life is easiest if you save all the bills until you can share them with the perfect woman From bill at cse.ucdavis.edu Fri Aug 19 05:14:09 2005 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Thu, 18 Aug 2005 22:14:09 -0700 Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <1124416119.6072.33.camel@bert64.mobile.smithconcepts.com> References: <20050816233818.12964.qmail@web34106.mail.mud.yahoo.com> <43029ECC.3000509@harddata.com> <1124416119.6072.33.camel@bert64.mobile.smithconcepts.com> Message-ID: <20050819051409.GA30564@cse.ucdavis.edu> > > Yes, but the PCIe storage cards are _no_where_. That's the problem. Er, umm, you mean besides lsi-logic, promise, intel, SIIG, megaraid, tekram, and friends. You might want to check some obscure places like buy.com, amazon.com, or newegg.com > I like the HT-1000 because it's a single chip, probably low cost, and it > gives you _both_ a PCI-X channel _and_ a PCIe x8 in a single chip. That > would make for a powerful, but inexpensive, "entry-level" server. Er, there are plenty of cheap solutions out there, there are pci-e sata cards starting at $80 ish with the silicon image chipset on it. It's fairly easy to buy single socket pci-e motherboards with 8 sata ports for under $200. > Now only if the products were available. ;-> Er, I bought a fileserver with one, and the desktop pci-e board I have had for 6 months or so has 8 sata ports. All the big popular hardware distributors seem to have pci-e raid cards. > I would never suggest otherwise, but there are also a lot of system > assemblers and, even more so, consultants who don't. They are selling > non-PCI-X systems, and putting in storage controllers on the legacy PCI > bus (yikes!). Yeah, definitely although many of the hardware raid cards can't manage faster than legacy PCI bandwidth anyways. -- Bill Broadley Computational Science and Engineering UC Davis From b.j.smith at ieee.org Fri Aug 19 05:44:17 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Fri, 19 Aug 2005 00:44:17 -0500 Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <20050819051409.GA30564@cse.ucdavis.edu> References: <20050816233818.12964.qmail@web34106.mail.mud.yahoo.com> <43029ECC.3000509@harddata.com> <1124416119.6072.33.camel@bert64.mobile.smithconcepts.com> <20050819051409.GA30564@cse.ucdavis.edu> Message-ID: <1124430257.6072.136.camel@bert64.mobile.smithconcepts.com> On Thu, 2005-08-18 at 22:14 -0700, Bill Broadley wrote: > Er, umm, you mean besides lsi-logic, The $600 LSI Logic MegaRAID 320-2E, yes, that's the only one. 
> promise, Could you please point out the intelligent, hardware RAID card that Promise makes for PCIe? Or do you mean "dumb ATA channels with a 16-bit only BIOS" (and 100% driver RAID) aka what I call "FRAID" (fake RAID)? FRAID is not feasible with Linux; it never will be. It's better to use the "dumb ATA" channels with MD/LVM. But I'd much rather have an intelligent, hardware RAID solution. > intel, Product? Or do you mean the IOP332 XScale microcontroller in use by the LSI Logic card? Or another product? > SIIG, Let me guess: High Point Technologies (HPT) or Silicon Image "FRAID"? > megaraid, That's LSI Logic. Don't reuse the product as a vendor. > tekram, Product? Or HPT/SilImage FRAID? > and friends. Products? I've seen a _lot_ of "dumb ATA" channels that do FRAID. Nothing intelligent. > You might want to check some obscure places like buy.com, amazon.com, or > newegg.com Everything I've seen is FRAID -- "dumb ATA" with 0 intelligence on-board. I would be interested if you've found something else though. > Er, there are plenty of cheap solutions out there, there are pci-e sata > cards starting at $80 ish with the silicon image chipset on it. FRAID. That's _not_ hardware RAID. > It's fairly easy to buy single socket pci-e motherboards with 8 sata ports for > under $200. Yes, and use software RAID. I want intelligent, hardware RAID. I've got *1* product choice, the LSI Logic MegaRAID 320-2E. > Er, I bought a fileserver with one, and the desktop pci-e board I have had > for 6 months or so has 8 sata ports. All the big popular hardware > distributors seem to have pci-e raid cards. I want intelligent, hardware RAID in PCIe. I know only 1 product. If you know more, please tell me. But don't tell me about FRAID. > Yeah, definitely although many of the hardware raid cards can't manage > faster than legacy PCI bandwidth anyways. Then you've never used a 3Ware Escalade 7000/8000 64-bit ASIC product or an LSI Logic MegaRAID "X" series Intel XScale microcontroller product. Besides, pushing _all_ I/O over a _single_, _shared_, _legacy_ PCI bus was _not_ viable back in 2000, let alone now (5 years later). I used to specialize in ripping out i440BX/GX chipset mainboards and putting in ServerWorks ServerSet IIILE and IIIHE chipset mainboards for $500-750, and companies would instantly see 3x the performance with their _existing_ RAID controllers and GbE cards. And that was circa 2000. -- Bryan J. Smith b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------------------------- The best things in life are NOT free - which is why life is easiest if you save all the bills until you can share them with the perfect woman From remberson at edgedynamics.com Tue Aug 23 14:15:16 2005 From: remberson at edgedynamics.com (Richard Emberson) Date: Tue, 23 Aug 2005 07:15:16 -0700 Subject: AMD 64 java heap size Message-ID: <430B2F74.9010607@edgedynamics.com> If one needs to have Java heap sizes larger than 4 GB, is there a special Fedora kernel that needs to be used when using an AMD 64-bit machine? Thanks RME
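As the reply below notes, no special kernel is involved: the stock x86_64 kernel plus a 64-bit JVM is enough, and a large heap is then just a command-line setting (an editor's sketch; the heap sizes and jar name are examples):

  java -version                      # should identify itself as a 64-bit VM on x86_64
  java -Xms2g -Xmx6g -jar myapp.jar  # a 6 GB heap is fine once the VM itself is 64-bit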
From arjanv at redhat.com Tue Aug 23 14:19:43 2005 From: arjanv at redhat.com (Arjan van de Ven) Date: Tue, 23 Aug 2005 16:19:43 +0200 Subject: AMD 64 java heap size In-Reply-To: <430B2F74.9010607@edgedynamics.com> References: <430B2F74.9010607@edgedynamics.com> Message-ID: <1124806783.3218.15.camel@laptopd505.fenrus.org> On Tue, 2005-08-23 at 07:15 -0700, Richard Emberson wrote: > If one needs to have Java heap sizes larger than 4Gb is there a specail > fedora kernel that needs to be used when using an AMD 64 bit machine? no just make sure you have a 64 bit jvm! -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part URL: From cochranb at speakeasy.net Tue Aug 23 23:59:18 2005 From: cochranb at speakeasy.net (Robert L Cochran) Date: Tue, 23 Aug 2005 19:59:18 -0400 Subject: MSI K8N Neo2 For Dual Core? Message-ID: <430BB856.4040100@speakeasy.net> I finally found the courage to upgrade my MSI K8N Neo2 Platinum (MS-7025) BIOS to version 1.9 which I believe supports AMD 64 dual-core CPUs. I'm thinking of actually buying such a CPU. Does it makes more sense to replace this motherboard with a new one that might be better suited to dual-cores? I have 2 Gb of fairly cheap memory, but it bothers me that with every change of a motherboard, I have to put down a large sum to get a fresh set of memory, too. So I'd like to avoid buying a complete set of memory if possible. I'm currently using Adata memory. Am I better off staying with my current board or should I trade it for something else? And yes, I have been reading Bryan's postings and the responses to them with interest, but haven't studied them deeply. Thanks Bob Cochran From lamont at gurulabs.com Wed Aug 24 00:08:11 2005 From: lamont at gurulabs.com (Lamont R. Peterson) Date: Tue, 23 Aug 2005 18:08:11 -0600 Subject: MSI K8N Neo2 For Dual Core? In-Reply-To: <430BB856.4040100@speakeasy.net> References: <430BB856.4040100@speakeasy.net> Message-ID: <200508231808.16129.lamont@gurulabs.com> On Tuesday 23 August 2005 05:59pm, Robert L Cochran wrote: > I finally found the courage to upgrade my MSI K8N Neo2 Platinum > (MS-7025) BIOS to version 1.9 which I believe supports AMD 64 dual-core > CPUs. I'm thinking of actually buying such a CPU. Does it makes more > sense to replace this motherboard with a new one that might be better > suited to dual-cores? Personally, I would get another board, also. Why have such a perfectly good processor lying around without any of it's cycles running? > I have 2 Gb of fairly cheap memory, but it bothers me that with every > change of a motherboard, I have to put down a large sum to get a fresh > set of memory, too. So I'd like to avoid buying a complete set of memory > if possible. I'm currently using Adata memory. I have no experience with Adata, I always use Crucial. > Am I better off staying with my current board or should I trade it for > something else? The current board works well, yes? Then keep it running. No sense "throwing away" your existing investment. > And yes, I have been reading Bryan's postings and the responses to them > with interest, but haven't studied them deeply. You and me both. To study them like I want to, I need to find some of that "free time" everybody keeps talking about. I just don't know what it looks like. -- Lamont R. Peterson Senior Instructor Guru Labs, L.C. 
[ http://www.GuruLabs.com/ ] -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From b.j.smith at ieee.org Wed Aug 24 01:17:06 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Tue, 23 Aug 2005 20:17:06 -0500 Subject: MSI K8N Neo2 For Dual Core? In-Reply-To: <430BB856.4040100@speakeasy.net> References: <430BB856.4040100@speakeasy.net> Message-ID: <1124846226.4515.22.camel@bert64.mobile.smithconcepts.com> On Tue, 2005-08-23 at 19:59 -0400, Robert L Cochran wrote: > I finally found the courage to upgrade my MSI K8N Neo2 Platinum > (MS-7025) BIOS to version 1.9 which I believe supports AMD 64 dual-core > CPUs. I'm thinking of actually buying such a CPU. Does it makes more > sense to replace this motherboard with a new one that might be better > suited to dual-cores? Unless the BIOS doesn't enable it, there's virtually _no_ benefit when it comes to AMD. The package has DDR out, HyperTransport out, no change in that. > I have 2 Gb of fairly cheap memory, but it bothers me that with every > change of a motherboard, I have to put down a large sum to get a fresh > set of memory, too. So I'd like to avoid buying a complete set of memory > if possible. I'm currently using Adata memory. In Athlon 64 / Opteron, the memory timing no longer affects any other part or the system. So you can use anything, from DDR200 (PC1600) to DDR400 (PC3200). So in addition to fundamental size issues, the only restriction is Registered/Buffered for Socket-940, Unbuffered for Socket-754/939. > Am I better off staying with my current board or should I trade it for > something else? > And yes, I have been reading Bryan's postings and the responses to them > with interest, but haven't studied them deeply. -- Bryan J. Smith b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------------------------- The best things in life are NOT free - which is why life is easiest if you save all the bills until you can share them with the perfect woman From bill at cse.ucdavis.edu Wed Aug 24 01:43:42 2005 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Tue, 23 Aug 2005 18:43:42 -0700 Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <1124430257.6072.136.camel@bert64.mobile.smithconcepts.com> References: <20050816233818.12964.qmail@web34106.mail.mud.yahoo.com> <43029ECC.3000509@harddata.com> <1124416119.6072.33.camel@bert64.mobile.smithconcepts.com> <20050819051409.GA30564@cse.ucdavis.edu> <1124430257.6072.136.camel@bert64.mobile.smithconcepts.com> Message-ID: <20050824014342.GM11528@cse.ucdavis.edu> On Fri, Aug 19, 2005 at 12:44:17AM -0500, Bryan J. Smith wrote: > On Thu, 2005-08-18 at 22:14 -0700, Bill Broadley wrote: > > Er, umm, you mean besides lsi-logic, > > The $600 LSI Logic MegaRAID 320-2E, yes, that's the only one. How about: SIIG http://www.siig.com/product.asp?catid=14&pid=1003 Promise: http://www.promise.com/product/product_detail_eng.asp?segment=RAID%205%20HBAs&product_id=156 Tekram: http://www.pc-pitstop.com/sata_raid_controllers/arc1260.asp Intel: http://www.intel.com/design/servers/RAID/srcu42e/index.htm > Or do you mean "dumb ATA channels with a 16-bit > only BIOS" (and 100% driver RAID) aka what I call "FRAID" (fake RAID)? Er, I've no idea what you are describing? You are having some strange problems with 16-bit BIOS with pci-express cards? This is relevent to building a 64 bit fileserver how? 
Your claiming the cards I'm mentioning don't work? > FRAID is not feasible with Linux, it never will be. It's better to use > the "dumb ATA" channels with MD/LVM. But I'd much rather have an > intelligent, hardware RAID solution. Why? What is wrong with MD? It performs better than the hardware RAID controllers I've tried, provides a standard interface, and actually allows me to migrate RAIDs between different machines and controllers in face of failure. I also like adding a single line to the crontab and I get email whenever the RAID changes state. > > intel, > > Product? Or do you mean the IOP332 XScale microcontroller in use by the > LSI Logic card? Or another product? URL above. > > SIIG, > > Let me guess, High Point Technologies (HPT) or Silicon Image "FRAID"? SI, so? > > megaraid, > > That's LSI Logic. Don't reuse the product as a vendor. My fault, I had so many hits I was pulling stuff out of the summary in the search results. > > tekram, > > Product? Or HPT/SilImage FRAID? IOP332, url above. There's a whole family of controllers from 4 to 24 ports, optional memory upgrades and battery modules for memory backup. > Products? I've seen a _lot_ of "dumb ATA" channels that do FRAID. > Nothing intelligent. What is wrong with the IOP331, 332, and 333 family and the host of controllers based on it? > > You might want to check some obscure places like buy.com, amazon.com, or > > newegg.com > > Everything I've seen is FRAID -- "dumb ATA" with 0 intelligence on- > board. I would be interested if you've found something else though. See above. > > Er, there are plenty of cheap solutions out there, there are pci-e sata > > cards starting at $80 ish with the silicon image chipset on it. > > FRAID. That's _not_ hardware RAID. I don't recall the discussion being about hardware RAID, just RAID. Are there some hardware RAID advantages I'm not aware of? > Yes, and use software RAID. I want intelligent, hardware RAID. > I've got *1* product choice, the LSI Logic MegaRAID 320-2E. Or the ones mentioned above, I didn't both to list all in each family, I found 2,4,8,12,16 and 24 port versions. > I want intelligent, hardware RAID in PCIe. I know only 1 product. > If you know more, please tell me. But don't tell me about FRAID. Your choice, see above. For smart controllers intel, promise, tekram, and lsi seem to offer fine cards for building a pci-e based server for. Wasn't that the whole point? > > Yeah, definitely although many of the hardware raid cards can't manage > > faster than legacy PCI bandwidth anyways. > > Then you've never used a 3Ware Escalade 7000/8000 64-bit ASIC product or > an LSI Logic MegaRAID "X" series Intel XScale microcontroller product. Please post RAID-5 numbers using a 7000/8000 controller showing greater than 130 MB/sec bandwidth. > Besides, pushing _all_ I/O over a _single_, _shared_, _legacy_ was _not_ > viable back in 2000, let alone now (5 years later). Prove it, please post numbers. You claim that it's often the I/O bus that is the bottleneck, I claim with hardware raid it's usually the hardware RAID controller itself that is the issue. I'm open to counter examples. I'd suggest bonnie++ or iozone if you want a benchmark to measure bandwidth. > I used to specialize in ripping out i440BX/GX chipset mainboards and > putting in ServerWorks ServerSet IIILE and IIIHE chipset mainboards for > $500-750 and companies would instantly see 3x the performance with their > _existing_ RAID controllers and GbE cards. And that was circa 2000. 
Sure, and I've seen 3x improvement ditching multi $k hardware raid controllers and using software raid on a cheap controller. In fact, when Dell was pushed as to why a rather expensive disk array couldn't meet its 50MB/sec write specification, they admitted they used software RAID internally. To meet the spec they refunded the price of the expensive hardware RAID controller and replaced it with a "FRAID". I'm open to using (and paying for) hardware RAID if it gives me more performance in exchange for learning the custom command line, web interface, serial interface, java interface, BIOS interface, or even the occasional Windows-only client that is required to configure, reconfigure, and/or monitor the RAID. In my experience this hasn't been the case; I'm more than open to counter examples, though. -- Bill Broadley Computational Science and Engineering UC Davis From garmanyj at proxitec.com Wed Aug 24 01:50:14 2005 From: garmanyj at proxitec.com (John Garmany) Date: Tue, 23 Aug 2005 21:50:14 -0400 Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <20050824014342.GM11528@cse.ucdavis.edu> References: <20050816233818.12964.qmail@web34106.mail.mud.yahoo.com> <43029ECC.3000509@harddata.com> <1124416119.6072.33.camel@bert64.mobile.smithconcepts.com> <20050819051409.GA30564@cse.ucdavis.edu> <1124430257.6072.136.camel@bert64.mobile.smithconcepts.com> <20050824014342.GM11528@cse.ucdavis.edu> Message-ID: <430BD256.8050900@proxitec.com> Bill Broadley wrote: > I also like adding a single line to the crontab > and I get email whenever the RAID changes state. Bill, Interesting. I use MD for RAID on three systems, from simple mirroring to RAID5. I am very interested in the crontab entry that will email when MD status changes. I use Big Brother to monitor my systems but always worry about a drive failing and not getting notified. What monitor for MD are you using? John From bill at cse.ucdavis.edu Wed Aug 24 01:58:40 2005 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Tue, 23 Aug 2005 18:58:40 -0700 Subject: MSI K8N Neo2 For Dual Core? In-Reply-To: <1124846226.4515.22.camel@bert64.mobile.smithconcepts.com> References: <430BB856.4040100@speakeasy.net> <1124846226.4515.22.camel@bert64.mobile.smithconcepts.com> Message-ID: <20050824015840.GO11528@cse.ucdavis.edu> On Tue, Aug 23, 2005 at 08:17:06PM -0500, Bryan J. Smith wrote: > On Tue, 2005-08-23 at 19:59 -0400, Robert L Cochran wrote: > > I finally found the courage to upgrade my MSI K8N Neo2 Platinum > > (MS-7025) BIOS to version 1.9 which I believe supports AMD 64 dual-core > > CPUs. I'm thinking of actually buying such a CPU. Does it makes more > > sense to replace this motherboard with a new one that might be better > > suited to dual-cores? > > Unless the BIOS doesn't enable it, there's virtually _no_ benefit when > it comes to AMD. The package has DDR out, HyperTransport out, no change > in that. Agreed. > In Athlon 64 / Opteron, the memory timing no longer affects any other > part or the system. So you can use anything, from DDR200 (PC1600) to > DDR400 (PC3200). So in addition to fundamental size issues, the only > restriction is Registered/Buffered for Socket-940, Unbuffered for > Socket-754/939. Correct, the dimms are directly attached to the CPU, but of course PC1600 will result in a system with a slower memory bus than PC3200 (rough peak-bandwidth numbers are sketched below). 
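(To put rough numbers on that point: DDR moves 8 bytes per transfer per channel, so the module names are literally the single-channel peak in MB/s. A small sketch of the theoretical peaks only; real, latency-bound workloads see far less:)

    # Theoretical peak bandwidth for the DDR grades discussed above.
    # 64-bit (8-byte) bus per channel; Athlon 64/Opteron of this era has
    # up to two channels.
    BYTES_PER_TRANSFER = 8

    for name, transfers_per_sec in [("PC1600 (DDR200)", 200e6),
                                    ("PC3200 (DDR400)", 400e6)]:
        one = transfers_per_sec * BYTES_PER_TRANSFER / 1e9   # GB/s, one channel
        print(f"{name}: {one:.1f} GB/s single-channel, {2 * one:.1f} GB/s dual-channel")

    # PC1600 (DDR200): 1.6 GB/s single-channel, 3.2 GB/s dual-channel
    # PC3200 (DDR400): 3.2 GB/s single-channel, 6.4 GB/s dual-channel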
Thus if you have PC1600 you will get a smaller increase in performance with a X2 processor than you would if you had PC3200 since more often (statistically) you will be bottlenecked by the PC1600 memory and thus deriving no benefit from your second CPU. Personally I'm not ready to buy an X2 for home yet, I have PC3200 memory and a AMD64 3200, it's plenty fast for what I use it for and $380 for a x2 3800 (or more for faster) is a fair amount of cash for something that won't run a single thread any faster than my current machine. Not that I don't sometimes have a 2nd thread running, it's just not at the $380 pain threshold yet. So at some point it's better to hold off and hit whatever knee in the price/performance curve you want with a new CPU+Motherboard and maybe RAM combination. Rumors claim DDR2 for AMD64 in 2006, possibly linked to the rumored 1200+ pin socket and onchip pci-e. It should be an interesting platform and will be the earliest time that I revisit an upgrade. I try to wait till what I can afford doubles end user performance, so far: 286 6 Mhz 386sx 16 Mhz 386dx 40 Mhz P5 133 Mhz P6 200 Mhz celeron 450 Mhz Athlon 1200 Mhz Amd64 2000 Mhz I suspect it's going to be awhile before I can find a machine twice as fast as my current for a reasonable (to me) amount of money. -- Bill Broadley Computational Science and Engineering UC Davis From b.j.smith at ieee.org Wed Aug 24 03:16:30 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Tue, 23 Aug 2005 22:16:30 -0500 Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <20050824014342.GM11528@cse.ucdavis.edu> References: <20050816233818.12964.qmail@web34106.mail.mud.yahoo.com> <43029ECC.3000509@harddata.com> <1124416119.6072.33.camel@bert64.mobile.smithconcepts.com> <20050819051409.GA30564@cse.ucdavis.edu> <1124430257.6072.136.camel@bert64.mobile.smithconcepts.com> <20050824014342.GM11528@cse.ucdavis.edu> Message-ID: <1124853390.4515.131.camel@bert64.mobile.smithconcepts.com> On Tue, 2005-08-23 at 18:43 -0700, Bill Broadley wrote: > How about: > SIIG http://www.siig.com/product.asp?catid=14&pid=1003 Still FRAID > Promise: http://www.promise.com/product/product_detail_eng.asp?segment=RAID%205%20HBAs&product_id=156 Ahhh, now that's better. I didn't know about it. IOP333-based. Still PCIe-to-PCI-X bridge, but better at sub-$400. > Tekram: http://www.pc-pitstop.com/sata_raid_controllers/arc1260.asp $500-600, but still an option. > Intel: http://www.intel.com/design/servers/RAID/srcu42e/index.htm $650+! Same IOP332 design as the 320-2E in essence. At least the Promise card is sub-$400. Thanx for pointing it out. Now I have a new option. > Er, I've no idea what you are describing? You are having some strange > problems with 16-bit BIOS with pci-express cards? Yeah, like the fact that their RAID function does _not_ work once the OS boots. That's because the software RAID logic is 100% proprietary (licensed from a 3rd party), so it's not GPL. And the vendor typically releases only a binary version for a specific kernel. Have you actually tried these cards in Linux with their FRAID logic? > This is relevent to building a 64 bit fileserver how? > Your claiming the cards I'm mentioning don't work? The FRAID cards don't, period. The reverse engineered GPL drivers (hptraid.c, pdcraid.c , silraid.c) using the GPL (ataraid.c) logic are buggy. They are also much slower than LVM/MD. > Why? What is wrong with MD? Nothing. I said _FRAID_ (fake RAID). > It performs better than the hardware RAID controllers I've tried, RAID-0? 
Of course! RAID-1 ... now you're starting to put 2x over your busses. RAID-5? Please! Have you seen the overhead of pushing every single bit up your memory-CPU interconnect? It's not the XOR that gets you, it's the interconnect overload. I invite you to read up on how poor software RAID-5 performs, especially during a rebuild. > provides a standard interface, and actually allows me to migrate RAIDs > between different machines and controllers in face of failure. Again, that is a commonly used, but very _poor_ "real world" _exaggeration_! You have differences in LVM/MD between kernels, not to mention a _lot_ of bugs! I have not had this so-called "bliss" with LVM/MD you have. I have run into complete lack of support of my LVD/MD volumes in moving systems of any significant age. Again, if you stick with LSI and, especially, 3Ware, they tend to maintain full volume compatibility upward. Far, far better than what the Linux kernel's LVM/MD have had -- especially versus 3Ware. > I also like adding a single line to the crontab and I get email > whenever the RAID changes state. You obviously haven't used 3Ware's 3DM2 (or prior 3DM). You really should have some experience before making assumptions. > What is wrong with the IOP331, 332, and 333 family and the host of > controllers based on it? Well, for starters, the IOP331 is _PCI-X_, not PCIe. But yes, I _was_ aware of the IOP332. Now they have an IOP333. I appreciate the link on the Promise card, it's the first sub-$400 (or even sub-$500 for that matter) card I've seen with an XScale and SATA > See above. And I noted above. Again, thanx for 3 of the 4 products. > I don't recall the discussion being about hardware RAID, just RAID. > Are there some hardware RAID advantages I'm not aware of? FRAID is _worse_ than LVM/MD. You're better off using LVM/MD. > Your choice, see above. For smart controllers intel, promise, tekram, > and lsi seem to offer fine cards for building a pci-e based server for. > Wasn't that the whole point? Yes, and I appreciate you pointing them out. Especially the Promise product. > Please post RAID-5 numbers using a 7000/8000 controller showing greater > than 130 MB/sec bandwidth. At reads? Oh, you bet! Now you show me software RAID-5 doing sustained writes (that means pushing through _significantly_more_ than memory so it is true disk I/O) over 100MBps without sending the I/O interconnect right into useless. That's the problem with software RAID-5 -- you're pushing every single byte into the CPU's interconnect, rendering your system completely tied up. > Prove it, please post numbers. Do you even know how software RAID-3/5/5 works in software? Versus a system where the RAID-5 is done on-board? Dude, read up on some things yourself. > You claim that it's often the I/O bus that is the bottleneck, I claim with > hardware raid it's usually the hardware RAID controller itself that is > the issue. On old i960 designs? Of course! But on various ASIC and XScale designs? I think not! Otherwise, why wouldn't we use PCs as switches and routers instead of devices with intelligent switch fabrics? You obviously don't know the first thing on how software-driven storage works on a generic PC versus an I/O processor. > I'm open to counter examples. I'd suggest bonnie++ or iozone if you\ > want a benchmark to measure bandwidth. It's more than that. You have to measure the I/O being eaten up by the extra software operations. If you're bogging down your system with redundant I/O just to support the operation, that doesn't show on your CPU. 
But it does show up on your performance. Dude, I've built too many file and database servers to even continue this. > Sure, and I've seen 3x improvement ditching multi $k hardware raid > controllers and using software raid on a cheap controller. Depends on how crappy the controller is. > In fact when > Dell was pushed as to why they couldn't manage the write to a 50MB/sec > specification on a rather expensive disk array they admitted they used > software RAID internally. This is because they are using an _old_ i960 that can't break 50-60MBps. Don't compare 10 year-old i960 designs that Dell _stupidly_ ships to 3Ware's ASIC or newer XScale processors. I can't help it if Dell is stupid. > To meet the spec they refunded the price of > the expensive hardware RAID controller and replaced it with a "FRAID". Dell is stupid, not my fault. The first thing I tell people is that its _better_ to use LVM/MD than to use an old i960 or IOP30x-based design. If you don't at least have an ASIC designed specifically for storage, or a newer microcontroller like an Intel XScale (IOP31x+ -- typically IOP33x these days), then LVM/MD _is_ better. Again, you obviously aren't "current" on ASIC/microcontroller developments over the last 5 years. > I'm open to using (and paying for) hardware RAID if it gave me more > performance in exchange for learning the custom command line, web > interface, serial interface, java interface, BIOS interface, or even the > occasional windows only client that is required to configure, reconfigure, > and/or monitor the RAID. In my experience this hasn't been the case, Because you've been deploying i960/IOP30x designs. I don't blame you for assuming incorrectly. > I'm more to open to counter examples though. Actually, 2 of those 4 you send me were IOP32x/33x designs and quite good. Another one might be an IOP32x/33x as well. -- Bryan J. Smith b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------------------------- The best things in life are NOT free - which is why life is easiest if you save all the bills until you can share them with the perfect woman From b.j.smith at ieee.org Wed Aug 24 03:23:24 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Tue, 23 Aug 2005 22:23:24 -0500 Subject: MSI K8N Neo2 For Dual Core? In-Reply-To: <20050824015840.GO11528@cse.ucdavis.edu> References: <430BB856.4040100@speakeasy.net> <1124846226.4515.22.camel@bert64.mobile.smithconcepts.com> <20050824015840.GO11528@cse.ucdavis.edu> Message-ID: <1124853804.4515.139.camel@bert64.mobile.smithconcepts.com> On Tue, 2005-08-23 at 18:58 -0700, Bill Broadley wrote: > Correct, the dimms are directly attached to the CPU, but of course > PC1600 will result in a system with a slower memory bus than PC3200. > Thus if you have PC1600 you will get a smaller increase in performance > with a X2 processor than you would if you had PC3200 Correct, but no different than a single CPU. Of course, from an I/O perspective, it doesn't matter. In fact, you'd be better off with 2x single core CPUs for I/O. > since more often (statistically) you will be bottlenecked by the PC1600 > memory and thus deriving no benefit from your second CPU. Huh? How does memory affect dual-core versus single-core performance? Single core and dual-cores still have 2 DDR channels. A dual-core does not allow better usage of the DDR channels. A cache miss is a cache miss, and a serious performance hit _regardless_. Anytime you have to write to memory, that's bad. 
Anytime you have to read from memory, that's the _death_ of performance. BTW, do you even know how synchronous DRAM works? [ HINT: read latency on DRAM, even SDRAM, results in 14-40MHz performance, because DRAM is still 25-70ns on a read. ] > Personally I'm not ready to buy an X2 for home yet, I have PC3200 memory > and a AMD64 3200, it's plenty fast for what I use it for and $380 for > a x2 3800 (or more for faster) is a fair amount of cash for something > that won't run a single thread any faster than my current machine. Do you even have any Opteron systems? > Not that I don't sometimes have a 2nd thread running, it's just not at > the $380 pain threshold yet. > So at some point it's better to hold off and hit whatever knee in the > price/performance curve you want with a new CPU+Motherboard and maybe > RAM combination. Again, do you even have any Opteron systems? > Rumors claim DDR2 for AMD64 in 2006, It's not a rumor, AMD is gearing up for DDR2. > I try to wait till what I can afford doubles end user performance, > so far: > 286 6 Mhz > 386sx 16 Mhz > 386dx 40 Mhz > P5 133 Mhz > P6 200 Mhz > celeron 450 Mhz > Athlon 1200 Mhz > Amd64 2000 Mhz Again, do you even have any Opteron systems? > I suspect it's going to be awhile before I can find a machine > twice as fast as my current for a reasonable (to me) amount of > money. Again, do you even have any Opteron systems? -- Bryan J. Smith b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------------------------- The best things in life are NOT free - which is why life is easiest if you save all the bills until you can share them with the perfect woman From b.j.smith at ieee.org Wed Aug 24 14:24:21 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Wed, 24 Aug 2005 09:24:21 -0500 Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: References: Message-ID: <1124893461.4772.40.camel@bert64.mobile.smithconcepts.com> On Wed, 2005-08-24 at 09:57 -0400, Mark Hahn wrote: > good god, that kind of condescension only comes from *knowing* you're wrong! > I, like Bill, have never seen HW raid come anywhere close to the performance > of SW raid on the same hardware. yes, raid5, yes raid6, yes, even during rebuilds. > (though why you'd be worried about performance during rebuilds is beyond me - > lots of failures?) Then assume I'm wrong. You guys must have been still using i960 instead of StrongARM or, more recently, X-Scale over the last 5 years. > and HW raid is flawless, I suppose. _Select_ hardware RAID has been, with exceptions. Same deal with LVM/MD, it's worked for some things, with exceptions. > jeez, it's all software raid - The different is that I'm not pushing the data streams through my CPU- memory interconnect. That ties up important I/O. Heck, it's why you don't see your CPU states busy because nothing can be feed to it. The interconnect is saturated. > you just prefer to trust software that's closed, difficult to upgrade, > written by unknown persons and not auditable. I trust specific vendors. Adaptec? Hell no. DPT before Adaptec? No, i960. Intel? Been a little late on using their own products (sadly enough). Mylex? Yes. LSI Logic? Yes. 3Ware? Yes. > I prefer to run open, auditable, trivially fixable raid that happens to > scale very nicely with host speed. Then you obviously haven't used 3Ware. Don't throw all vendors in the same boat. Furthermore, don't assume there isn't auditing. Again, I think you're oversimplifying many hardware solutions. 
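(On a practical note, picking up John Garmany's earlier question about getting mail when an MD array changes state: mdadm can already do this -- run "mdadm --monitor --scan" as a daemon with a MAILADDR line in /etc/mdadm.conf, or from cron with --oneshot -- and a hand-rolled check is only a few lines. A rough cron-style sketch that simply diffs /proc/mdstat; the state-file path and address are placeholders:)

    #!/usr/bin/env python3
    # Sketch: mail root when /proc/mdstat changes.  A real version would
    # ignore the resync-progress lines, which change constantly during a rebuild.
    import os, smtplib
    from email.message import EmailMessage

    STATE = "/var/tmp/mdstat.last"      # previous snapshot (placeholder path)
    MAILTO = "root@localhost"           # placeholder address, assumes a local MTA

    current = open("/proc/mdstat").read()
    previous = open(STATE).read() if os.path.exists(STATE) else current

    if current != previous:
        msg = EmailMessage()
        msg["Subject"] = "md RAID state changed"
        msg["From"] = msg["To"] = MAILTO
        msg.set_content("Old state:\n%s\nNew state:\n%s" % (previous, current))
        smtplib.SMTP("localhost").send_message(msg)

    open(STATE, "w").write(current)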
I've used MD, LVM and, more recently but only to a limited extent, LVM2. I often use LVM/LVM2 to RAID-0 stripe multiple hardware RAID volumes over multiple card/busses. I like LVM, don't get me wrong. And I _do_ use LVM when I don't have a good, intelligent, high- performing hardware RAID solution. That includes i960 designs that moronic Tier-1 vendors like Dell still ship. But when you have a 500MHz XScale with an internal interconnect that ripples through I/O far better than a 3GHz CPU, and you aren't making several redundant copies of the data in your host system as a result, then it's faster and more efficient. Same deal for MIPS, ARM and other ASIC designs where you're doing RAID-0, 1, 10/1e, etc... In fact, the new SAS controllers I've been reading up on give you RAID-0, 1, 10/1e for free because it's so easy to do in an ASIC (and cuts your transfers over the interconnect by 2x in the case of RAID-1). > non sequitur, since Bill didn't claim anything about 3dm's features. I'm just saying that there _are_ lots of capabilities you guys haven't used. I've used _both_ MD/LVM _and_ newer storage designs. > the ironic thing about spending $500 on a so-called HW raid card is that > you've *not* spent $400 on more disks. really, there's no reason not to > go further and do HW raid with SCSI disks to really minimize your TB. Honestly, with all the changes in MD/LVM I've seen over the years, 3Ware and Mylex/LSI Logic have a far better track record with me. > you must be running on PII's or something. even my entry-level servers have > plenty of memory and bus bandwidth to handle SW raid5. Then why don't you use them as Ethernet switches, routers, etc... too? I think you're missing the point. The purpose of your non-I/O interconnect is not to be replicating data operations. It is to be servicing requests for that data. When you start taxing your non-I/O interconnect with added data streams, that takes away it's ability to do other things. > but really, why are > you worried about this? it's a frickin fileserver! you're worried it's > going to impinge on your seti at home ranking? Dude, you didn't understand the first thing I said. I said that added RAID-4/5/6 overhead severely cuts into how much data your fileserver can flop around. Instead of the non-I/O interconnect working on requested data, it is now replicating basic I/O operations as well. You can't get that kind of information from CPU states in Linux. Heck, even Linux's iostat is kinda immature. That's because Linux is just getting into not only the processor affinity of memory, but also I/O, in the Opteron. I've often found Solaris to be much better at tracking such interconnect overhead and usage, and it is sizable when you're going from _0_% non- I/O interconnect to doubling or even tripling what the normal service streams are using. This is fundamental system design gentlemen. > how embarassing for you to so automatically assume ignorance. Fine by me. > because it's more cost-effective in either high-end or high-volume products. Give me a break. Not at 5+ disks. Not at all. Especially when I'm turning the system architecture into a data pipe for _non_ user services. > plonk. Again, fine by me. -- Bryan J. 
Smith b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------------------------- The best things in life are NOT free - which is why life is easiest if you save all the bills until you can share them with the perfect woman From b.j.smith at ieee.org Wed Aug 24 14:25:10 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Wed, 24 Aug 2005 09:25:10 -0500 Subject: MSI K8N Neo2 For Dual Core? In-Reply-To: References: Message-ID: <1124893510.4772.42.camel@bert64.mobile.smithconcepts.com> On Wed, 2005-08-24 at 10:00 -0400, Mark Hahn wrote: > kid, grow up. Bill has opterons. Then he doesn't know the first thing about using them efficiently. BTW, again, I _do_ use LVM for RAID-0 with hardware RAID. I span the volumes over multiple PCI-X channels on the same Opteron for I/O-processor affinity. -- Bryan J. Smith b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------------------------- The best things in life are NOT free - which is why life is easiest if you save all the bills until you can share them with the perfect woman From hahn at physics.mcmaster.ca Wed Aug 24 14:33:43 2005 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Wed, 24 Aug 2005 10:33:43 -0400 (EDT) Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <1124893461.4772.40.camel@bert64.mobile.smithconcepts.com> Message-ID: > Then you obviously haven't used 3Ware. actually I have. it works OK, not fast, but stable. > But when you have a 500MHz XScale with an internal interconnect that > ripples through I/O far better than a 3GHz CPU, and you aren't making OK, show me the numbers. show me the 12 GB/s memory interface on your XScale card, show me the r5 engine that can run at 8 GB/s. certainly possible, but I don't think they exist. and if they did, who would use them? even high-end scsi raid cards with multiple channels and BB-cache seem to run at only a couple hundred MB/s. and there aren't really that many systems that can sustain that kind of throughput. (I have a $9M cluster arriving next month that will sustain about 3 GB/s - 12 object storage servers over quadrics...) > I've often found Solaris to be much better at tracking such interconnect ah, no wonder. I had the feeling you were actually pining away for something other than linux - this used to happen a lot with *BSDers slumming on linux lists. From hahn at physics.mcmaster.ca Wed Aug 24 14:38:43 2005 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Wed, 24 Aug 2005 10:38:43 -0400 (EDT) Subject: MSI K8N Neo2 For Dual Core? In-Reply-To: <1124893510.4772.42.camel@bert64.mobile.smithconcepts.com> Message-ID: > BTW, again, I _do_ use LVM for RAID-0 with hardware RAID. actually, I measured this recently, and DM-raid0 is noticably slower than MD-raid0. I was quite surprised, since raid0 is hard to get wrong. (I used the same stripe/chunk sizes, and did large-chunk streaming reads.) > I span the volumes over multiple PCI-X channels on the same Opteron for > I/O-processor affinity. OK, so what difference do you measure between doing this (affinity vs not)? and besides, why is the opteron on the hot path? you're presumably just dma'ing big chunks of ram to/from disk, so why does 20ns inter-opteron latency make any difference? besides, how do you control the numa affinity of the pagecache? besides, how could it matter, given that your IO is so vastly slower than your memory? From b.j.smith at ieee.org Wed Aug 24 14:51:45 2005 From: b.j.smith at ieee.org (Bryan J. 
Smith) Date: Wed, 24 Aug 2005 09:51:45 -0500 Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: References: Message-ID: <1124895105.4772.67.camel@bert64.mobile.smithconcepts.com> On Wed, 2005-08-24 at 10:33 -0400, Mark Hahn wrote: > actually I have. it works OK, not fast, but stable. Please detail? Other than the mistake they did of backporting RAID-5 to the 6000 series (it was never designed for it, only the 7000+), I've had 0 issues. The 5000/6000 series were designed for RAID-0/1 and most excellent. The 7000/8000 series of the last 5 years has been rock solid. The new 9000 series has only had a few performance bugs, but the 9.1.x firmware works most excellent for many, and has since shortly after it came out. And I've been able to take my volumes from the 5000 up to any new series. I've had *0* volume incompatibility over 7 years. > OK, show me the numbers. show me the 12 GB/s memory interface on your > XScale card, show me the r5 engine that can run at 8 GB/s. Do you even understand what you are talking about? I/O Processors have different interconnect meshes than CPUs. You want to talk about DTRs over _single_ points. And, furthermore, where do you get 12GBps memory interface? Socket-940 only has 6.4GBps, maximum. And as far as XOR, the problem isn't the CPU time. It's the fact that you're first pushing every single piece of data in a write through the CPU. > certainly possible, but I don't think they exist. and if they did, who > would use them? even high-end scsi raid cards with multiple channels > and BB-cache seem to run at only a couple hundred MB/s. and there > aren't really that many systems that can sustain that kind of > throughput. (I have a $9M cluster arriving next month that will > sustain about 3 GB/s - 12 object storage servers over quadrics...) Talk about cost! Dude, are we even on the same wavelength? If you have dedicated systems doing your software RAID _separate_ from your user services, then yes, software RAID _does_ work well! But then you're talking 2x system cost! Don't call me out on cost if you're introducing segmented end-storage and end-systems! ;-> > ah, no wonder. I had the feeling you were actually pining away for something > other than linux - this used to happen a lot with *BSDers slumming on linux > lists. Oh get off it. Bigot. -- Bryan J. Smith b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------------------------- The best things in life are NOT free - which is why life is easiest if you save all the bills until you can share them with the perfect woman From b.j.smith at ieee.org Wed Aug 24 14:56:40 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Wed, 24 Aug 2005 09:56:40 -0500 Subject: MSI K8N Neo2 For Dual Core? In-Reply-To: References: Message-ID: <1124895400.4772.73.camel@bert64.mobile.smithconcepts.com> On Wed, 2005-08-24 at 10:38 -0400, Mark Hahn wrote: > OK, so what difference do you measure between doing this (affinity vs not)? It's basically impossible with the current state of Linux. Linux has grown up on a single point of memory-I/O interconnect. I'm hopeful some of the Opteron developments I've seen will change that in the near future. > and besides, why is the opteron on the hot path? you're presumably just > dma'ing big chunks of ram to/from disk, so why does 20ns inter-opteron > latency make any difference? Again, it seems you don't know the first thing about AMD Opteron (let alone many RISC implementations) versus Intel Xeon or Itanium. 
Your memory mapped I/O is local to a processor, which can have the same affinity as the end-user services. More than just process affinity, I/O affinity. Now you're truly minimize the duplication of data streams over the interconnect. > besides, how do you control the numa affinity of the pagecache? The pagecache is _per_CPU_ in Opteron for I/O, not on the chipset like Intel AGTL+. > besides, how could it matter, given that your IO is so vastly slower > than your memory? Not when you start dealing with multiple cards on segmented PCI-X busses (over 1GBps), let alone new HTX solutions including Infiniband. Now there's a good solution for segmented end-storage and end-system, using software RAID on the end-storage. You then use HTX Infiniband to connect them. -- Bryan J. Smith b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------------------------- The best things in life are NOT free - which is why life is easiest if you save all the bills until you can share them with the perfect woman From b.j.smith at ieee.org Wed Aug 24 15:09:16 2005 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Wed, 24 Aug 2005 10:09:16 -0500 Subject: Fedora SMP dual core, dual AMD 64 processor system In-Reply-To: <1124895105.4772.67.camel@bert64.mobile.smithconcepts.com> References: <1124895105.4772.67.camel@bert64.mobile.smithconcepts.com> Message-ID: <1124896156.4772.79.camel@bert64.mobile.smithconcepts.com> On Wed, 2005-08-24 at 10:33 -0400, Mark Hahn wrote: > ah, no wonder. I had the feeling you were actually pining away for something > other than linux - this used to happen a lot with *BSDers slumming on linux > lists. On Wed, 2005-08-24 at 09:51 -0500, Bryan J. Smith wrote: > Oh get off it. Bigot. I meant, don't be a bigot. Don't turn it into an OS conspiracy theory. I'm not doing that here, and don't relate me to _some_ BSD people. Just because I keep myself atop of many UNIX developments doesn't mean I prefer anything but Linux. Heck, if you want to demonize me, just pick up Samba Unleashed and complain why I largely wrote non-Linux sections on BSD and Solaris (i.e., the Linux sections had largely been done when I was offered). Heck, I'm regularly demonized as pro-BSD, pro-Linux, pro-Microsoft, pro- Red Hat, pro-Gentoo, pro-Debian, pro-Sun as well as anti of all those and more. Why? Because people want to make blind alignments whereas I _refuse_ to do so. So if you want to make such bigotry statements, then I need not even discuss. With that said, I only said that Solaris lets you monitor the interconnect better. I never said Solaris is better in general. Yes, recent benchmarks in LJ and others have shown that for the _Opteron_, Solaris has been used to segmented memory/IO on SPARC and has some benefits that Linux focus has not been directed towards. But that's another detail, and one I don't think Linux will have a problem exceeding in the short term as Opteron systems are more commonplace. -- Bryan J. Smith b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------------------------- The best things in life are NOT free - which is why life is easiest if you save all the bills until you can share them with the perfect woman
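(For anyone who wants to experiment with the I/O-processor affinity described above: current kernels export the NUMA node that owns each PCI device under sysfs, and a process can be pinned to that node's CPUs. A rough sketch using interfaces that postdate this thread, namely the sysfs numa_node attribute and Python's os.sched_setaffinity; the PCI address is a placeholder for your RAID/HBA card:)

    #!/usr/bin/env python3
    # Sketch: pin the current process to the CPUs local to a given PCI device.
    import os

    PCI_DEV = "0000:02:00.0"            # placeholder: your controller's address

    node = open(f"/sys/bus/pci/devices/{PCI_DEV}/numa_node").read().strip()
    if node == "-1":
        raise SystemExit("kernel/firmware did not report a NUMA node for this device")

    # cpulist looks like "0-3" or "0,2,4-6"; expand it into a set of CPU numbers.
    cpus = set()
    for part in open(f"/sys/devices/system/node/node{node}/cpulist").read().split(","):
        lo, _, hi = part.strip().partition("-")
        cpus.update(range(int(lo), int(hi or lo) + 1))

    os.sched_setaffinity(0, cpus)       # 0 means "this process"
    print(f"pinned to node {node} CPUs: {sorted(cpus)}")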