From mmiles at alacritech.com Fri Feb 10 13:25:44 2006 From: mmiles at alacritech.com (Michael Miles) Date: Fri, 10 Feb 2006 08:25:44 -0500 Subject: Won't install 64-bit version Message-ID: <200602101325.k1ADPjHN007652@smtp.alacritech.com> I have a brand new multiprocessor AMD Opteron system and I purchased a copy of RH Enterprise Linux Workstation v4 which is supposed to support AMD64 processors. It does install on the system, but it is definitely not 64-bit and when I try to compile a x86_64 kernel on this system it won't compile apparently because all the libraries including gcc are 32-bit. Does RedHat have a 64-bit version of RHEL that they just don't show as purchaseable on their web-site? Thanks for any information that you can provide and let me know if anyone needs more information, I tried to keep it simple. Michael Miles -------------- next part -------------- An HTML attachment was scrubbed... URL: From berryja at gmail.com Fri Feb 10 14:49:13 2006 From: berryja at gmail.com (Jonathan Berry) Date: Fri, 10 Feb 2006 08:49:13 -0600 Subject: Won't install 64-bit version In-Reply-To: <200602101325.k1ADPjHN007652@smtp.alacritech.com> References: <200602101325.k1ADPjHN007652@smtp.alacritech.com> Message-ID: <8767947e0602100649n17c5c91elc33169c8f6d0e715@mail.gmail.com> On 2/10/06, Michael Miles wrote: > I have a brand new multiprocessor AMD Opteron system and I purchased a copy > of RH Enterprise Linux Workstation v4 which is supposed to support AMD64 > processors. It does install on the system, but it is definitely not 64-bit > and when I try to compile a x86_64 kernel on this system it won't compile > apparently because all the libraries including gcc are 32-bit. $ uname -a will tell you for sure what kernel arch you are running (along with some other stuff). What exactly are you doing that won't work? > Does RedHat have a 64-bit version of RHEL that they just don't show as > purchaseable on their web-site? They certainly do have a 64-bit RHEL. Says right here under Platforms: http://www.redhat.com/en_us/USA/rhel/compare/client/ > Thanks for any information that you can provide and let me know if anyone > needs more information, I tried to keep it simple. Well, you paid for RHEL, right? Go ask Redhat; you paid for support. This list comes for free and RH can probably answer your question. No reason not to get your money's worth :-D. Of course, we'll be glad to answer what we can here. Jonathan From lamont at gurulabs.com Fri Feb 10 15:39:29 2006 From: lamont at gurulabs.com (Lamont R. Peterson) Date: Fri, 10 Feb 2006 08:39:29 -0700 Subject: Won't install 64-bit version In-Reply-To: <200602101325.k1ADPjHN007652@smtp.alacritech.com> References: <200602101325.k1ADPjHN007652@smtp.alacritech.com> Message-ID: <200602100839.33216.lamont@gurulabs.com> On Friday 10 February 2006 06:25am, Michael Miles wrote: > I have a brand new multiprocessor AMD Opteron system and I purchased a copy > of RH Enterprise Linux Workstation v4 which is supposed to support AMD64 > processors. It does install on the system, but it is definitely not > 64-bit and when I try to compile a x86_64 kernel on this system it won't > compile apparently because all the libraries including gcc are 32-bit. Sounds like you installed using the 32-bit (i386) CDs/DVD. > Does RedHat have a 64-bit version of RHEL that they just don't show as > purchaseable on their web-site? Absolutely. But, the 32-bit (i386) and 64-bit (x86_64) CD are separate. Did you download ISOs and burn your own CDs? You should have downloaded ISOs for x86_64. 
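A quick way to confirm what actually got installed, before downloading anything, is to query the kernel and toolchain architecture directly. This is a minimal sketch and assumes a stock install with rpm available:

  $ uname -m                      # prints x86_64 for a 64-bit kernel, i686/i386 for 32-bit
  $ rpm -q --qf '%{NAME}-%{VERSION}.%{ARCH}\n' kernel gcc glibc
                                  # the .ARCH suffix shows whether the installed kernel,
                                  # compiler and C library are i386/i686 or x86_64 packages
  $ grep -qw lm /proc/cpuinfo && echo "CPU supports 64-bit (long mode)"
                                  # the lm flag means the Opteron itself is 64-bit capable
                                  # even if the installed OS is not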
Both are available via RHN [ https://rhn.redhat.com/ ] if you have an entitlement (a.k.a. RHN subscription). If you had a box set of discs shipped to you, then there are both i386 and x86_64 CDs (4 for i386 and 5 for x86_64, IIRC) in the box, 1 DVD for each (with the SRPMS on the back side of the DVDs, which you do not need to install unless you need to recompile any packages) and a Documentation CD, as well. > Thanks for any information that you can provide and let me know if anyone > needs more information, I tried to keep it simple. HTH -- Lamont R. Peterson Senior Instructor Guru Labs, L.C. [ http://www.GuruLabs.com/ ] GPG Key fingerprint: F98C E31A 5C4C 834A BCAB 8CB3 F980 6C97 DC0D D409 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From b.j.smith at ieee.org Fri Feb 10 15:46:20 2006 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Fri, 10 Feb 2006 07:46:20 -0800 (PST) Subject: Won't install 64-bit version In-Reply-To: <8767947e0602100649n17c5c91elc33169c8f6d0e715@mail.gmail.com> Message-ID: <20060210154620.74943.qmail@web34114.mail.mud.yahoo.com> Michael Miles wrote: > I have a brand new multiprocessor AMD Opteron system and I > purchased a copy of RH Enterprise Linux Workstation v4 which > is supposed to support AMD64 processors. It does install on > the system, but it is definitely not 64-bit and when I try to > compile a x86_64 kernel on this system it won't compile > apparently because all the libraries including gcc are 32-bit. Correct, you have the i686 version. You need the x86-64 version, which is a _completely_separate_ set of CDs. -- Bryan J. Smith Professional, Technical Annoyance b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------- *** Speed doesn't kill, difference in speed does *** From b.j.smith at ieee.org Fri Feb 10 15:50:53 2006 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Fri, 10 Feb 2006 07:50:53 -0800 (PST) Subject: Won't install 64-bit version In-Reply-To: <20060210154620.74943.qmail@web34114.mail.mud.yahoo.com> Message-ID: <20060210155053.78434.qmail@web34113.mail.mud.yahoo.com> "Bryan J. Smith" wrote: > Correct, you have the i686 version. You need the x86-64 version, > which is a _completely_separate_ set of CDs. Let me clarify that -- they _may_ (and should?) come in the same box. But you do _not_ use the i686 (i386/x86) CD #1, you _must_ use the x86_64 (AMD64/EM64T) CD #1. -- Bryan J. Smith Professional, Technical Annoyance b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------- *** Speed doesn't kill, difference in speed does *** From mailing-list-amd64 at smithstone.co.uk Wed Feb 15 12:56:28 2006 From: mailing-list-amd64 at smithstone.co.uk (mailing-list-amd64 at smithstone.co.uk) Date: Wed, 15 Feb 2006 12:56:28 GMT Subject: Fwd: AMD x2 chips Message-ID: <43f324fc.e7.1010.1016479740@smithstone.co.uk> I am considering getting a AMD x2 based machine and was wondering if people could confirm that if i installed the 64bit of RHEL the system would be recognized as a dual system and not just a single core system Cheers Stephen From kalloltukai2004 at yahoo.co.in Wed Feb 15 14:12:53 2006 From: kalloltukai2004 at yahoo.co.in (kallol maji) Date: Wed, 15 Feb 2006 14:12:53 +0000 (GMT) Subject: need help! Message-ID: <20060215141253.89202.qmail@web8510.mail.in.yahoo.com> I have a machine with following configuration--- 1. 
AMD 64bit(3000+) processor 2. MSI RS480 motherboard 3. 80GB SATA SAMSUNG HD I am the first time linux user . I have installed Fedora Core4 X86_64 bit version. The o.s. has installed properly. But while login frame in textmode after i entered my login name and password the [localhost-login]# line comes. Here if i enter 'startx' command to get into the desktop it's giving error 'bash - unknown command not found' What should I do to enter into the desktop and how can I switch to GUI mode. Please help me ! ----------------- kallol --------------------------------- Jiyo cricket on Yahoo! India cricket Yahoo! Messenger Mobile Stay in touch with your buddies all the time. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailing-list-amd64 at smithstone.co.uk Wed Feb 15 14:25:14 2006 From: mailing-list-amd64 at smithstone.co.uk (mailing-list-amd64 at smithstone.co.uk) Date: Wed, 15 Feb 2006 14:25:14 GMT Subject: need help! Message-ID: <43f339ca.12d.1192.1089504201@smithstone.co.uk> You need to check that you have X installed with Fedora core there are 3 packages that you could have installed X , Gnome , KDE any of these should have installed X for you which leads me to believe that X is not installed if you do a which startx it should report back a path [stephen at captamerica ~]$ which startx /usr/X11R6/bin/startx [stephen at captamerica ~]$ something like that if not then you need to install it. You can do a yum groupinstall "GNOME Software Development" or replace the GNOME part with "KDE (K Desktop Environment)" Maybe that will work as im using the commands on a different RH disto ----- Original Message Follows ----- From: kallol maji To: amd64-list at redhat.com Subject: need help! Date: Wed, 15 Feb 2006 14:12:53 +0000 (GMT) > I have a machine with following configuration--- > 1. AMD 64bit(3000+) processor > 2. MSI RS480 motherboard > 3. 80GB SATA SAMSUNG HD > > I am the first time linux user . I have installed Fedora > Core4 X86_64 bit version. > The o.s. has installed properly. But while login frame > in textmode after i entered my login name and password the > [localhost-login]# line comes. Here if i enter 'startx' > command to get into the desktop it's giving error 'bash - > unknown command not found' > What should I do to enter into the desktop and how can I > switch to GUI mode. > Please help me ! > > ----------------- kallol > > > --------------------------------- > Jiyo cricket on Yahoo! India cricket > Yahoo! Messenger Mobile Stay in touch with your buddies > all the time. > > -- > amd64-list mailing list > amd64-list at redhat.com > https://www.redhat.com/mailman/listinfo/amd64-list From sbathe at gmail.com Wed Feb 15 14:32:42 2006 From: sbathe at gmail.com (Saurabh Bathe) Date: Wed, 15 Feb 2006 20:02:42 +0530 Subject: Fwd: AMD x2 chips In-Reply-To: <43f324fc.e7.1010.1016479740@smithstone.co.uk> References: <43f324fc.e7.1010.1016479740@smithstone.co.uk> Message-ID: <43F33B8A.5020809@gmail.com> mailing-list-amd64 at smithstone.co.uk wrote: > I am considering getting a AMD x2 based machine and was > wondering if people could confirm that > if i installed the 64bit of RHEL the system would be > recognized as a dual system and not just a single core > system Yes, it should. Make sure that you install the latest Update. Update 6 for RHEL3 and Update 2 for RHEL 4. --saurabh From b.j.smith at ieee.org Wed Feb 15 16:38:37 2006 From: b.j.smith at ieee.org (Bryan J. 
Smith) Date: Wed, 15 Feb 2006 08:38:37 -0800 (PST) Subject: Fwd: AMD x2 chips In-Reply-To: <43f324fc.e7.1010.1016479740@smithstone.co.uk> Message-ID: <20060215163838.59907.qmail@web34108.mail.mud.yahoo.com> mailing-list-amd64 at smithstone.co.uk wrote: > I am considering getting a AMD x2 based machine and was > wondering if people could confirm that > if i installed the 64bit of RHEL the system would be > recognized as a dual system and not just a single core > system >From the standpoint of not only the APIC, but even the HyperTransport interconnect, you have _two_physical_ processors. No "fancy bridging" or anything is required for AMD Athlon64 x2 or Opteron xx5 products, the actual interconnects are to _two_different_, _physical_ ICs. In fact, because of how AMD's Hammer designs work with HyperTransport, putting the L2 cache or memory interconnect on the CPU IC is _optional_. In the newer Linux/x86-64 kernels, there's is no "separate" MP kernel at all. -- Bryan J. Smith Professional, Technical Annoyance b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------- *** Speed doesn't kill, difference in speed does *** From bill at cse.ucdavis.edu Wed Feb 15 17:29:47 2006 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Wed, 15 Feb 2006 09:29:47 -0800 Subject: Fwd: AMD x2 chips In-Reply-To: <20060215163838.59907.qmail@web34108.mail.mud.yahoo.com> References: <43f324fc.e7.1010.1016479740@smithstone.co.uk> <20060215163838.59907.qmail@web34108.mail.mud.yahoo.com> Message-ID: <20060215172947.GB8583@cse.ucdavis.edu> On Wed, Feb 15, 2006 at 08:38:37AM -0800, Bryan J. Smith wrote: > mailing-list-amd64 at smithstone.co.uk wrote: > > I am considering getting a AMD x2 based machine and was > > wondering if people could confirm that > > if i installed the 64bit of RHEL the system would be > > recognized as a dual system and not just a single core > > system > > >From the standpoint of not only the APIC, but even the HyperTransport > interconnect, you have _two_physical_ processors. No "fancy Not sure what the definition of physical is here. It's different than say a dual opteron single socket system. There is no hypertransport between the two cores. > bridging" or anything is required for AMD Athlon64 x2 or Opteron xx5 > products, the actual interconnects are to _two_different_, _physical_ > ICs. Nor "_two_different" or "_physical", although I suspect it's just a terminology difference. Both cores share the same die. AMD dual cores processors are on a single die, with a single connection to memory, and a single connection to hypertransport. So the motherboard sees no difference between a single core and a dual core. There is a small difference in the CPU initialization, so make sure your motherboard/BIOS mentions support for dual core AMD chips. Both cores are behind a system request interface (SRI), and a crossbar. I wouldn't expect any problems from the newer RHEL or Fedora distributions. Of course this only doubles performance (at the same clock speed) if your code is relatively cache friendly, since your sharing a single memory bus between 2 cores. From my experience even memory intensive codes often scale in the 1.5-1.8 times faster range. Worst case, of course, is 0% faster. Actually the worst I've seen is around 5-10%, even on completely memory bound code a dual core seems to manage to keep the memory bus slightly busier. 
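Back on Stephen's original question: whether both cores are actually recognized can be checked directly once the system is up. A sketch (field names in /proc/cpuinfo vary a little between 2.6 kernel versions):

  $ grep -c '^processor' /proc/cpuinfo     # should print 2 on an X2 running an SMP kernel
  $ grep -E '^(physical id|siblings|cpu cores)' /proc/cpuinfo
                                           # two logical CPUs sharing one "physical id" is a
                                           # dual-core, single-socket part
  $ uname -r                               # on RHEL 3/4 the kernel-smp variant is needed
                                           # for the second core to be used at all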
-- Bill Broadley Computational Science and Engineering UC Davis From b.j.smith at ieee.org Wed Feb 15 18:08:19 2006 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Wed, 15 Feb 2006 10:08:19 -0800 (PST) Subject: Fwd: AMD x2 chips In-Reply-To: <20060215172947.GB8583@cse.ucdavis.edu> Message-ID: <20060215180819.10520.qmail@web34105.mail.mud.yahoo.com> Bill Broadley wrote: > Not sure what the definition of physical is here. It's different > than say a dual opteron single socket system. There is no > hypertransport between the two cores. There is no Socket and HyperTransport trace on a mainboard, no. Since the A64 x2 or Opteron xx5 model uses a single socket, you only get 384 traces (128-bit memory) whereas two, physical Socket-940 sockets will give you two sets of 384 traces. In fact, some "cheap" Opteron 2xx mainboards have DDR memory channels connected to only *1* Opteron 2xx. A system diagram such as follows: But inside the A64 x2 or Opteron xx5, you have several ICs. It various on implementation, but the commonality is that not much is changed from external as glueless HyperTransport interfaces are used. That really makes life easier for AMD. I haven't researched if 1) each core gets a single DDR channel, 2) one core is wired with both DDR channels and the other core accesses it over HyperTransport or 3) neither core has memory directly attached, and accessed over HyperTransport where the actual DDR controller is at. A few enthusiast sites report #3, but I've found many enthusiast sites to be incorrect. > Nor "_two_different" or "_physical", although I suspect it's just a > terminology difference. Both cores share the same die. Same die != same IC. But yeah, it's easy to use terminology that isn't consistent or is confusing. My apologies. > AMD dual cores processors are on a single die, with a single > connection to memory, and a single connection to hypertransport. External memory and external HyperTransport, yes. But how they are connected internally, it's _not_ bridging. > So the motherboard sees no difference between a single core > and a dual core. Actually, that's how the processor makes itself appear to the APIC. That really opens up a can-o-worms in general when we start looking at how the mainboard POST sees X, how the system OS seems Y, etc... > There is a small difference in the CPU initialization, so make > sure your motherboard/BIOS mentions support for dual core AMD > chips. Yes, because the APIC needs to be setup correctly. > Both cores are behind a system request interface (SRI), and a > crossbar. But what is that crossbar using? Is it HyperTransport? From my understanding, it is -- all glueless. Or is it a localized EV6 crossbar (which can be up to 16 nodes)? I assume not, because that would add more complexity. My further question is how is the memory connected. One DDR channel to each IC/core? Both DDR channels to one IC/core? Or the memory into that crossbar and _not_ directly connected to either core? > I wouldn't expect any problems from the newer RHEL or Fedora > distributions. The APIC handles pretty much all that. The only consideration is more performance -- like processor affinity for programs and I/O. > Of course this only doubles performance (at the same clock speed) > if your code is relatively cache friendly, since your sharing a > single memory bus between 2 cores. The L1 cache is localized to each IC/core. The L2 cache seems to also be localized to each IC/core, although I haven't verified this. 
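On the affinity point: pinning can be done entirely from userspace. The sketch below assumes the schedutils package (taskset) is installed and uses ./membound as a placeholder name for any memory-hungry program:

  $ taskset -c 0 ./membound                # start one copy pinned to core 0
  $ taskset -c 1 ./membound                # start a second copy pinned to core 1
  $ taskset -p -c 1 12345                  # or re-pin an already-running PID (12345 here)

Timing one copy alone against two copies running simultaneously, one per core, gives a rough feel for the 1.5-1.8x scaling Bill quoted for memory-bound code.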
Again, how the memory is physically connected to each IC/core inside the package, I don't know. There's a lot of conflicting info out there. I guess it's time I just read the AMD spec sheets. > From my experience even memory intensive codes often scale in the > 1.5-1.8 times faster range. Worst case, of course, is 0% faster. > Actually the worst I've seen is around 5-10%, even on completely > memory bound code a dual core seems to manage to keep the memory > bus slightly busier. Which _would_ suggest that HyperTransport extends internally. One of the downsides of HyperTransport is that it is a broadcast approach. -- Bryan J. Smith Professional, Technical Annoyance b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------- *** Speed doesn't kill, difference in speed does *** From b.j.smith at ieee.org Wed Feb 15 18:15:54 2006 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Wed, 15 Feb 2006 10:15:54 -0800 (PST) Subject: Fwd: AMD x2 chips (omission) In-Reply-To: <20060215180819.10520.qmail@web34105.mail.mud.yahoo.com> Message-ID: <20060215181554.76190.qmail@web34110.mail.mud.yahoo.com> Bryan J. Smith" wrote: > In fact, some "cheap" Opteron 2xx mainboards have DDR memory > channels connected to only *1* Opteron 2xx. A system diagram > such as follows: Oops, forgot the illustration: http://www.samag.com/documents/s=9408/sam0411b/0411b_f6.htm A couple of Tyan and other cheap, dual Socket-940 mainboards are like this. -- Bryan J. Smith Professional, Technical Annoyance b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------- *** Speed doesn't kill, difference in speed does *** From wam at HiWAAY.net Wed Feb 15 20:25:58 2006 From: wam at HiWAAY.net (William A. Mahaffey III) Date: Wed, 15 Feb 2006 14:25:58 -0600 Subject: Fwd: AMD x2 chips In-Reply-To: <20060215172947.GB8583@cse.ucdavis.edu> References: <43f324fc.e7.1010.1016479740@smithstone.co.uk> <20060215163838.59907.qmail@web34108.mail.mud.yahoo.com> <20060215172947.GB8583@cse.ucdavis.edu> Message-ID: <43F38E56.4040307@HiWAAY.net> Bill Broadley wrote: >Of course this only doubles performance (at the same clock speed) if your >code is relatively cache friendly, since your sharing a single memory >bus between 2 cores. From my experience even memory intensive codes >often scale in the 1.5-1.8 times faster range. Worst case, of course, >is 0% faster. Actually the worst I've seen is around 5-10%, even on >completely memory bound code a dual core seems to manage to keep the >memory bus slightly busier. > > Some of the Tyan boards described on their website seem to imply that they have 4-way interleaved RAM (Quad-channel they call it) for some of their dual-core boards, that might be the difference you mention .... $0.02, no more, no less :-). -- William A. Mahaffey III ---------------------------------------------------------------------- "The M1 Garand is without doubt the finest implement of war ever devised by man." -- Gen. George S. Patton From hahn at physics.mcmaster.ca Thu Feb 16 04:00:17 2006 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Wed, 15 Feb 2006 23:00:17 -0500 (EST) Subject: Fwd: AMD x2 chips In-Reply-To: <20060215180819.10520.qmail@web34105.mail.mud.yahoo.com> Message-ID: > But inside the A64 x2 or Opteron xx5, you have several ICs. It no, just one IC. > That really makes life easier for AMD. you mean in the trivial sense that each core is replicated? sure. 
> I haven't researched if 1) each core gets a single DDR channel, 2) > one core is wired with both DDR channels and the other core accesses > it over HyperTransport or 3) neither core has memory directly > attached, and accessed over HyperTransport where the actual DDR > controller is at. A few enthusiast sites report #3, but I've found > many enthusiast sites to be incorrect. none: http://multicore.amd.com/Products/AMD_Opteron_Overview.pdf the chip is based on a crossbar-like arbiter that connects HT ports, DRAM controller and the cores. the latter connect together, apparently. they might be different ports on the crossbar, but they're definitely not using HT, since a DC chip is a single HT "node" for addressing purposes. From b.j.smith at ieee.org Thu Feb 16 11:34:59 2006 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 16 Feb 2006 06:34:59 -0500 Subject: Fwd: AMD x2 chips In-Reply-To: References: Message-ID: <1140089699.5941.23.camel@bert64.oviedo.smithconcepts.com> On Wed, 2006-02-15 at 23:00 -0500, Mark Hahn wrote: > no, just one IC. One "die" -- but each CPU is an independent integrated circuit (IC). > you mean in the trivial sense that each core is replicated? sure. I meant versus Intel who has to do additional bridging inside theirs. > none: > http://multicore.amd.com/Products/AMD_Opteron_Overview.pdf > the chip is based on a crossbar-like arbiter that connects HT ports, > DRAM controller and the cores. the latter connect together, apparently. > they might be different ports on the crossbar, but they're definitely > not using HT, since a DC chip is a single HT "node" for addressing purposes. First off, that's _not_ a technical manual. Secondly, AMD _does_ reference it's "Direct Connect Architecture" and other technologies. It would _not_ surprise me if that Crossbar is _indeed_ just a HyperTransport interconnect -- or some kind of more primitive EV6 interconnect. Remember, AMD is doing multi-_board_ with HyperTransport as well. Literally "daisy chaining" HyperTransport from each 4 CPU board to another over additional HyperTransport interconnects. So what's to say they're not doing the same _inside_ each die? -- Bryan J. Smith Professional, technical annoyance mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------------------------ Overworked IT Professional #52: Your wife can only reach you via e-mail, but it is filtered out because it says ... "I Love You." From b.j.smith at ieee.org Thu Feb 16 11:36:13 2006 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 16 Feb 2006 06:36:13 -0500 Subject: Fwd: AMD x2 chips In-Reply-To: <43F38E56.4040307@HiWAAY.net> References: <43f324fc.e7.1010.1016479740@smithstone.co.uk> <20060215163838.59907.qmail@web34108.mail.mud.yahoo.com> <20060215172947.GB8583@cse.ucdavis.edu> <43F38E56.4040307@HiWAAY.net> Message-ID: <1140089773.5941.26.camel@bert64.oviedo.smithconcepts.com> On Wed, 2006-02-15 at 14:25 -0600, William A. Mahaffey III wrote: > Some of the Tyan boards described on their website seem to imply that > they have 4-way interleaved RAM (Quad-channel they call it) for some of > their dual-core boards, that might be the difference you mention .... > $0.02, no more, no less :-). Interleaving is just a performance hack to minimize latency. But it makes sense because you could interleave core requests. -- Bryan J. 
Smith Professional, technical annoyance mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------------------------ Overworked IT Professional #52: Your wife can only reach you via e-mail, but it is filtered out because it says ... "I Love You." From b.j.smith at ieee.org Thu Feb 16 11:40:28 2006 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 16 Feb 2006 06:40:28 -0500 Subject: Fwd: AMD x2 chips In-Reply-To: <1140089699.5941.23.camel@bert64.oviedo.smithconcepts.com> References: <1140089699.5941.23.camel@bert64.oviedo.smithconcepts.com> Message-ID: <1140090028.5941.31.camel@bert64.oviedo.smithconcepts.com> On Thu, 2006-02-16 at 06:34 -0500, Bryan J. Smith wrote: > First off, that's _not_ a technical manual ... > So what's to say they're not doing the same _inside_ each die? Here's the deal in a nutshell, I have seen _no_ technical specifics on _how_ they are doing it. All I've seen is that their "Direct Connect Architecture" is used inside of the chip, just like outside. What that means is anyone's guess. Old, "pure" EV6 was up to a 16-node crossbar -- including between CPUs, memory and I/O. HyperTransport is just a generic, glueless partial-mesh of nodes -- and you can have memory _separate_ from CPU (as well as I/O) connected by HyperTransport (although the memory controller would have to be on the "same node" as the memory -- or just a full CPU). In any case, there is _not_ the "complex bridging" of Intel's dual-core. So as long as the APIC is setup correctly (which is done with little more than BIOS update), dual-core is pretty much no different -- sans performance. Just like multiple 4 CPU boards connected together via HyperTransport. -- Bryan J. Smith Professional, technical annoyance mailto:b.j.smith at ieee.org http://thebs413.blogspot.com ------------------------------------------------------------ Overworked IT Professional #52: Your wife can only reach you via e-mail, but it is filtered out because it says ... "I Love You." From hahn at physics.mcmaster.ca Thu Feb 16 13:06:50 2006 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Thu, 16 Feb 2006 08:06:50 -0500 (EST) Subject: Fwd: AMD x2 chips In-Reply-To: <1140089699.5941.23.camel@bert64.oviedo.smithconcepts.com> Message-ID: > > no, just one IC. > > One "die" -- but each CPU is an independent integrated circuit (IC). the cores on AMD's dual-core chips are most definitely not independent ICs. > > you mean in the trivial sense that each core is replicated? sure. > > I meant versus Intel who has to do additional bridging inside theirs. the opposite is true: Intel's DC chips are much closer to being separate ICs, since they're attached with even less bridging (present two bus loads, for instance). > > none: > > http://multicore.amd.com/Products/AMD_Opteron_Overview.pdf > > the chip is based on a crossbar-like arbiter that connects HT ports, > > DRAM controller and the cores. the latter connect together, apparently. > > they might be different ports on the crossbar, but they're definitely > > not using HT, since a DC chip is a single HT "node" for addressing purposes. > > First off, that's _not_ a technical manual. it meets your level of knowlege. the diagram is the same one that AMD has presented from the very beginning, always showing core-srq-xbar. before DC, the srq seemed out-of-place. > Secondly, AMD _does_ reference it's "Direct Connect Architecture" and > other technologies. 
It would _not_ surprise me if that Crossbar is > _indeed_ just a HyperTransport interconnect -- or some kind of more > primitive EV6 interconnect. > > Remember, AMD is doing multi-_board_ with HyperTransport as well. > Literally "daisy chaining" HyperTransport from each 4 CPU board to > another over additional HyperTransport interconnects. HT has never had a 4-CPU limit. but perhaps you're confusing this with Newisys's Horus. > So what's to say they're not doing the same _inside_ each die? so they're going to invent a whole new HT addressing scheme just so they can do glueless HT within the chip? that makes no sense. it would be pointless and expensive generality. From b.j.smith at ieee.org Thu Feb 16 14:54:29 2006 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 16 Feb 2006 06:54:29 -0800 (PST) Subject: Fwd: AMD x2 chips In-Reply-To: Message-ID: <20060216145429.15156.qmail@web34102.mail.mud.yahoo.com> Mark Hahn wrote: > the cores on AMD's dual-core chips are most definitely not > independent ICs. So you're saying the ALUs, FPUs, L1 cache, L2 cache, etc... logic is merged between the two cores? I'm talking a self-contained integrated circuit that operates on its own (short of any power/pin-out). From my understanding, there's not change in what each core is externally, at least logically (and fundamentally electrically) from independent dies. Now if you define it in different ways, please detail, I'm curious. > the opposite is true: Intel's DC chips are much closer to being > separate ICs, since they're attached with even less bridging > (present two bus loads, for instance). Huh? How does interconnect two "front side busses" to a single memory controller hub (MCH) without bridging them _before_? > it meets your level of knowlege. Wow, that wasn't an insult. Thanx. I'm not your regular Joe IT puke, but thanx for the assumption. > the diagram is the same one that AMD has presented from the > very beginning, always showing core-srq-xbar. before DC, > the srq seemed out-of-place. > HT has never had a 4-CPU limit. When did *I* say it did? Now you're _really_ reading into things. I just said that's how it is being _commonly_implemented_ in multi-boards. A partial mesh of HyperTransport between CPUs (and I/O), and then using one or more HyperTransport links from one of the CPUs to the next board. > but perhaps you're confusing this with Newisys's Horus. No, and not IBM's new AGTL+ crossbar either. > so they're going to invent a whole new HT addressing scheme > just so they can do glueless HT within the chip? No, quite the opposite! I'm saying the addressing scheme is the _exact_same_. I'm not following your logic at all here. > that makes no sense. it would be pointless and expensive > generality. I'm not following your logic at all here. -- Bryan J. Smith Professional, Technical Annoyance b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------- *** Speed doesn't kill, difference in speed does *** From hahn at physics.mcmaster.ca Thu Feb 16 16:00:10 2006 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Thu, 16 Feb 2006 11:00:10 -0500 (EST) Subject: Fwd: AMD x2 chips In-Reply-To: <20060216145429.15156.qmail@web34102.mail.mud.yahoo.com> Message-ID: > > the cores on AMD's dual-core chips are most definitely not > > independent ICs. > > So you're saying the ALUs, FPUs, L1 cache, L2 cache, etc... logic is > merged between the two cores? no, that's obviously not the case - you can see it from the die photos. 
the cores are clearly laid out as separate blocks - that doesn't mean they're separate ICs, by any normal definition. for instance, the cores certainly don't have a pad ring and IO drivers, just to connect to the SRQ (another on-die block within the same IC.) > I'm talking a self-contained integrated circuit that operates on its > own (short of any power/pin-out). From my understanding, there's not > change in what each core is externally, at least logically (and > fundamentally electrically) from independent dies. normally, this is simply called a block. most large ICs have lots of blocks - most caches are multiple blocks, for instance. that doesn't mean that the cache can sanely be called multiple ICs. > > the opposite is true: Intel's DC chips are much closer to being > > separate ICs, since they're attached with even less bridging > > (present two bus loads, for instance). > > Huh? How does interconnect two "front side busses" to a single > memory controller hub (MCH) without bridging them _before_? at least the initial Intel DC implementation literally had the FSB extended on-die to two electrically independent cores. since they didn't have a "real" internal bus arbiter, the chip actually presented 2 bus-loads to the system FSB. > > so they're going to invent a whole new HT addressing scheme > > just so they can do glueless HT within the chip? > > No, quite the opposite! I'm saying the addressing scheme is the > _exact_same_. I'm not following your logic at all here. but you don't seem to be getting the fact that the current HT has an 8-node limit. since you can already buy an 8-socket, 16-core machine (without any sort of HT switching or bridging), it's clear that each DC socket acts as a single HT node. it simply can't somehow have additional HT address range on-chip. that's the topic: AMD's diagrams and Bill's message point out that within a single AMD DC chip, the memory, HT ports and core(s) are connected by a non-HT xbar. the cores are not separately HT-addressable and neither is the dram interface. From b.j.smith at ieee.org Thu Feb 16 19:46:37 2006 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 16 Feb 2006 11:46:37 -0800 (PST) Subject: Fwd: AMD x2 chips In-Reply-To: Message-ID: <20060216194637.4774.qmail@web34103.mail.mud.yahoo.com> Mark Hahn wrote: > no, that's obviously not the case - you can see it from the die > photos. the cores are clearly laid out as separate blocks - that > doesn't mean they're separate ICs, by any normal definition. for > instance, the cores certainly don't have a pad ring and IO drivers, > just to connect to the SRQ (another on-die block within the same > IC.) So in other words, you're talking only _packaging_ differences. I'm talking about _functional_ units here. > normally, this is simply called a block. most large ICs have > lots of blocks - most caches are multiple blocks, for instance. > that doesn't mean that the cache can sanely be called multiple > ICs. If the peripheral or other logic around the core is _separate_ -- it literally connects through some bus logic or other arbitration that is timed _differently_ than the core, then yes, I consider it a _separate_ IC. It's just on the same die, moving whatever bus or other arbitration logic inside of the die. If the peripherals are integrated as part of the core, timing and lack of other drivers (such as segmentation by a general bus), then it's the same IC AFAICT. Today's foundaries are definitely blurring the distinction, but we can play this game all day. 
Packaging is not how I'd differentiate between an IC and a die -- it makes the two the _exact_same_. > at least the initial Intel DC implementation literally had > the FSB extended on-die to two electrically independent cores. > since they didn't have a "real" internal bus arbiter, the chip > actually presented 2 bus-loads to the system FSB. But it was still bridging. There is no way (AFAICT) to just connect both cores directly to the FSB. > but you don't seem to be getting the fact that the current HT > has an 8-node limit. since you can already buy an 8-socket, > 16-core machine (without any sort of HT switching or bridging), > it's clear that each DC socket acts as a single HT node. it simply > can't somehow have additional HT address range on-chip. Then how do you explain more than 16 sockets using the 4 CPU boards with HyperTransport connectors between? I do _not_disagree_ with you that each DC socket acts as a single HT node from the standpoint of the socket. I'm merely saying that AMD uses HyperTransport inside of the processor, or at least some sort of switched EV6. > that's the topic: AMD's diagrams and Bill's message point out that > within a single AMD DC chip, the memory, HT ports and core(s) are > connected by a non-HT xbar. the cores are not separately > HT-addressable and neither is the dram interface. Then what is the xbar? Is it EV6? I've always wondered if that's what used to connect a single core to its memory controller and HyperTransport anyway. In any case, the changes have been _minimal_ for AMD AFAICT. The package changes _little_ with the addition of more CPU cores -- however they are switching it on-die. -- Bryan J. Smith Professional, Technical Annoyance b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------- *** Speed doesn't kill, difference in speed does *** From hahn at physics.mcmaster.ca Thu Feb 16 20:05:00 2006 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Thu, 16 Feb 2006 15:05:00 -0500 (EST) Subject: Fwd: AMD x2 chips In-Reply-To: <20060216194637.4774.qmail@web34103.mail.mud.yahoo.com> Message-ID: > > no, that's obviously not the case - you can see it from the die > > photos. the cores are clearly laid out as separate blocks - that > > doesn't mean they're separate ICs, by any normal definition. for > > instance, the cores certainly don't have a pad ring and IO drivers, > > just to connect to the SRQ (another on-die block within the same > > IC.) > > So in other words, you're talking only _packaging_ differences. > I'm talking about _functional_ units here. I'm talking about the physical die: the cores (and memory controller, and HT controller, and SRT and xbar) are all functional units. that doesn't make them "separate ICs". > is timed _differently_ than the core, then yes, I consider it a > _separate_ IC. It's just on the same die, moving whatever bus or OK then, you should know that the rest of the world does not use IC that way. > other arbitration logic inside of the die. then by your definition of IC, any block or functional unit is a separate IC. so the core has a "FPU" IC, for instance. or a register-file IC. you're welcome to use words this way, but don't expect people to conform. > If the peripherals are integrated as part of the core, timing and > lack of other drivers (such as segmentation by a general bus), then > it's the same IC AFAICT. this is a bizarre distinction. 
timing has nothing at all to do with whether something is a separate IC (for instance, double-clocked ALUs on a P4 would be separate ICs). what "general bus" means, I can only guess, but they they don't usually make any sense on-die. the original DC P4's did actually have a "general bus" on-die, which is precisely why they present two FSB loads, and received a lot of sneering. > > but you don't seem to be getting the fact that the current HT > > has an 8-node limit. since you can already buy an 8-socket, > > 16-core machine (without any sort of HT switching or bridging), > > it's clear that each DC socket acts as a single HT node. it simply > > can't somehow have additional HT address range on-chip. > > Then how do you explain more than 16 sockets using the 4 CPU boards > with HyperTransport connectors between? I'd appreciate a reference to this 16-socket system; is it somehow using the new M2 opterons? the Horus people clearly think they have something unique in their system, which exists precisely to do the extra bookkeeping to make HT think there are only 8 nodes, but still "proxy" them into a single memory address space. > node from the standpoint of the socket. I'm merely saying that AMD > uses HyperTransport inside of the processor, or at least some sort of there's absolutely no reason to think AMD uses HT "inside" the chip (presumably between the HT units, core(s) and memory controller.) it would be pointless to add that generality, complex, and more expensive. > > that's the topic: AMD's diagrams and Bill's message point out that > > within a single AMD DC chip, the memory, HT ports and core(s) are > > connected by a non-HT xbar. the cores are not separately > > HT-addressable and neither is the dram interface. > > Then what is the xbar? > Is it EV6? an xbar is a device which can switch between any of its ports. there's really no need to refer to Alpha architecture here. (I have a whole room full of EV6's, and xbar is really a logical construct, since not every possible connection can be made.) > I've always wondered if that's what used to connect a single core to > its memory controller and HyperTransport anyway. you still didn't look at AMD's diagram? the connection is core->SRQ->xbar->memctrl. > In any case, the changes have been _minimal_ for AMD AFAICT. The correct: another port on the srq. they even brag about this, how they had the foresight to design for DC in the original k8. From b.j.smith at ieee.org Thu Feb 16 23:00:51 2006 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 16 Feb 2006 15:00:51 -0800 (PST) Subject: Fwd: AMD x2 chips In-Reply-To: Message-ID: <20060216230051.45736.qmail@web34112.mail.mud.yahoo.com> Mark Hahn wrote: > then by your definition of IC, any block or functional unit is > a separate IC. so the core has a "FPU" IC, for instance. > or a register-file IC. Huh? Now I have to _strongly_disagree_. MEAGs (one-hots) and other control logic do _not_ select registers, ALU pipes, FPU pipes, etc... across different cores, _only_ in the same core. The control logic is not fetching and decoding instructions for blocks from one core for the _other_ core. Or are you now just trying to "meet [my] level of knowlege"? (more like you ass-u-me'd what my "level of knowledge" was ;-) > you're welcome to use words this way, but don't expect people > to conform. And you're over-simplifying. The crossbar between the cores is _radically_different_ than between the "blocks." The control unit of the cores do _not_ interface. 
You're crossing the concepts of building peripherals around a core with having two _separate_ cores. I don't know why you're doing that. I have _no_ position. I'm just trying to say that the cores are virtually _separate_ ICs. At least from the standpoint of _whatever_ the crossbar is -- which seems to be _no_different_ whether it is single _or_ dual-core (or multi-core for that matter). Which is why no special support is required, sans maybe APIC register setup and other details -- which is what a BIOS update provides. > this is a bizarre distinction. timing has nothing at all to do > with whether something is a separate IC (for instance, > double-clocked ALUs on a P4 would be separate ICs). I meant from the standpoint of _scheduling_ instructions for a pipelines. And as I stepped back a bit in this e-mail, I really meant anything driven by and integrated around the control logic. > what "general bus" means, I can only guess, but they they don't > usually make any sense on-die. I only said it because you tell me what "busses" and "arbitration logic" fits your definition. > the original DC P4's did actually have a "general bus" on-die, > which is precisely why they present two FSB loads, and received > a lot of sneering. Okay, I didn't know that. My ignorance there. > I'd appreciate a reference to this 16-socket system; is it somehow > using the new M2 opterons? the Horus people clearly think they > have something unique in their system, which exists precisely to > do the extra bookkeeping to make HT think there are only 8 nodes, > but still "proxy" them into a single memory address space. I'm talking about the modular 4-socket mainboards that can be combined into 16 (or greater) sockets. The 4-socket uses 2 HyperTransports between each CPU, and then 2-sockets have 1 HyperTransport to the next board (ignoring the HyperTransport links used for I/O). The common configuration I've seen is 4 x 4-socket mainboards. O---O+++O---O | | | | O---O O---O + + + + O---O O---O | | | | O---O+++O---O > there's absolutely no reason to think AMD uses HT "inside" the chip > (presumably between the HT units, core(s) and memory controller.) > it would be pointless to add that generality, complex, and more > expensive. Okay, then what is it? How does it appear? Inside? Outside? Furthermore, how does even the _single_ core+L1+L2 talk to the on-board memory and HyperTransport? Is that a Xbar too? Possible an internal implementation of EV6? Everything I've read suggests that there are _no_changes_ in the dual-core from the single-core. Which suggests there is the same switch used inside. > an xbar is a device which can switch between any of its ports. > there's really no need to refer to Alpha architecture here. > (I have a whole room full of EV6's, and xbar is really a logical > construct, since not every possible connection can be made.) Okay then, virtually up to a 16-node Xbar (even if not physical). The 40-bit address and other bus limitations of EV6 are readily apparent in A64/Opteron, which suggests that other than the 64-bit ALU, PAE 52-bit "Long Mode" and 8 new XMM registers, A64/Opteron is little changed at the core from Athlon and its EV6. Which suggests they moved the Xbar inside. Which is why going dual and even multicore is easy. And I've been searching for a _long_time_ to find out what is actually at work. > you still didn't look at AMD's diagram? the connection is > core->SRQ->xbar->memctrl. And what is the SRQ/Xbar? > correct: another port on the srq. 
they even brag about this, how > they had the foresight to design for DC in the original k8. Which is what? Is it just a new implementation of EV6 on-die? Everything I've read suggests so. My apologies for calling it "HyperTransport." I have to speculate because this info has been hard to come by. -- Bryan J. Smith Professional, Technical Annoyance b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------- *** Speed doesn't kill, difference in speed does *** From b.j.smith at ieee.org Thu Feb 16 23:06:42 2006 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 16 Feb 2006 15:06:42 -0800 (PST) Subject: Fwd: AMD x2 chips (Clarification) In-Reply-To: <20060216230051.45736.qmail@web34112.mail.mud.yahoo.com> Message-ID: <20060216230643.71600.qmail@web34104.mail.mud.yahoo.com> "Bryan J. Smith" wrote: > The 40-bit address and other bus limitations of EV6 are readily > apparent in A64/Opteron, which suggests that other than the 64-bit > ALU, PAE 52-bit "Long Mode" and 8 new XMM registers, A64/Opteron is > little changed at the core from Athlon and its EV6. Let me rephrase/clarify that ... "little changed at the core from Athlon MP and its EV6 -- and especially the local AGPgarts, which are now full I/O MMUs" There is so much EV6 footprint around A64/Opteron that drawing the conclusions are unavoidable. I've been searching but I'm sure AMD isn't about to admit they reused a lot of ideas. Especially when they can market the partial-mesh interconnect of HyperTransport. I sure wish they'd just come out and say whether the Xbar and the A64/Opteron core has lineage to EV6's Xbar. That would finally explain a LOT (or at least CONFIRM it). -- Bryan J. Smith Professional, Technical Annoyance b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------- *** Speed doesn't kill, difference in speed does *** From hahn at physics.mcmaster.ca Fri Feb 17 00:25:02 2006 From: hahn at physics.mcmaster.ca (Mark Hahn) Date: Thu, 16 Feb 2006 19:25:02 -0500 (EST) Subject: Fwd: AMD x2 chips In-Reply-To: <20060216230051.45736.qmail@web34112.mail.mud.yahoo.com> Message-ID: > And you're over-simplifying. The crossbar between the cores is > _radically_different_ than between the "blocks." The control unit of > the cores do _not_ interface. You're crossing the concepts of > building peripherals around a core with having two _separate_ cores. good god! look, it's simple: you claimed that the two cores on an AMD DC chip were separate ICs. this is just plain wrong, and an abuse of the well-established concept of an IC. > > I'd appreciate a reference to this 16-socket system; is it somehow > > using the new M2 opterons? the Horus people clearly think they > > have something unique in their system, which exists precisely to > > do the extra bookkeeping to make HT think there are only 8 nodes, > > but still "proxy" them into a single memory address space. > > I'm talking about the modular 4-socket mainboards that can be > combined into 16 (or greater) sockets. The 4-socket uses 2 > HyperTransports between each CPU, and then 2-sockets have 1 > HyperTransport to the next board (ignoring the HyperTransport links > used for I/O). The common configuration I've seen is 4 x 4-socket > mainboards. > > O---O+++O---O > | | | | > O---O O---O > + + > + + > O---O O---O > | | | | > O---O+++O---O I don't believe these exist in the AMD world - do you have a reference? 
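As an aside, the one-HT-node-per-socket behaviour is visible from a running system. A sketch, assuming the numactl package is installed and a 2.6 kernel built with NUMA support:

  $ numactl --hardware                     # lists the nodes the kernel sees: one node holding
                                           # both cores on a single-socket X2, one node per
                                           # socket (each with local memory) on a multi-Opteron
  $ cat /sys/devices/system/node/node0/cpumap
                                           # bitmask of the logical CPUs attached to node 0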
perhaps you're thinking of the Intel world, where this design could be referring to IBM's X3 chipset, for instance, where quads are a common building block. > Furthermore, how does even the _single_ core+L1+L2 talk to the > on-board memory and HyperTransport? Is that a Xbar too? Possible an > internal implementation of EV6? sure, the diagram for a K8 is largely identical for all models, and like a some EV6-based systems, it is xbar-based. > Everything I've read suggests that there are _no_changes_ in the > dual-core from the single-core. Which suggests there is the same > switch used inside. you just aren't listening. Bill and I said that: that DC simply adds another core onto the SRQ, which itself sits on the xbar. > > an xbar is a device which can switch between any of its ports. > > there's really no need to refer to Alpha architecture here. > > (I have a whole room full of EV6's, and xbar is really a logical > > construct, since not every possible connection can be made.) > > Okay then, virtually up to a 16-node Xbar (even if not physical). now you're conflating the HT fabric with the within-K8 xbar. really, the whole point here is that they're necessarily different because the HT fabric is limited to 8 nodes, which does not count all the xbar ports, and in particular doesn't change with DC. > The 40-bit address and other bus limitations of EV6 are readily > apparent in A64/Opteron, which suggests that other than the 64-bit > ALU, PAE 52-bit "Long Mode" and 8 new XMM registers, A64/Opteron is > little changed at the core from Athlon and its EV6. this is bizarre - who are you arguing with? > Which suggests they moved the Xbar inside. > Which is why going dual and even multicore is easy. it's easy because it the cores simply talk to the SRQ. > > you still didn't look at AMD's diagram? the connection is > > core->SRQ->xbar->memctrl. > > And what is the SRQ/Xbar? System Request Queue is a unit that connects the core(s) to the xbar. the xbar is a switching network that permits all the units (SRQ, 3xHT, memctrl) to talk to each other. the xbar can't be HT-based because it connects more units in total than HT can address. > > correct: another port on the srq. they even brag about this, how > > they had the foresight to design for DC in the original k8. > > Which is what? Is it just a new implementation of EV6 on-die? > Everything I've read suggests so. sigh. I have no idea what you're pushing with all the EV6 comparisons. the EV6 was a good CPU for its time. the crossbar was actually in the typhoon chipset (ES4x). I have a roomfull of them still running. they were NOT hypertransport. From b.j.smith at ieee.org Fri Feb 17 00:32:48 2006 From: b.j.smith at ieee.org (Bryan J. Smith) Date: Thu, 16 Feb 2006 16:32:48 -0800 (PST) Subject: Fwd: AMD x2 chips In-Reply-To: Message-ID: <20060217003248.67685.qmail@web34106.mail.mud.yahoo.com> Mark Hahn wrote: > sigh. I have no idea what you're pushing with all the EV6 > comparisons. the EV6 was a good CPU for its time. the crossbar > was actually in the typhoon chipset (ES4x). I have a roomfull of > them still running. they were NOT hypertransport. I _know_ that. I've stated that I _mispoke_ in my simplification. Okay, I've admitted this. I'm just curious how the SRQ/XBar works. BTW, as another gentleman pointed out off-list, there are 30+ definitions of what comprises an "IC." I looked several up and it's a rather arbitrary term, and _several_ could fit my definition. Let's just leave this be and _end_ the "analness." 
If that isn't enough: You were right, I was _wrong_. I _mispoke_ on HyperTransport. But you didn't have to "talk down" to me like I don't have a background in the semiconductor industry. Don't be so ass-u-me'ing. -- Bryan J. Smith Professional, Technical Annoyance b.j.smith at ieee.org http://thebs413.blogspot.com ---------------------------------------------------- *** Speed doesn't kill, difference in speed does *** From greearb at candelatech.com Tue Feb 21 01:02:36 2006 From: greearb at candelatech.com (Ben Greear) Date: Mon, 20 Feb 2006 17:02:36 -0800 Subject: H8SSL-i: Can't get FC4-x86_64 to install. Message-ID: <43FA66AC.8000304@candelatech.com> Hello! I am trying to install FC4-x86_64 on an H8SSL-i motherboard. The boot process crashes during the initial kernel boot with a panic about not being able to find the root file system. I was able to get the 32-bit version installed using the HT1000 MMIO driver from SuperMicro's web site, but the 64-bit version does not boot far enough to let me use the driver.... FC5-rc2-x86-64 will boot, but does not appear to (fully) support the HT1000 because it complains it can find no hard drives. Has anyone been able to get this working? If so, I'd love to hear how... Thanks, Ben -- Ben Greear Candela Technologies Inc http://www.candelatech.com From DAVID.C.MCGUFFEY at saic.com Mon Feb 27 16:25:15 2006 From: DAVID.C.MCGUFFEY at saic.com (Mcguffey, David C) Date: Mon, 27 Feb 2006 11:25:15 -0500 Subject: FC on ASUS SLI Deluxe for AMD64 X2 Message-ID: <5B8C5F3796BB11489C2AB2A5EA76380B66754C@1297-b043-exs01.iisbu.saic.com> Just finished building a dual boot workstation using the ASUS SLI Deluxe, an AMD64 X2 4200, 4 GB of RAM, two 250 GB SATA in RAID 0 using the onboard Nvida raid contoller, and an ATI Radeon X850 XT PCI-E video card. Based on prior guidance to install Windows first on a dual-boot system, Win XP 64 bit is installed and running A-OK on a 250 GB partition. Next step is to install FC4 or FC5 (preferably FC5) on the remaining 250 GB of unpartitioned space. Anything I need to be aware of before I start this process? Win XP 64 required a special Nvida RAID driver as part of the install process. Will FC4/5 recognize the existing Nvida stripe RAID, or will I have to do something special during the install? Dave McGuffey Principal Information Assurance Engineer / ISSE SAIC IISBU, Columbia, MD From bdevouge at redhat.com Tue Feb 28 15:51:40 2006 From: bdevouge at redhat.com (Boris Devouge) Date: Tue, 28 Feb 2006 15:51:40 +0000 Subject: FC on ASUS SLI Deluxe for AMD64 X2 In-Reply-To: <5B8C5F3796BB11489C2AB2A5EA76380B66754C@1297-b043-exs01.iisbu.saic.com> References: <5B8C5F3796BB11489C2AB2A5EA76380B66754C@1297-b043-exs01.iisbu.saic.com> Message-ID: <4404718C.3060203@redhat.com> Mcguffey, David C wrote: > Just finished building a dual boot workstation using the ASUS SLI Deluxe, an > AMD64 X2 4200, 4 GB of RAM, two 250 GB SATA in RAID 0 using the onboard > Nvida raid contoller, and an ATI Radeon X850 XT PCI-E video card. > > Based on prior guidance to install Windows first on a dual-boot system, Win > XP 64 bit is installed and running A-OK on a 250 GB partition. > > Next step is to install FC4 or FC5 (preferably FC5) on the remaining 250 GB > of unpartitioned space. > > Anything I need to be aware of before I start this process? Win XP 64 > required a special Nvida RAID driver as part of the install process. Will > FC4/5 recognize the existing Nvida stripe RAID, or will I have to do > something special during the install? 
> > Dave McGuffey > Principal Information Assurance Engineer / ISSE > SAIC IISBU, Columbia, MD > As far as I am aware, the nvidia sata raid is one of those false hardware raid ( really a software raid ) and it does require a special driver ( windows only ) to pick up the logical volume information defined on the array. More information is available there: http://linuxmafia.com/faq/Hardware/sata.html#nvidia On FC/RHEL, the system will see 2 separate drives instead of the raid 0 you defined using the raid bios utility. So you can use software raid but it will hose your window installation. Hope this helps. -- Boris Devouge direct: +44 1483 739617 EMEA Pre-Sales Technical Engineer office: +44 1483 300169 Red Hat UK mobile: +44 7921 700937 10 Alan Turing Road, Surrey Research Park, Guildford GU2 7YF From eugen at leitl.org Tue Feb 28 16:15:36 2006 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 28 Feb 2006 17:15:36 +0100 Subject: budget AMD64 for a server Message-ID: <20060228161536.GO25017@leitl.org> One of my old (1.2 GHz) Athlon XP machines in the rack died, so I'm looking for a minimal-price AMD64 based server substitute. Any suggestions for something stable and cheap, reasonably performant, no-frills but for 2x GBit Ethernet onboard ports? (Latter even not strictly required). -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From lamont at gurulabs.com Tue Feb 28 16:51:00 2006 From: lamont at gurulabs.com (Lamont R. Peterson) Date: Tue, 28 Feb 2006 09:51:00 -0700 Subject: FC on ASUS SLI Deluxe for AMD64 X2 In-Reply-To: <4404718C.3060203@redhat.com> References: <5B8C5F3796BB11489C2AB2A5EA76380B66754C@1297-b043-exs01.iisbu.saic.com> <4404718C.3060203@redhat.com> Message-ID: <200602280951.00651.lamont@gurulabs.com> On Tuesday 28 February 2006 08:51am, Boris Devouge wrote: > Mcguffey, David C wrote: > > Just finished building a dual boot workstation using the ASUS SLI Deluxe, > > an AMD64 X2 4200, 4 GB of RAM, two 250 GB SATA in RAID 0 using the > > onboard Nvida raid contoller, and an ATI Radeon X850 XT PCI-E video card. > > > > Based on prior guidance to install Windows first on a dual-boot system, > > Win XP 64 bit is installed and running A-OK on a 250 GB partition. > > > > Next step is to install FC4 or FC5 (preferably FC5) on the remaining 250 > > GB of unpartitioned space. > > > > Anything I need to be aware of before I start this process? Win XP 64 > > required a special Nvida RAID driver as part of the install process. > > Will FC4/5 recognize the existing Nvida stripe RAID, or will I have to do > > something special during the install? > > > > Dave McGuffey > > Principal Information Assurance Engineer / ISSE > > SAIC IISBU, Columbia, MD > > As far as I am aware, the nvidia sata raid is one of those false > hardware raid ( really a software raid ) and it does require a special > driver ( windows only ) to pick up the logical volume information > defined on the array. No, it's not windows only. I have a co-worker who has an nVidia Fake-RAID (a.k.a. FRAID) equipped motherboard who has been using the dmraid stuff in FC5 test releases. 
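To see what dmraid makes of the controller's metadata, something like this works from a rescue environment or an installed system (run as root; a sketch, assuming the dmraid package is present):

  dmraid -r                                # list block devices carrying vendor RAID metadata;
                                           # nVidia sets report the "nvidia" format
  dmraid -s                                # summarize the discovered RAID sets and their state
  dmraid -ay                               # activate them; the striped set then appears under
                                           # /dev/mapper/ instead of as two plain SATA disks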
There have been some problems (see the thread: [ http://www.redhat.com/archives/rhl-devel-list/2006-February/msg01007.html ]), but it's getting closer. > More information is available there: > http://linuxmafia.com/faq/Hardware/sata.html#nvidia I didn't have time to look at this, so I have no comment on it. > On FC/RHEL, the system will see 2 separate drives instead of the raid 0 > you defined using the raid bios utility. So you can use software raid > but it will hose your window installation. I can not comment authoritatively, as I do not have one of these FRAIDs. However, I do not think this statement is accurate if you are using the dmraid support, but it probably is in the described setup. > Hope this helps. Me too. -- Lamont R. Peterson Senior Instructor Guru Labs, L.C. [ http://www.GuruLabs.com/ ] GPG Key fingerprint: F98C E31A 5C4C 834A BCAB 8CB3 F980 6C97 DC0D D409 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 191 bytes Desc: not available URL: From bill at cse.ucdavis.edu Tue Feb 28 17:01:56 2006 From: bill at cse.ucdavis.edu (Bill Broadley) Date: Tue, 28 Feb 2006 09:01:56 -0800 Subject: budget AMD64 for a server In-Reply-To: <20060228161536.GO25017@leitl.org> References: <20060228161536.GO25017@leitl.org> Message-ID: <20060228170156.GA14610@cse.ucdavis.edu> On Tue, Feb 28, 2006 at 05:15:36PM +0100, Eugen Leitl wrote: > > One of my old (1.2 GHz) Athlon XP machines in the rack died, > so I'm looking for a minimal-price AMD64 based server > substitute. Tier-1? Or not? If Tier-1, I'm reasonably happy with my $745 Sun x2100. It has a pci-e slot, I had to add a SATA disk ($50), to make it usable. It does have 2 GigE and a 3 year warranty. Pretty nice airflow, fairly short chassis (for a 1U), has redundant trivial to replace CPU fans (i.e. 5 seconds). I'm seen similar from many other vendors, but usually with pci-x instead of pci-e. -- Bill Broadley Computational Science and Engineering UC Davis From eugen at leitl.org Tue Feb 28 17:53:04 2006 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 28 Feb 2006 18:53:04 +0100 Subject: budget AMD64 for a server In-Reply-To: <20060228170156.GA14610@cse.ucdavis.edu> References: <20060228161536.GO25017@leitl.org> <20060228170156.GA14610@cse.ucdavis.edu> Message-ID: <20060228175304.GQ25017@leitl.org> On Tue, Feb 28, 2006 at 09:01:56AM -0800, Bill Broadley wrote: > Tier-1? Or not? Not. Just a vanilla ATX 939 socket board, with reasonable performance, no consumer cruft and as few movable parts as possible. The chassis is 4 U, so plenty of space there. > If Tier-1, I'm reasonably happy with my $745 Sun x2100. It has a pci-e > slot, I had to add a SATA disk ($50), to make it usable. It does have > 2 GigE and a 3 year warranty. I have a X2100 with 4 GBytes and 2x250 SATA drives in a RAID 1, running Sarge (debian amd64) and a kernel with vserver patches, not in the rack yet. Very nice machine -- don't have the budget for another one, right now. I figure 200-300 EUR budget should be enough for a motherboard and an Athlon 64 CPU. > Pretty nice airflow, fairly short chassis (for a 1U), has redundant That's a good point -- my rack is reasonably shallow, and won't fit say, a V20Z. > trivial to replace CPU fans (i.e. 5 seconds). > > I'm seen similar from many other vendors, but usually with pci-x instead > of pci-e. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: