<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
  <meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
Pretty much all of the fiber ones, which are about all I use, are
64-bit/66MHz.  One good one that I've found works well with Linux is
Amer.com's C1000SX.  It is available here:<br>
<br>
<a class="moz-txt-link-freetext" href="http://www.amer.com/catalogue/ac1000sx.html">http://www.amer.com/catalogue/ac1000sx.html</a><br>
<br>
Another one is the Intel Pro/1000SX card, which is 64-bit/66MHz.  These
have been around for a long time, and I have used them in many a
GNU/Linux box.  Just do a search on <a class="moz-txt-link-abbreviated" href="http://www.pricewatch.com">www.pricewatch.com</a> using the terms
"intel gigabit fiber"; I find them reasonably priced.<br>
<br>
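If you want to double-check what a given card reports once it's in the
box, lspci can help.  Here's a rough Python sketch of mine, assuming
lspci from pciutils is installed; 66MHz-capable conventional PCI
devices should show a "66MHz+" flag on their Status line, though bus
width isn't reported there, so check the card's datasheet for 64-bit
support.<br>
<pre>
#!/usr/bin/env python
# Sketch only: print each Ethernet controller lspci sees, plus its PCI
# "Status" line; 66MHz-capable conventional PCI devices show "66MHz+".
# Assumes lspci (from pciutils) is installed; run as root for full detail.
import subprocess

out = subprocess.Popen(["lspci", "-vv"], stdout=subprocess.PIPE,
                       universal_newlines=True).communicate()[0]

device = None
for line in out.splitlines():
    if line and not line[0].isspace():
        # Header lines name the device; remember only Ethernet controllers.
        device = line if "Ethernet controller" in line else None
    elif device and "Status:" in line:
        print(device)
        print("    " + line.strip())
</pre>
<br>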
As for copper cards, those are typically integrated into the
motherboard nowadays, even on client box models.  That's a question for
the motherboard manufacturer--on which bus, and at what bus width and
clock speed, are their integrated 10/100/1000 network interface(s)
attached?  Tyan and MSI are both known to put the copper Gig-E
interface either on the PCI-32 bus (lower-end mobos) or, in Tyan's
case, also on the PCI-X bus (higher end, co$t$ more).  If it's the
latter, you're in good shape.  If it's the former, but you're running
your other Gig-E cards on the PCI-X bus, then other than IDE hard disk
contention (also on the PCI-32 bus, but not usually constant), you'll
probably be fine.  Of course, I'd be looking at SATA or SCSI hardware
RAID on PCI-X if the budget allows for it.<br>
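<br>
To put some rough numbers behind that, here's the back-of-the-envelope
math as a quick Python sketch (theoretical peak figures only;
real-world throughput will be lower, and anything else sharing the bus
eats into it too):<br>
<pre>
# Rough bus-vs-NIC arithmetic, theoretical peaks only.
GIGE_MB_S = 1000 / 8.0   # one Gig-E NIC at wire speed: 125 MB/s

buses = [
    ("PCI   32-bit/33MHz", 32 * 33.33 / 8),   # ~133 MB/s
    ("PCI   32-bit/66MHz", 32 * 66.66 / 8),   # ~266 MB/s
    ("PCI-X 64-bit/66MHz", 64 * 66.66 / 8),   # ~533 MB/s
]

for name, mb_s in buses:
    print("%s  ~%3d MB/s  -> room for about %d Gig-E card(s) at full blast"
          % (name, mb_s, int(mb_s // GIGE_MB_S)))
</pre>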
<br>
--TP<br>
<div class="moz-signature">_______________________________
<br>
Do you GNU!?
<br>
<a href="http://www.gnu.org/">Microsoft Free since 2003</a>--the
ultimate antivirus protection!
<br>
</div>
<br>
<br>
Petre Scheie wrote:
<blockquote cite="mid45C36A05.1070808@maltzen.net" type="cite"><br>
  <br>
Terrell Prudé Jr. wrote:
  <br>
  <blockquote type="cite">Robert Arkiletian wrote:
    <br>
    <blockquote type="cite">On 1/31/07, Robert Arkiletian
<a class="moz-txt-link-rfc2396E" href="mailto:robark@gmail.com"><robark@gmail.com></a> wrote:
      <br>
      <blockquote type="cite">On 1/31/07, Petre Scheie
<a class="moz-txt-link-rfc2396E" href="mailto:petre@maltzen.net"><petre@maltzen.net></a> wrote:
        <br>
        <blockquote type="cite">Terrell Prudé Jr. wrote:
          <br>
          <blockquote type="cite">Robert Arkiletian wrote:
            <br>
            <blockquote type="cite">On 1/29/07, Joseph Bishay
<a class="moz-txt-link-rfc2396E" href="mailto:joseph.bishay@gmail.com"><joseph.bishay@gmail.com></a> wrote:
              <br>
              <blockquote type="cite">Hello,
                <br>
                <br>
I hope you are doing well.
                <br>
                <br>
Thank you all for the comprehensive reply!
                <br>
                <br>
Once I started reading your email, I realized that probably the
                <br>
best way to proceed was to work with
the idea of NIC Bonding or port
                <br>
trunking.  I have a surplus of Gigabit cards so I could put 3 in a
                <br>
server (reading online I found that more than 3 wasn't going to
                <br>
give enough of an improvement due to
the PCI bus limitations -- can
                <br>
anyone validate this?) and then send all
3 of those to the switch. I
                <br>
could then bond 3 ports from that
switch to the next one (we'll probably
                <br>
have 2 x 48-port gigabit switches for the whole building -- still
                <br>
counting the number of ports/computers
required) so as to deal with the
                <br>
bandwidth.  The cost of some of those fiber <-> copper converters
                <br>
looks rather daunting.
                <br>
                <br>
I would VERY MUCH prefer to use only 1 server for the entire
                <br>
building -- I am still very much a novice
at this and the complexities of
                <br>
setting up multiple servers or splitting into application &
                <br>
/home with LDAP sounds rather daunting.
                <br>
                <br>
              </blockquote>
If you're still set on one server, also have a look at this
              <br>
<a class="moz-txt-link-freetext" href="http://k12ltsp.org/mediawiki/index.php/Technical:Subnetting">http://k12ltsp.org/mediawiki/index.php/Technical:Subnetting</a>
              <br>
Instead of port trunking I think this would be a better idea.
              <br>
Especially if you are going to have two 48-port switches that
              <br>
could be on different gigabit linked
subnets.
              <br>
            </blockquote>
Hmm...I hadn't thought of that particular application
            <br>
myself--addressing bandwidth bottlenecks--but you're right, that sure
would do it!  That never even occurred to me...thanks!
            <br>
            <br>
--TP
            <br>
          </blockquote>
I recall reading somewhere that three gigabit cards are probably the
          <br>
max that the PCI bus can handle.  Can anyone confirm or deny
this?
          <br>
        </blockquote>
No. A gigabit card is 1 Gigabit/s (that's 1 billion bits per second).
        <br>
Each byte is 8 bits, so it maxes out at 125MB/s. A simple PCI bus can
        <br>
handle 133MB/s max, so 1 gigabit Ethernet card can saturate a PCI bus.
        <br>
      </blockquote>
Correction:
      <br>
The PCI 2.2 spec is 32 bits at 66MHz, which equals 266MB/s.  So 2 gigabit
      <br>
NICs should be able to saturate it. The original PCI bus was 32 bits at
      <br>
33MHz, which is 133MB/s.
      <br>
      <br>
    </blockquote>
    <br>
True, but if your PCI bus is 64 bits at 66MHz (i.e., PCI-X), then you're
    <br>
fine, as you then have 532MB/s.  I've always been sure to buy 64-bit,
    <br>
66MHz NICs for this reason.  Same with RAID cards; PCI-X whenever
possible.
    <br>
    <br>
  </blockquote>
What brand of 64-bit NIC are you buying for this purpose?  Where do you
get them?
  <br>
  <br>
Petre
  <br>
  <br>
_______________________________________________
  <br>
K12OSN mailing list
  <br>
<a class="moz-txt-link-abbreviated" href="mailto:K12OSN@redhat.com">K12OSN@redhat.com</a>
  <br>
<a class="moz-txt-link-freetext" href="https://www.redhat.com/mailman/listinfo/k12osn">https://www.redhat.com/mailman/listinfo/k12osn</a>
  <br>
For more info see <a class="moz-txt-link-rfc2396E" href="http://www.k12os.org"><http://www.k12os.org></a>
  <br>
</blockquote>
</body>
</html>