Gigabit Ethernet, what speed should it be really?

  • #31
    command_liner
    Senior Member
    • May 2009
    • 1175

    Run a few tests and you will find all sorts of interesting cache and window
    bottlenecks. A while back I did extensive testing on RAID systems to learn
    actual performance of the installed systems. Real performance varied by
    a factor of one thousand! Unless you have some knowledge of the
    underlying file system IO parameters, you cannot measure any real network
    traffic speeds for file copy. Note that on Windows systems, IO performance
    drops off by a factor of 100 as you approach fullness on the file system.
    Checking performance between 90% and 99% full will give you meaningless
    numbers. Even above 80% it starts to get bad.

    Use dd if=/dev/zero of=some_file bs=x count=y for a wide variety of x and y.
    Then do the inverse test of reading.
    Then do the same sort of test with nc to a remote system -- no disks.
    Then do the same with a combination of nc and dd to test real
    IO and network speeds combined.
    Finally, do the 4-way test with dd and nc at both ends, with and without
    real files, using a variety of block sizes at both the dd and the nc
    level, and vary the transmission protocol.

    For comparison, use NFS and use dd for remote mounted systems.
    Then use a few tests with RAMdisks.

    All this is really easy to do on Unix, Linux, and OS X systems, and not
    so bad under Windows using SFU.
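The ladder of tests described above can be sketched in the shell. This is a minimal sketch: some_file, received_file, and receiver_host are placeholder names, the sizes are kept small, and conv=fdatasync assumes GNU dd; scale bs and count up for real measurements.

```shell
# 1. Raw write speed for one block-size choice (fdatasync flushes to disk
#    so you measure the disk, not the page cache):
dd if=/dev/zero of=some_file bs=1M count=64 conv=fdatasync

# 2. The inverse test: raw read speed of the same file:
dd if=some_file of=/dev/null bs=1M

# 3. Network only, no disks -- on the receiving host run:
#      nc -l 9999 > /dev/null
#    then on the sending host:
#      dd if=/dev/zero bs=1M count=64 | nc receiver_host 9999

# 4. Disk and network combined -- read a real file through the pipe,
#    and on the receiver write it back out with dd:
#      dd if=some_file bs=1M | nc receiver_host 9999
#      nc -l 9999 | dd of=received_file bs=1M

rm -f some_file
```

Repeat with bs=4k, 64k, and 1M on each side of the pipe, and with nc -u for UDP, to map out the block-size and protocol combinations.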
    What about the 19th? Can the Commerce Clause be used to make it illegal for voting women to buy shoes from another state?


    • #32
      FluorideInMyWater
      Senior Member
      • May 2006
      • 1840

      Originally posted by bbguns44
      "cat6 is not capable of handling gigabyte traffic. u need fiber for it."

      Wrong. Cat 6 runs Gb with no problem. Any limitations are due to
      slow software.
      No, not slow software. We are talking about the OSI physical and
      data-link layers here; the application layer is way up the chain. The
      hardware manages the physical transfer of the data and has its own
      firmware installed on it.

      As far as the chart goes, I just looked for some data and threw it up
      there. Could be wrong; if so, my apologies.

      It can also depend on your read/write time on a HD. If you're operating
      in a server environment with NAS and other high-end RAID devices where
      you have hundreds of disks, your read/write/transfer performance is
      going to go up dramatically, but you still probably won't get the true
      benefit without fiber, unless you start teaming NICs together.

      In my environment I used SolarWinds to do benchmark tests on data
      transfer through Cat5 and Cat6 runs, and they just never compared
      anywhere close to fiber. This is of course IME.

      But just because you set your NIC to gigabit/full-duplex does not mean
      you're going to get it, especially with low-end products. If you're
      running Cisco routers and switches, then yes, but with D-Link, Linksys
      (yes, owned by Cisco) and others you're not going to get it,
      realistically... IMO/IME.
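One quick way to see what a NIC actually negotiated, as opposed to what was configured, is ethtool on Linux (eth0 here is a placeholder interface name; substitute your own):

```shell
# If autonegotiation against a cheap switch fell back to 100 Mb/s or
# half duplex, it shows up in these two lines:
ethtool eth0 | grep -E 'Speed|Duplex'
```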
      No longer FluorideInMyWater. (California)
      now the infamous "CalciumDepositsInMyWater" (Cancun)


      • #33
        blisster
        Senior Member
        • Jan 2008
        • 2337

        I've got a gig-e SAN built with a simple Cisco gigabit switch and Cat5e patch cables (no link aggregation) and I reliably see throughput in the >900Mbps range between disk arrays (Netapp filers) passing NFS traffic. iSCSI is *slightly* lower but still in the >850Mbps range.


        Your primary bottleneck is almost ALWAYS going to be your HDD/storage in a traditional computing environment.
        Last edited by blisster; 07-17-2010, 10:33 AM.
        Originally posted by Edward Abbey
        A patriot must always be ready to defend his country against his government.
        Originally posted by W.H. Auden
        I and the public know
        What all schoolchildren learn,
        Those to whom evil is done
        Do evil in return.


        • #34
          FluorideInMyWater
          Senior Member
          • May 2006
          • 1840

          Originally posted by blisster
          I've got a gig-e SAN built with a simple Cisco gigabit switch and Cat5e patch cables (no link aggregation) and I reliably see throughput in the >900Mbps range between disk arrays (Netapp filers) passing NFS traffic. iSCSI is *slightly* lower but still in the >850Mbps range.


          Your primary bottleneck is almost ALWAYS going to be your HDD/storage in a traditional computing environment.
          Yeah, NetApp rocks. I had a bunch of FAS380s, I think. Great throughput.


          • #35
            blisster
            Senior Member
            • Jan 2008
            • 2337

            Originally posted by FluorideInMyWater
            Yeah, NetApp rocks. I had a bunch of FAS380s, I think. Great throughput.
            Yeah, we've just bought into the tech and went with FAS2020s (the entry-level models) with SATA arrays. We're pushing the limits of the SATA array at this point and will probably move to SAS shelves for our VM and messaging environment next year.


            • #36
              FluorideInMyWater
              Senior Member
              • May 2006
              • 1840

              Originally posted by blisster
              Yeah, we've just bought into the tech and went with FAS2020s (the entry-level models) with SATA arrays. We're pushing the limits of the SATA array at this point and will probably move to SAS shelves for our VM and messaging environment next year.
              Great idea.

              I had 10 Fujitsu-Siemens servers with (8) quad-core processors that ran VMware. Then we ran the actual data/files on the NetApps. It was amazing. But we also had 12 NetApp heads and about 600 disks in all. But lemme tell ya... YEEEEHAAAAAA!!! All of them fault-tolerant with quad fiber bundles on them. That was some serious $$$ tho


              • #37
                jmlivingston
                Moderator Emeritus
                CGN Contributor - Lifetime
                • Oct 2005
                • 5095

                The shop I'm working in these days is running a variety of IBM and HP blade servers, mostly booting from NetApp SAN over 4Gb Fibre Channel. I'm currently working on a project to migrate this to 10Gb FCoE with Nexus 5020 switches; we're just going to skip right over 8Gb FC.

                We love our Netapps.


                • #38
                  DaveInOroValley
                  CGN/CGSSA Contributor
                  CGN Contributor
                  • Jan 2010
                  • 8967

                  I have a non-Windows setup, and across my gigabit network I can copy a 6GB file in approximately 4 minutes, using Cat5 and a gigabit switch.
                  NRA Life Member

                  Vet since 1978

                  "Don't bother me with facts, Son. I've already made up my mind." -Foghorn Leghorn


                  • #39
                    FluorideInMyWater
                    Senior Member
                    • May 2006
                    • 1840

                    Originally posted by jmlivingston
                    The shop I'm working in these days is running a variety of IBM and HP blade servers, mostly booting from NetApp SAN over 4Gb Fibre Channel. I'm currently working on a project to migrate this to 10Gb FCoE with Nexus 5020 switches; we're just going to skip right over 8Gb FC.

                    We love our Netapps.
                    Yeah, NetApp really got it right. But heck, it's just a Unix FS... so... not surprised.


                    • #40
                      jmlivingston
                      Moderator Emeritus
                      CGN Contributor - Lifetime
                      • Oct 2005
                      • 5095

                      Originally posted by daveinwoodland
                      I have a non-Windows setup, and across my gigabit network I can copy a 6GB file in approximately 4 minutes, using Cat5 and a gigabit switch.
                      That's not very good network performance. 6GB is 6,000,000,000 bytes, which is 48,000,000,000 bits, and there are 240 seconds in 4 minutes, so you're getting barely a fifth of gigabit speed on your transfers.
                      48,000,000,000 bits / 240 seconds = 200,000,000 bits/sec, or 200 Mbit/sec

                      Some storage manufacturers insist on using 1024 in their measurements, so let's look at it that way too. Even then, the picture hardly changes. In that case 6GB = 6,442,450,944 bytes, or 51,539,607,552 bits.
                      51,539,607,552 bits / 240 seconds ≈ 215,000,000 bits/sec, or about 215 Mbit/sec
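The byte-to-bit conversion is where these back-of-the-envelope numbers usually go wrong, and the arithmetic is easy to sanity-check in the shell (6GB file, 240 seconds, as above):

```shell
bytes=6000000000                   # 6 GB, decimal
secs=240                           # 4 minutes
echo "$(( bytes * 8 / secs )) bits/sec"      # 200000000, i.e. 200 Mbit/sec

gib_bytes=$(( 6 * 1024 * 1024 * 1024 ))      # 6 GiB, binary
echo "$(( gib_bytes * 8 / secs )) bits/sec"  # 214748364, i.e. ~215 Mbit/sec
```

Either way the copy used well under a quarter of a gigabit link's roughly 940 Mbit/sec of usable TCP throughput, which points at the disks rather than the wire.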


                      • #41
                        FluorideInMyWater
                        Senior Member
                        • May 2006
                        • 1840

                        Originally posted by jmlivingston
                        That's not very good network performance. 6GB is 6,000,000,000 bytes, which is 48,000,000,000 bits, and there are 240 seconds in 4 minutes, so you're getting barely a fifth of gigabit speed on your transfers.
                        48,000,000,000 bits / 240 seconds = 200,000,000 bits/sec, or 200 Mbit/sec

                        Some storage manufacturers insist on using 1024 in their measurements, so let's look at it that way too. Even then, the picture hardly changes. In that case 6GB = 6,442,450,944 bytes, or 51,539,607,552 bits.
                        51,539,607,552 bits / 240 seconds ≈ 215,000,000 bits/sec, or about 215 Mbit/sec
                        Yeah, because it's all binary, so always use 1024 in any computer calculation.


                        • #42
                          odysseus
                          I need a LIFE!!
                          • Dec 2005
                          • 10407

                          Originally posted by FluorideInMyWater
                          that was some serious $$$ tho
                          Y E S.
                          "Just leave me alone, I know what to do." - Kimi Raikkonen

                          "The moment the idea is admitted into society, that property is not as sacred as the laws of God, and that there is not a force of law and public justice to protect it, anarchy and tyranny commence," and "Property is surely a right of mankind as real as liberty."
                          - John Adams

                          http://www.usdebtclock.org/

