
Gigabit Ethernet, what speed should it be really?


virulosity
07-11-2010, 11:43 AM
I just bought a second computer, both of my machines have Gigabit network adapters. So I went and bought a 5 port gigabit switch and a pair of CAT6 cables to see what kind of transfer speeds I could get. I am only getting 11MB/s when transferring files. If I multiply by 8, this is only 88Mb/s, less than a 10/100 connection. So what gives?

armygunsmith
07-11-2010, 11:47 AM
I suppose you might be limited by the speed at which your drive can perform write operations.

spsellars
07-11-2010, 11:54 AM
Just because your NIC can transfer at 1Gbps doesn't mean your hard drives can. Ideally you should be at ~90% transfer speed of whatever the slowest drive in the chain is capable of.

virulosity
07-11-2010, 12:00 PM
Both machines have SATA 150 connections for the hard disks. If I do a file copy on one of the machines I get over 80MB/s.

virulosity
07-11-2010, 12:04 PM
OK, I did a copy on the other machine and that is the problem: it's only 11MB/s write speed. I guess that solves it. Aren't SSDs slower at writing than conventional platter HDs?

spsellars
07-11-2010, 12:26 PM
Both machines have SATA 150 connections for the hard disks.

Unfortunately, what the controller and cables are capable of has nothing to do with what the hard drive itself can do. (Your drive is not even coming close to saturating your SATA controller; that's why most motherboards get away with splitting that bandwidth between all drives on that controller.)

Aren't SSDs slower at writing than conventional platter HDs?

Entirely depends on which drives you're comparing and how they're set up. That said, many early flash SSDs had notoriously poor small random-write performance, in some cases worse than a platter drive, because of erase-block overhead. Newer SSDs and hybrid drives alleviate that somewhat.

virulosity
07-11-2010, 12:38 PM
It's interesting to me that these standards were designed with such a big margin over the performance of the drives. Or are my drives just slow? I guess there are SCSI or RAID systems out there that can fill the gap?

odysseus
07-11-2010, 1:15 PM
It's interesting to me that these standards were designed with such a big margin over the performance of the drives. Or are my drives just slow? I guess there are SCSI or RAID systems out there that can fill the gap?

Gigabit Ethernet was not designed strictly for home-network copies to your IDE (then) or SATA (now) PC drive. As you allude to, years ago when it first appeared in network products it was used as back-end aggregation for 100Mb switches. Servers also use it for multi-node access, where not everything depends on disk I/O, even when the disk I/O is very fast RAID striping on a very fast bus architecture. iSCSI has made use of it as well, which has pushed the 10Gb and 40Gb Ethernet standards to compete with gigabit Fibre Channel.

However, it is still a big improvement over 100Mb speeds for the home; I have been on gig-E for a long time. The issue this conversation is arriving at is that from the NIC all the way to the head writing on the platter, there are layers from the application down to the physical bus that can inhibit speed. Since 100Mb Ethernet only moves around 12MB/s (100 Mbit/s divided by 8 is 12.5 MB/s, before overhead), you need 1000Mb to reach the upper limits of your PC disks' I/O.

JDay
07-11-2010, 2:55 PM
Both machines have Sata 150 connections for hard disks, If i do a file copy on one of the machines I get over 80MB/s.

Those are first generation SATA drives, not exactly fast. You should upgrade to SATA-II (3 Gb/s) or SATA-III (6 Gb/s). Your drives probably don't even support NCQ (http://en.wikipedia.org/wiki/Native_command_queuing). You should also make sure you're running them in AHCI mode and not IDE emulated mode. The setting for this is in your BIOS; however, if it is not already set to AHCI you'll have to reinstall your OS when you change it.
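
If you want to double-check whether AHCI and NCQ are actually active, one roundabout but easy way is to boot a Linux live CD and see what the kernel reports. This is just a sketch; /dev/sda is only an example device name:

dmesg | grep -i ahci
hdparm -I /dev/sda | grep -i -e queue -e ncq

hdparm -I lists the drive's queue depth and whether Native Command Queueing is among the supported features.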

Corbin Dallas
07-11-2010, 3:01 PM
Those are first generation SATA drives, not exactly fast. You should upgrade to SATA-II (3 Gb/s) or SATA-III (6 Gb/s). Your drives probably don't even support NCQ (http://en.wikipedia.org/wiki/Native_command_queuing). You should also make sure you're running them in AHCI mode and not IDE emulated mode. The setting for this is in your BIOS; however, if it is not already set to AHCI you'll have to reinstall your OS when you change it.

This ^^^^


Also, you mentioned your router and Ethernet cards, but what about your modem? Is it gigabit capable?

virulosity
07-11-2010, 3:05 PM
My gateway isn't gigabit, but the switch that these devices are attached to (downstream of gateway) is gigabit. I will check into the AHCI setting. Thanks :)

FluorideInMyWater
07-11-2010, 3:06 PM
cat6 is not capable of handling gigabyte traffic. u need fiber for it.
check the listing below

HSDPA (cat.6) conversion chart
Data transfer rate / computer connection speed
One HSDPA (cat.6) link is equal to:

Bit-based transfer rate units:
  terabit per second   3.6 x 10^-6
  gigabit per second   0.0036
  megabit per second   3.6
  kilobit per second   3,600
  bit per second       3,600,000

Byte-based transfer rate units:
  terabyte per second  4.092726 x 10^-7
  gigabyte per second  0.0004190952
  megabyte per second  0.4291534
  kilobyte per second  439.4531
  byte per second      450,000

Stockton
07-11-2010, 3:23 PM
We have several customers using gig e via cat6 circuits. It's capable.

odysseus
07-11-2010, 3:37 PM
cat6 is not capable of handling gigabyte traffic. u need fiber for it.
check the listing below

You need to expand on that. There are different standards for what counts as "Gig E", and one of them (1000BASE-T) runs over Cat5e and Cat6 copper. True throughput and run lengths are certainly better on the fiber-optic standards.

https://secure.wikimedia.org/wikipedia/en/wiki/Gigabit_Ethernet

bbguns44
07-11-2010, 3:39 PM
"cat6 is not capable of handling gigabyte traffic. u need fiber for it."

Wrong. Cat 6 runs Gb with no problem. Any limitations are due to
slow software.

bigmike82
07-11-2010, 3:41 PM
"cat6 is not capable of handling gigabyte traffic. u need fiber for it."
Erm, no? Try again. 1000Base-T is a standard for Cat5, Cat5e AND Cat6.

Hell, Cat6 is even capable of 10Gig speeds.

sfwdiy
07-11-2010, 3:50 PM
Those are first generation SATA drives, not exactly fast. You should upgrade to SATA-II (3 Gb/s) or SATA-III (6 Gb/s). Your drives probably don't even support NCQ (http://en.wikipedia.org/wiki/Native_command_queuing). You should also make sure you're running them in AHCI mode and not IDE emulated mode. The setting for this is in your BIOS; however, if it is not already set to AHCI you'll have to reinstall your OS when you change it.

Good link on native command queueing. I learn something new every day.

--B

JDay
07-11-2010, 3:52 PM
Also, you mentioned your router and Ethernet cards, but what about your modem? Is it gigabit capable?

That doesn't matter; he's not going to get gigabit Internet speeds anyway. The ports on the switch should auto-sense the Ethernet speed of the modem and set themselves accordingly.

JDay
07-11-2010, 3:56 PM
cat6 is not capable of handling gigabyte traffic. u need fiber for it.
check the listing below

HSDPA (cat.6) conversion chart

What does this have to do with HSDPA (3G Data)? Category 6 cable can handle up to 10 Gbit/sec.

http://en.wikipedia.org/wiki/Cat_6#Category_6a

Category 6 cable, commonly referred to as Cat-6, is a cable standard for Gigabit Ethernet and other network protocols that are backward compatible with the Category 5/5e and Category 3 cable standards. Compared with Cat-5 and Cat-5e, Cat-6 features more stringent specifications for crosstalk and system noise. The cable standard provides performance of up to 250 MHz and is suitable for 10BASE-T, 100BASE-TX (Fast Ethernet), 1000BASE-T / 1000BASE-TX (Gigabit Ethernet) and 10GBASE-T (10-Gigabit Ethernet). Category 6 cable has a reduced maximum length when used for 10GBASE-T; Category 6a cable, or Augmented Category 6, is characterized to 500 MHz and has improved alien crosstalk characteristics, allowing 10GBASE-T to be run for the same distance as previous protocols. Category 6 cable can be identified by the printing on the side of the cable sheath.[1]

jmlivingston
07-11-2010, 4:16 PM
I just bought a second computer, both of my machines have Gigabit network adapters. So I went and bought a 5 port gigabit switch and a pair of CAT6 cables to see what kind of transfer speeds I could get. I am only getting 11MB/s when transferring files. If I multiply by 8, this is only 88Mb/s, less than a 10/100 connection. So what gives?

What are your ports set at for speed and duplex?

Many SOHO-grade switches aren't truly capable of handling gigabit line rates. You may very well be seeing memory constraints too; even if you don't see physical memory go to 100%, the OS or the NIC drivers may put artificial limits on how much memory a network transfer can be assigned. Windows desktop operating systems are also notoriously slow at large/bulk data transfers.

In the real world you'll rarely see true 100% sustained utilization on a gigabit Ethernet network; my experience has been that it generally tops out around 92-96%, and that's even on big Cisco gear (e.g. Catalyst 6509s). If you're running TCP traffic, don't forget to factor in latency and TCP window size (http://www.babinszki.com/Networking/Max-Ethernet-and-TCP-Throughput.html). High-speed TCP data transfers are hard to sustain when there's any latency involved.
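
Here's a rough worked example of why window size matters (assuming the classic 64KB window with no window scaling): a single TCP stream can carry at most one window per round trip, so the ceiling is roughly window size divided by RTT.

65,535 bytes x 8 = 524,280 bits per window
524,280 bits / 0.001 sec (1ms RTT) = ~524 Mbit/sec ceiling

Window scaling and/or parallel streams are what push that ceiling back up toward gigabit line rate.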

As others have already said, gigabit will run just fine on Cat5e cabling and does not need Cat6.

John

JDay
07-11-2010, 4:34 PM
Windows desktop operating systems are also notoriously slow at large/bulk data transfers.

That issue was fixed back in Vista SP1, I haven't noticed any network performance issues with Windows 7.

nick
07-11-2010, 5:02 PM
I just bought a second computer, both of my machines have Gigabit network adapters. So I went and bought a 5 port gigabit switch and a pair of CAT6 cables to see what kind of transfer speeds I could get. I am only getting 11MB/s when transferring files. If I multiply by 8, this is only 88Mb/s, less than a 10/100 connection. So what gives?

There are many other factors affecting the actual transfer. First of all, make sure the link you're getting is actually 1Gbps. Secondly, the switch may not be powerful enough to give you the transfer rate you're looking for. The TCP/IP stack in your OS may not be too efficient. The hardware on both ends might be affecting it. You might have some other activity going on that affects it (say, something else using the hard drive extensively on either end, desktop firewalls on both ends, etc.). Your cables may be way too long. You might have some external factor, such as EMI, affecting the transmission. You may have faulty NICs.

Once you've verified that the hardware is what it should be, I'd check for retransmissions and for unrelated activity on both ends. I'd also reset the TCP/IP stack, especially if you're using Windows; a bunch of the stuff you install adds to it, to the point that it becomes really inefficient.
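
On Windows that reset usually boils down to something like the following from a command prompt, followed by a reboot; the exact syntax has shifted a little between XP and Vista/7, so treat this as a sketch:

netsh int ip reset resetlog.txt
netsh winsock reset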

This is just something to start with.

nick
07-11-2010, 5:07 PM
Never mind, looks like the drive on the other machine was the issue. It helps to read the thread first :)

brianinca
07-11-2010, 6:01 PM
>>>
OK, I did a copy on the other machine and that is the problem: it's only 11MB/s write speed.
>>>

That drive is likely in the process of dying. When you get a huge reduction in read/write performance, the cause is frequently the drive being ready to kick the bucket. The system may still boot and eventually get going, but it will go TU soon enough. Drives are cheap; swap it out and clone the old one back.

As for SATA performance, I built up a SuperMicro server for my office back in '06, 8xSATA hotswap drives on the not-very-spendy MCP550Pro chipset from Nvidia, with first-gen 420GB WDC RE's in a RAID 1+0 config. I had two older servers I was consolidating, and saw sustained 58 MB/sec writes to that array copying both servers simultaneously.

On the other hand, contemporary Dell 2600's with 3xSCSI 10K rpm drives in a RAID5 array struggled to get 25 MB/sec, so lots of factors go into network performance.

It should be no problem these days to see 90+ MB/sec across the wire with DECENT quality inexpensive switches, RAID controllers, GB NICs and magnetic SATA drives.

Regards,
Brian in CA

jmlivingston
07-11-2010, 6:09 PM
That issue was fixed back in Vista SP1, I haven't noticed any network performance issues with Windows 7.

That's good to know. We're still a WinXP shop, getting ready to begin rolling out Win7 this month. I've had some test systems cross my desk mostly for testing wireless and vpn compatibility but that's it. While I have a Vista system here at home, I just don't have any needs for high-bandwidth transfers to see how it performs.

virulosity
07-12-2010, 7:44 AM
>>>
OK, I did a copy on the other machine and that is the problem: it's only 11MB/s write speed.
>>>

That drive is likely in the process of dying. When you get a huge reduction in read/write performance, the cause is frequently the drive being ready to kick the bucket.

Well, that is the new computer! I think the problem is that it's a notebook HD; the computer I got is a Lenovo Q150. Also, it's running XP SP3, so there may be some inefficiency there, as mentioned before.

brianinca
07-12-2010, 9:05 AM
>>>
Well, that is the new computer! I think the problem is that it's a notebook HD; the computer I got is a Lenovo Q150. Also, it's running XP SP3, so there may be some inefficiency there, as mentioned before.
>>>>

No way. Something is wrong. How are you measuring your read/write performance?

Regards,
Brian in CA

danito
07-13-2010, 9:36 PM
When troubleshooting I would check the easy stuff first: speed AND duplex should be hard-set to 1000/full. It is not really good practice to rely on the auto-detect settings on the NIC. But the OP does touch on an interesting subject. I use iperf for bandwidth testing and I have never seen a Windows machine that even comes close to actual Gb throughput. I suspect this might have something to do with an IPv4 convention.

jmlivingston
07-13-2010, 10:32 PM
Don't set the duplex to "Full" unless you can do it on both sides of a link. Setting one side to full and leaving the other at auto may cause a mismatch which will significantly degrade performance.

When running iperf to test max performance, are you running this as TCP or UDP traffic? If it's TCP then sliding-window will cause the session to peak at just over 80% utilization if there's even 1ms of round-trip latency.
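
(If you want to take the TCP window out of the equation entirely, a UDP run will show what the raw path can carry; something like iperf -s -u on the server and iperf -c <server> -u -b 900M on the client, using iperf's standard -u and -b flags.)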

danito
07-14-2010, 9:30 AM
When running iperf to test max performance, are you running this as TCP or UDP traffic? If it's TCP then sliding-window will cause the session to peak at just over 80% utilization if there's even 1ms of round-trip latency.

I try to keep it as simple as possible and use the default TCP option, then specify a file (-F option) of a known size.

Client Side:
iperf -c xxx.xxx.xxx.xxx -F [file_name]

Server Side:
iperf -s
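
(iperf also has a -w flag to set the TCP window size and a -P flag to run parallel streams, which might be worth experimenting with here.)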

I would be interested to hear your thoughts and/or recommendations.

-Danny

command_liner
07-17-2010, 9:19 AM
Run a few tests and you will find all sorts of interesting cache and window
bottlenecks. A while back I did extensive testing on RAID systems to learn
actual performance of the installed systems. Real performance varied by
a factor of one thousand! Unless you have some knowledge of the
underlying file system IO parameters, you cannot measure any real network
traffic speeds for file copy. Note that on Windows systems, IO performance
drops off by a factor of 100 as you approach fullness on the file system.
Checking performance between 90% and 99% full will give you meaningless
numbers. Even above 80% it starts to get bad.

Use dd if=/dev/zero of=some_file bs=x count=y for a wide variety of x and y.
Then do the inverse test of reading.
Then do the same sort of logic with nc to a remote system -- no disks.
Then do the same sort of test with a combo of nc and dd to test real
I/O and network speeds combined.
Finally do the 4-way test with dd and nc at both ends, with and without
real files, with a variety of block sizes for the transfer at the dd level
and the nc level, and vary the transmission protocol.
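
For example, something like this (the host name and file names are just illustrative, and the exact nc flags differ between the GNU and BSD netcat variants):

# local disk write speed; conv=fdatasync keeps the page cache from inflating the number
dd if=/dev/zero of=testfile bs=1M count=1024 conv=fdatasync

# network only, no disks: start the receiver first, then the sender
nc -l -p 5001 > /dev/null
dd if=/dev/zero bs=1M count=1024 | nc receiver_host 5001

# disk plus network combined
nc -l -p 5001 > testfile
dd if=some_real_file bs=1M | nc receiver_host 5001

dd prints its own MB/s figure; for the nc runs, time the sending side and divide bytes by seconds.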

For comparison, use NFS and use dd for remote mounted systems.
Then use a few tests with RAMdisks.

All this is really easy to do on Unix, linux and OSX systems, and not so
bad under Windows using SFU.

FluorideInMyWater
07-17-2010, 9:39 AM
"cat6 is not capable of handling gigabyte traffic. u need fiber for it."

Wrong. Cat 6 runs Gb with no problem. Any limitations are due to
slow software.

No, not slow software. We are talking about the OSI physical and data-link layers here; the application layer is way up the chain. The hardware manages the physical transfer of the data and has its own firmware on board.

As far as the chart goes, I just looked for some data and threw it up. Could be wrong; if so, my apologies.

It can also depend on your read/write times on the HD. If you're operating in a server environment with NAS and other high-end RAID gear where you have hundreds of disks, your read/write/transfer rates are going to go up dramatically, but you still probably won't get the true benefit without fiber, unless you start teaming NICs together.

In my environment I used SolarWinds to run benchmark tests on data transfer through Cat5 and Cat6 runs, and they just never came anywhere close to fiber. This is of course IME.

But just because you set your NIC to gig/full-duplex does not mean you're going to get it, especially with low-end products. If you're running Cisco routers and switches, then yes, but with D-Links, Linksys (yes, owned by Cisco) and others you're not going to get it, realistically... IMO/IME :)

blisster
07-17-2010, 10:31 AM
I've got a gig-e SAN built with a simple Cisco gigabit switch and Cat5e patch cables (no link aggregation) and I reliably see throughput in the >900Mbps range between disk arrays (Netapp filers) passing NFS traffic. ISCSI is *slightly* lower but still in the >850Mbps range.


Your primary bottleneck is almost ALWAYS going to be your HDD/storage in a traditional computing environment.

FluorideInMyWater
07-17-2010, 10:35 AM
I've got a gig-e SAN built with a simple Cisco gigabit switch and Cat5e patch cables (no link aggregation) and I reliably see throughput in the >900Mbps range between disk arrays (Netapp filers) passing NFS traffic. ISCSI is *slightly* lower but still in the >850Mbps range.


Your primary bottleneck is almost ALWAYS going to be your HDD/storage in a traditional computing environment.

Yeah, NetApp rocks. I had a bunch of FAS380s, I think. Great throughput.

blisster
07-17-2010, 11:02 AM
Yeah, NetApp rocks. I had a bunch of FAS380s, I think. Great throughput.

Yeah, we've just bought into the tech and went with FAS2020s (the entry-level models) with SATA arrays. We're pushing the limits of the SATA array at this point and will probably move to SAS shelves for our VM and messaging environment next year.

FluorideInMyWater
07-17-2010, 11:48 AM
Yeah, we've just bought into the tech and went with FAS2020s (the entry-level models) with SATA arrays. We're pushing the limits of the SATA array at this point and will probably move to SAS shelves for our VM and messaging environment next year.

Great idea.

I had 10 Fujitsu-Siemens servers with (8) quad-core processors each that ran VMware; we ran the actual data/files on the NetApps. It was amazing. We also had 12 NetApp heads and about 600 disks in all. But lemme tell ya... YEEEEHAAAAAA!!! All of them fault tolerant with quad fiber bundles on them. That was some serious $$$ tho

jmlivingston
07-17-2010, 11:58 AM
The shop I'm working in these days is running a variety of IBM and HP blade servers, mostly booting from NetApp SAN over 4Gb Fibre Channel. I'm currently working on a project to migrate this to 10Gb FCoE with Nexus 5020 switches; we're just going to skip right over 8Gb FC.

We love our Netapps. :)

daveinwoodland
07-17-2010, 12:07 PM
I have a non-Windows setup, and across my gigabit network I can copy a 6GB file in approximately 4 minutes, using Cat5 and a gigabit switch.

FluorideInMyWater
07-17-2010, 12:12 PM
The shop I'm working in these days is running a variety of IBM and HP blade servers, mostly booting from NetApp SAN over 4Gb Fibre Channel. I'm currently working on a project to migrate this to 10Gb FCoE with Nexus 5020 switches; we're just going to skip right over 8Gb FC.

We love our Netapps. :)

Yeah, NetApp really got it right. But heck, it's just a Unix FS... so... not surprised.

jmlivingston
07-17-2010, 12:36 PM
I have a non-Windows setup, and across my gigabit network I can copy a 6GB file in approximately 4 minutes, using Cat5 and a gigabit switch.

That's not very good performance for a gigabit network. 6GB is roughly 6,000,000,000 bytes, which is 48,000,000,000 bits, and there are 240 seconds in 4 minutes, so you're only getting about 200 Mbit/sec on your transfers.

48,000,000,000 bits / 240 seconds = 200,000,000 bits/sec, or 200 Mbit/sec (25 MB/sec)

Some storage and OS vendors insist on using 1024-based units in their measurements, so let's look at it that way too. Even then it's not much of an improvement. In that case 6GB = 6 x 1024^3 = 6,442,450,944 bytes, or 51,539,607,552 bits.

51,539,607,552 bits / 240 seconds = ~214,748,365 bits/sec, or about 215 Mbit/sec

FluorideInMyWater
07-17-2010, 1:17 PM
That's not very good performance for a gigabit network. 6GB is roughly 6,000,000,000 bytes, which is 48,000,000,000 bits, and there are 240 seconds in 4 minutes, so you're only getting about 200 Mbit/sec on your transfers.

48,000,000,000 bits / 240 seconds = 200,000,000 bits/sec, or 200 Mbit/sec (25 MB/sec)

Some storage and OS vendors insist on using 1024-based units in their measurements, so let's look at it that way too. Even then it's not much of an improvement. In that case 6GB = 6 x 1024^3 = 6,442,450,944 bytes, or 51,539,607,552 bits.

51,539,607,552 bits / 240 seconds = ~214,748,365 bits/sec, or about 215 Mbit/sec

Yeah, because it's all binary, so always use 1024 in any computer calculation.

odysseus
07-17-2010, 3:29 PM
that was some serious $$$ tho

Y E S.