I saw a very entertaining YouTube video where the host tested 10 Gbit Ethernet cards on a LAN to perform file transfers. Somehow he could not reach the maximum theoretical transfer speed. Here is my explanation why, and it is probably that simple:
I do not know which protocol was used underneath for the transfer, but if it is TCP, there is something called the bandwidth-delay product: to keep the network pipe full, you need sufficiently large TCP send and receive buffers.
Let me show you how to compute that. The RTT is basically what ping gives you. Let's assume a worst-case scenario with an RTT of 0.2 ms. 10 Gbit/s is 1250 MB/s, or 1,250,000,000 B/s.
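The bandwidth-delay product formula itself is simply: required buffer size = bandwidth × RTT. Plugging in the numbers: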
1,250,000,000 B/s * 0.0002 s = 250,000 B, or 250 KB. Last time I checked (I'm a Linux guy), the default Windows TCP buffer size was somewhere around 16 KB or perhaps 64 KB. These are the parameters you would need to play with to get close to the 10 Gbit/s rate.
You can change the default value through the registry, or on a per-connection basis with the socket API.
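Here is a minimal sketch of the per-connection approach using BSD sockets (the same setsockopt call exists in Winsock on Windows, with a char* value argument). The 250 KB figure is just the bandwidth-delay product computed above; adjust it to your own RTT:

```cpp
// Bump the send/receive buffers of a TCP socket to the computed BDP
// before connecting, so the window can stay large enough to keep a
// 10 Gbit pipe full. POSIX sockets; error handling kept minimal.
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    // 250 KB from the bandwidth-delay product above; tune to your RTT.
    int bufSize = 250 * 1000;

    if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bufSize, sizeof(bufSize)) < 0)
        perror("setsockopt(SO_SNDBUF)");
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bufSize, sizeof(bufSize)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    // Read back what the kernel actually granted (Linux typically
    // doubles the requested value to account for bookkeeping overhead).
    int actual = 0;
    socklen_t len = sizeof(actual);
    if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &actual, &len) == 0)
        std::printf("receive buffer is now %d bytes\n", actual);

    // ... connect()/send()/recv() as usual ...
    close(sock);
    return 0;
}
```

Note that the buffers should be set before connect(), since the window scaling option is negotiated during the TCP handshake.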
If we take the maximum transfer rate in your video, which was roughly 360 MB/s, and plug it into the bandwidth-delay product formula, I get 72 KB, which is very close to 64 KB. Pretty sure that this is your problem!
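For completeness, the arithmetic is 360,000,000 B/s × 0.0002 s = 72,000 B, i.e. 72 KB.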