docker - Huge performance difference with Iperf with and without a VPN tunnel


I am running performance measurements across different network settings using iperf, and I see drastic differences between two basic setups:

  1. Two Docker containers connected to each other via the default docker0 bridge interface on the host.
  2. Two containers connected via a VPN tunnel interface, which is internally carried over the same docker0 bridge.

The iperf measurement ran for 10 seconds in both scenarios.
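For reference, a minimal sketch of how the client side of these runs could be driven, assuming iperf (v2) is installed in both containers and an `iperf -s` server is already listening on the other end; the IPs and flags below simply mirror the reports that follow and are not taken from the original post:

```python
# Hypothetical reproduction of the two client runs whose output is shown below.
# Assumes iperf (v2) is installed and `iperf -s` is already running in the
# target container.
import subprocess

def run_iperf_client(server_ip: str, seconds: int = 10) -> str:
    """Run a TCP iperf client for `seconds` with 1-second reporting intervals."""
    result = subprocess.run(
        ["iperf", "-c", server_ip, "-t", str(seconds), "-i", "1"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(run_iperf_client("172.17.0.4"))  # scenario 1: via the docker0 bridge
    print(run_iperf_client("10.23.0.2"))   # scenario 2: via the VPN tunnel
```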

**Scenario 1**

Client connecting to 172.17.0.4, TCP port 5001
TCP window size: 1.12 MByte (default)
------------------------------------------------------------
[  3] local 172.17.0.2 port 50728 connected with 172.17.0.4 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  3.26 GBytes  28.0 Gbits/sec
[  3]  1.0- 2.0 sec  3.67 GBytes  31.5 Gbits/sec
[  3]  2.0- 3.0 sec  3.70 GBytes  31.8 Gbits/sec
[  3]  3.0- 4.0 sec  3.93 GBytes  33.7 Gbits/sec
[  3]  4.0- 5.0 sec  3.34 GBytes  28.7 Gbits/sec
[  3]  5.0- 6.0 sec  3.44 GBytes  29.6 Gbits/sec
[  3]  6.0- 7.0 sec  3.55 GBytes  30.5 Gbits/sec
[  3]  7.0- 8.0 sec  3.50 GBytes  30.0 Gbits/sec
[  3]  8.0- 9.0 sec  3.41 GBytes  29.3 Gbits/sec
[  3]  9.0-10.0 sec  3.20 GBytes  27.5 Gbits/sec
[  3]  0.0-10.0 sec  35.0 GBytes  30.1 Gbits/sec

**Scenario 2**

Client connecting to 10.23.0.2, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.12.0.2 port 41886 connected with 10.23.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  15.1 MBytes   127 Mbits/sec
[  3]  1.0- 2.0 sec  14.9 MBytes   125 Mbits/sec
[  3]  2.0- 3.0 sec  14.9 MBytes   125 Mbits/sec
[  3]  3.0- 4.0 sec  14.2 MBytes   120 Mbits/sec
[  3]  4.0- 5.0 sec  16.4 MBytes   137 Mbits/sec
[  3]  5.0- 6.0 sec  18.0 MBytes   151 Mbits/sec
[  3]  6.0- 7.0 sec  18.6 MBytes   156 Mbits/sec
[  3]  7.0- 8.0 sec  16.4 MBytes   137 Mbits/sec
[  3]  8.0- 9.0 sec  13.5 MBytes   113 Mbits/sec
[  3]  9.0-10.0 sec  15.0 MBytes   126 Mbits/sec
[  3]  0.0-10.0 sec   157 MBytes   132 Mbits/sec

I am confused by the large difference in throughput.

Is the degradation due to the encryption and decryption work done by OpenSSL?

Or is it because packet headers below the application layer have to be marshalled and unmarshalled more than once when routing through the VPN tunnel?

Thank you,
Shabir

The two tests were not run equally: the first test used a TCP window of 1.12 MByte, while the second, slower test used a window of 0.085 MByte:

Client connecting to 172.17.0.4, TCP port 5001
TCP window size: 1.12 MByte (default)
                 ^^^^

Client connecting to 10.23.0.2, TCP port 5001
TCP window size: 85.0 KByte (default)
                 ^^^^

Thus, it's possible you're experiencing TCP window exhaustion, both because of the smaller buffer and because of the mildly increased latency through the VPN stack.
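To make the window-exhaustion argument concrete, here is a minimal sketch of the bound involved: a single TCP connection can move at most one window per round trip. The 0.3 ms RTT used below is only an assumption, consistent with the estimate worked out next:

```python
# Window-limited throughput bound: at most one TCP window per round trip.
def window_limited_bps(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on a single TCP connection's throughput, in bits/second."""
    return window_bytes * 8 / rtt_seconds

# Scenario 1: a 1.12 MByte window at an assumed ~0.3 ms RTT allows roughly
# 31 Gbit/s, which lines up with the measured ~30 Gbit/s.
print(window_limited_bps(1.12 * 1024 * 1024, 0.3e-3) / 1e9)  # ~31.3
```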

In order to know what buffer size to use (if you don't simply want a huge buffer), you need to know the bandwidth-delay product.

I don't know what the RTT of the original channel is, but I can take a stab at it. You were able to get ~30 Gbit/sec on the link with a buffer size of 1.12 MBytes, so doing the math backwards (unit conversions elided), I get:

1.12 MBytes / 30 Gbits/sec --> 0.3 ms.
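The same arithmetic, spelled out with the unit conversions included (numbers taken from the scenario 1 report above):

```python
# Back out the RTT from the observed throughput and the TCP window size.
window_bytes = 1.12 * 1024 * 1024      # 1.12 MByte TCP window
throughput_bps = 30.1e9                # ~30 Gbit/s from the scenario 1 summary
rtt_seconds = window_bytes * 8 / throughput_bps
print(f"estimated RTT ~ {rtt_seconds * 1e3:.2f} ms")  # ~0.31 ms
```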

That seems reasonable. Let's assume the VPN has double the RTT of the original link, so we'll assume a latency of 0.6 ms. Then we'll use the new window size of 0.085 MByte to figure out what kind of performance you should expect by calculating the bandwidth-delay product forwards:

0.085 MBytes / 0.6 ms --> BDP = 141 Mbit/sec.

Well, what do you know, that's just about the exact performance you're seeing.

If, for example, you wanted to saturate a 100 Gbit/sec pipe with an RTT of 0.6 ms, you would need a buffer size of 7.5 MBytes. Alternatively, if you wanted to saturate the pipe with n connections instead of a single connection, you'd need n sockets each with a send buffer size of 7.5/n MBytes.
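That 7.5 MByte figure is just the bandwidth-delay product again; a quick sketch of the forward calculation and the per-connection split (the n = 4 below is purely illustrative):

```python
# Buffer needed to keep a pipe full: the bandwidth-delay product, in bytes.
def required_window_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    return bandwidth_bps * rtt_seconds / 8

total = required_window_bytes(100e9, 0.6e-3)
print(f"single connection: {total / 1e6:.1f} MBytes")            # 7.5 MBytes
n = 4  # illustrative number of parallel connections
print(f"{n} connections: {total / n / 1e6:.3f} MBytes each")      # 1.875 MBytes
```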

