
Friday, September 1, 2023

An iPerf3 Guide

 

In today's digital age, where fast and reliable network connectivity is a necessity, ensuring that your network performs at its best is crucial. This is where iperf3, a versatile and powerful tool, comes into play. In this blog post, we'll dive into how you can use iperf3 to conduct network performance tests and optimize your network for peak efficiency.

 

What is iperf3?

 

Iperf3 is an open-source command-line tool specifically designed for measuring network performance. It's a successor to the original iperf and offers various improvements and additional features. Iperf3 allows you to determine crucial network metrics, such as bandwidth, latency, and packet loss, giving you a comprehensive view of your network's capabilities.

 

Getting Started with iperf3

 

Before you begin testing your network's performance, you need to set up iperf3. It's relatively straightforward:

 

Install iperf3: You can install iperf3 using your server’s package manager, or download the executable/installer from the official website.
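For example, on common Linux distributions (the package is named iperf3 in most repositories; adjust for your distro):

```shell
# RHEL / Oracle Linux / Fedora:
sudo dnf install -y iperf3

# Debian / Ubuntu:
sudo apt-get install -y iperf3
```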

 

Choose Server and Client: Decide which machine will act as the server and which as the client. The server will listen for test connections, while the client will initiate the tests.

 

Prepare your Client and Server: By default, iperf3 listens on TCP port 5201. You can change this to another port or protocol of your choice.

 

You need to open your firewalls (if any) to allow ingress and egress iperf3 traffic. In a cloud environment, you also need to open the Security Group/Security List (or the equivalent) associated with your subnet.
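As a sketch, on a server running firewalld (adjust the commands for your own firewall or cloud console):

```shell
# Allow iperf3's default port for both TCP and UDP tests:
sudo firewall-cmd --permanent --add-port=5201/tcp
sudo firewall-cmd --permanent --add-port=5201/udp
sudo firewall-cmd --reload
```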

 

If you intend to use a different port, specify it with the ‘-p/--port’ switch.

You can run a UDP-based test instead of the default TCP test by adding the ‘-u/--udp’ switch.
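For example, to run a UDP test on a non-default port (5202 here is just an illustration), start the server and client like this:

```shell
# Server side: listen on an alternate port
iperf3 -s -p 5202

# Client side: UDP test against that port
iperf3 -c <server_ip> -p 5202 -u
```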

 

Start the Server: On the server machine, open a terminal and run the following command:

 

       [root@server ~]# iperf3 -s -p 5201
       --------------------------------------------------------
       Server listening on 5201
       --------------------------------------------------------

 

As you can see, the server is now listening on TCP port 5201.

 

Perform the Test: On the client machine, open a terminal and run the following command:

 

         [root@client ~]#  iperf3 -c <server_ip> -t 60 -p 5201

 

Replace <server_ip> with the IP address of your server.

 

 

Running Basic Tests

Now that you have iperf3 set up, let's perform some basic tests to evaluate your network's performance:

 

Bandwidth Test: To measure the maximum bandwidth between the client and server, run:

 

      # iperf3 -c <server_ip> -t 10 -b 100M

 

This command runs the test for 10 seconds with a target rate of 100 Mbps (for TCP, the -b switch caps the sending rate).

 

Jitter Test: iperf3 does not report round-trip time (RTT) directly; its UDP mode reports jitter (the variation in packet arrival times) and packet loss. To run a light UDP test that reports jitter, run:

 

              # iperf3 -c <server_ip> -t 10 -u -b 1M

 

This command runs a 10-second UDP test at 1 Mbps; the summary includes jitter and packet loss. For raw round-trip latency, use ping or MTR instead.

 

Packet Loss Test: To test for packet loss, run:

 

         # iperf3 -c <server_ip> -t 10 -u -b 100M -l 1400

 

This command sends UDP packets with a size of 1400 bytes at a rate of 100 Mbps for 10 seconds. You can adjust the packet size and duration as needed.

 

 

The best way to find the bandwidth/throughput achievable between two endpoints is to run multiple parallel iperf3 streams using the ‘-P <n>’ switch, where n is the number of parallel streams. The total bandwidth across all streams is displayed at the end of the test results.

I would also suggest starting with UDP, as it gives you a truer baseline of the achievable bandwidth.

 

Use the following command to get a baseline of UDP bandwidth at maximum bitrate. Setting the target bitrate (-b) to 0 disables the bitrate limit:

            

            # iperf3 -c <target IP> -P8 -u -b 0

 

After you figure out the maximum bandwidth achieved with the UDP tests, you may follow up with the tests below to see where degradation occurs.

  • UDP testing: - iperf3 -c <target IP> -P8 -u -b 100M ; Start with conservative bandwidth

  • UDP testing: - iperf3 -c <target IP> -P8 -u -b 1000M ; Increase it closer to your expected bandwidth.
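The stepped UDP tests above can be wrapped in a small loop. This is only a sketch: the bitrate list and the `udp_sweep` helper name are assumptions you should adjust for your own network.

```shell
# Hypothetical helper: run UDP tests at increasing target bitrates
# to find where loss/degradation starts. Pass the target IP as $1.
udp_sweep() {
  server="$1"
  for rate in 100M 300M 500M 1000M; do
    echo "=== UDP test at ${rate} ==="
    iperf3 -c "$server" -u -P 8 -b "$rate" -t 10
  done
}

# Usage: udp_sweep <target IP>
```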

 

  • TCP testing: - iperf3 -c <target IP> -n 100M -P8 -w 32K ; Start with lower -w flag

  • TCP testing: - iperf3 -c <target IP> -n 100M -P8 -w 128K ; Raise -w flag to assess if throughput scales upwards.
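Likewise, the TCP window-size steps can be looped. Again a sketch; the window list and the `tcp_window_sweep` helper name are illustrative assumptions.

```shell
# Hypothetical helper: repeat the TCP test with progressively larger
# window sizes (-w) to see whether throughput scales. Pass the target IP as $1.
tcp_window_sweep() {
  server="$1"
  for win in 32K 64K 128K 256K; do
    echo "--- TCP window ${win} ---"
    iperf3 -c "$server" -n 100M -P 8 -w "$win"
  done
}

# Usage: tcp_window_sweep <target IP>
```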

 

 

If the TCP-based tests do not correlate with the UDP-based tests, and bandwidth does not scale upward as you increase the window size, you should consider TCP tuning. Keep in mind, though, the overheads associated with TCP (acknowledgements, retransmissions, congestion control), which is why UDP typically performs a bit better than TCP.

 

 

You should also make use of other tools like MTR/traceroute to identify any issues along the network path.


 

Tests for your reference:

 

These tests were run between instances with 1 Gbps NICs.

 

Client:

 

[root@client ~]# iperf3 -c <Server IP> -t 60 -p 5201 -V
iperf 3.1.7
Linux client 3.10.0-1160.92.1.el7.x86_64 #1 SMP Tue Jun 20 11:48:01 UTC 2023 x86_64
Control connection MSS 8948
Time: Thu, 03 Aug 2023 11:45:34 GMT
Connecting to host 144.24.213.222, port 5201
      Cookie: client.1691063134.817371.519df38409
      TCP MSS: 8948 (default)
[  4] local 10.0.0.205 port 39504 connected to <Server IP> port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 60 second test
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   128 MBytes  1.07 Gbits/sec    0   3.01 MBytes       
[  4]   1.00-2.00   sec   125 MBytes  1.05 Gbits/sec    0   3.01 MBytes       
[  4]   2.00-3.00   sec   124 MBytes  1.04 Gbits/sec    0   3.01 MBytes       
[..Redacted..]       
[  4]  52.00-53.00  sec  97.5 MBytes   818 Mbits/sec   36   2.92 MBytes       
[  4]  53.00-54.00  sec   125 MBytes  1.05 Gbits/sec    0   3.02 MBytes       
[  4]  54.00-55.00  sec   124 MBytes  1.04 Gbits/sec    0   3.02 MBytes       
[  4]  55.00-56.00  sec   124 MBytes  1.04 Gbits/sec    0   3.02 MBytes       
[  4]  56.00-57.00  sec  93.8 MBytes   786 Mbits/sec   13   2.11 MBytes       
[  4]  57.00-58.00  sec   128 MBytes  1.07 Gbits/sec    5   3.08 MBytes       
[  4]  58.00-59.00  sec   124 MBytes  1.04 Gbits/sec    0   3.08 MBytes       
[  4]  59.00-60.00  sec   125 MBytes  1.05 Gbits/sec    0   3.08 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  6.99 GBytes  1.00 Gbits/sec  289             sender
[  4]   0.00-60.00  sec  6.99 GBytes  1.00 Gbits/sec                  receiver
CPU Utilization: local/sender 2.0% (0.1%u/2.0%s), remote/receiver 6.1% (0.5%u/5.6%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic
iperf Done.
[root@client ~]#

 

 

Server:

 

[root@server ~]# iperf3 -s 
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from <Client IP>, port 39502
[  5] local 10.20.0.171 port 5201 connected to <Client IP> port 39504
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   120 MBytes  1.01 Gbits/sec                  
[  5]   1.00-2.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   2.00-3.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   3.00-4.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   4.00-5.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   5.00-6.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   6.00-7.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   7.00-8.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   8.00-9.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   9.00-10.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  10.00-11.00  sec  97.1 MBytes   814 Mbits/sec                  
[  5]  11.00-12.00  sec   124 MBytes  1.04 Gbits/sec                  
[..Redacted..]
[  5]  48.00-49.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  49.00-50.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  50.00-51.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  51.00-52.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  52.00-53.00  sec  97.9 MBytes   821 Mbits/sec                  
[  5]  53.00-54.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  54.00-55.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  55.00-56.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  56.00-57.00  sec  98.3 MBytes   824 Mbits/sec                  
[  5]  57.00-58.00  sec   123 MBytes  1.03 Gbits/sec                  
[  5]  58.00-59.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  59.00-60.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  60.00-60.06  sec  7.48 MBytes  1.04 Gbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-60.06  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-60.06  sec  6.99 GBytes   999 Mbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
^Ciperf3: interrupt - the server has terminated
[root@server ~]# 

 

Analyzing Results

 

After running these tests, iperf3 will provide you with detailed results, including throughput, latency, and packet loss metrics. Analyze these results to identify network bottlenecks and areas for improvement.
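One tip for analysis: iperf3 can emit machine-readable JSON via the ‘-J’ flag, which makes it easy to archive runs and compare them later (the output file name here is just an example):

```shell
# Save the full test result as JSON for later analysis:
iperf3 -c <server_ip> -t 10 -J > result.json
```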

 

Conclusion

 

Iperf3 is an invaluable tool for network administrators and enthusiasts alike. By regularly conducting network performance tests with iperf3, you can ensure that your network operates at its peak efficiency, identify and resolve issues proactively, and provide a seamless experience for all users. So, start testing your network today and unlock its full potential!