Thursday, September 7, 2023

The DIG command cheat sheet

 

We have all heard that our beloved DNS lookup tool ‘nslookup’ was being deprecated. Even though the nslookup tool has been resurrected and is not going anywhere, it’s worth learning about another cool name lookup tool – ‘dig’.

 

What is ‘dig’?

'Dig,' short for Domain Information Groper, is a command-line utility designed for DNS queries. Dig is super useful for troubleshooting DNS problems.

 

Getting Started with 'dig'

 

Installation

 

Most RHEL installations include 'dig' by default. However, if it's not present, you can install it using the package manager 'yum':

$ sudo yum install bind-utils

 

Basic Usage and understanding the output

 

Let’s do a name query:

# dig google.com

 

This basic command will yield fundamental information, including the IP address associated with the domain.
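Here is a trimmed example of what the response typically looks like (the header ID, TTL, resolver address and timings below are illustrative; the version string and the answer match the output discussed below):

; <<>> DiG 9.11.4 <<>> google.com
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12345
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;google.com.                    IN      A

;; ANSWER SECTION:
google.com.             300     IN      A       142.250.184.206

;; Query time: 12 msec
;; SERVER: 10.0.0.2#53(10.0.0.2)
;; WHEN: Thu Sep 07 10:00:00 UTC 2023
;; MSG SIZE  rcvd: 55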

 




Understanding the output:

  1. Lines beginning with ; are comments.
  2. The first line tells us the version of dig (9.11.4 in this example).
  3. Next, dig shows the header of the response it received from the DNS server.
  4. Next comes the question section, which simply tells us the query, which in this case is a query for the “A” record of google.com. The IN means this is an Internet lookup (in the Internet class).
  5. The answer section tells us that google.com has the IP address 142.250.184.206.
  6. Lastly, there are some statistics about the query, such as the query time, the server that answered, and the message size.

 

Some Quick Tips:

- You may turn off the statistics using the +nostats option.

- You may use +short to make the output a lot more concise.



Quick look at the important dig commands:


Query Domain “A” Record

# dig google.com +short

Query MX record

# dig google.com MX +short

Query SOA Record

# dig google.com SOA +short

Query the TTL of a record (TTL is not a record type itself; it appears as the second field of each answer line)

# dig google.com +noall +answer

Query only Answer

# dig google.com +nocomments +noquestion +noauthority +noadditional +nostats

Query All DNS records

# dig google.com ANY +noall +answer

Reverse DNS lookup

# dig -x 142.250.184.206 +short

Query a Specific DNS server

# dig @8.8.8.8 google.com +short

Trace DNS Query Path

# dig google.com +trace



Set up default options for dig

You may set up per-user defaults for dig by creating a file ${HOME}/.digrc (a file named .digrc in each user’s home directory) containing the required default options. 

 

[user@node1 ~]$ cat ${HOME}/.digrc

+short

[user@node1 ~]$ dig google.com

142.250.184.206


Let’s take a look at some of the important options.

-4 : Performs an IPv4-only query.

-6 : Performs an IPv6-only query.

-b address[#port] : Sets the source IP address (and optionally the source port) of the query.

-p port : Sends the query to the specified port. This must be used if the DNS server is listening on a non-standard port rather than the default 53 (see the example after this list).

-x address : Performs a reverse lookup for the given IP address.
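
For example, to force IPv4 and query a resolver listening on a non-standard port (the server address 192.0.2.53 and port 5353 here are purely illustrative):

# dig -4 -p 5353 @192.0.2.53 example.com A +short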



Now some Query Options.

Dig provides a number of query options that control how the query is made and how the results are displayed. 

 

Query options are prefixed with +. Some keywords can be negated by adding no after the + sign, e.g. +noall. A combined example follows the list below.

 

+[no]all : Sets or clears all display flags. Querying with +noall alone returns an empty result; you need to add the query options for the sections you want displayed.

+[no]answer : Displays [or hides] the answer section. Handy to combine with options like +noall +answer for a short, readable reply.

+nocmd : This will remove the initial comment section showing the dig version.

+nocomments : Removes the comment lines in the output; the default is +comments.

+[no]fail : With +nofail, dig will try the next nameserver if it receives a SERVFAIL. The default is +fail, which does not try the next server (the opposite of normal stub resolver behaviour).

+noquestion : Do not print the question section when an answer is returned.

+ndots=D : Set the number of dots that have to appear in name to D for it to be considered absolute. The default value is that defined using the ndots statement in /etc/resolv.conf, or 1 if no ndots statement is present. Names with fewer dots are interpreted as relative names and will be searched for in the domains listed in the search or domain directive in /etc/resolv.conf if +search is set.

+short : Provides the most concise answer.

+nostats : Does not print the query statistics such as the time, size and so on.

+timeout=N : Sets the timeout for a query to N seconds. Default is 5 seconds.

+trace : Enables tracing of the delegation path from the root name servers for the name being looked up. Dig makes iterative queries to resolve the name, following referrals from the root servers and showing the answer from each server that was used to resolve the lookup.

+tries=N : Sets the number of times the UDP query is sent to the server, instead of the default 3.
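
Putting a few of these together (the record type and values are just examples):

# dig google.com MX +noall +answer

# dig google.com +short +timeout=2 +tries=1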










Friday, September 1, 2023

An iPerf3 Guide

 

In today's digital age, where fast and reliable network connectivity is a necessity, ensuring that your network performs at its best is crucial. This is where iperf3, a versatile and powerful tool, comes into play. In this blog post, we'll dive into how you can use iperf3 to conduct network performance tests and optimize your network for peak efficiency.

 

What is iperf3?

 

Iperf3 is an open-source command-line tool specifically designed for measuring network performance. It's a successor to the original iperf and offers various improvements and additional features. Iperf3 allows you to determine crucial network metrics, such as bandwidth, latency, and packet loss, giving you a comprehensive view of your network's capabilities.

 

Getting Started with iperf3

 

Before you begin testing your network's performance, you need to set up iperf3. It's relatively straightforward:

 

Install iperf3: You can install iperf3 using your server’s package manager, or you can download the iperf3 executable/installer from the official webpage.
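
On RHEL-family systems, for example, it is usually available from the standard repositories (the package name may differ on other distributions):

$ sudo yum install iperf3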

 

Choose Server and Client: Decide which machine will act as the server and which as the client. The server will listen for test connections, while the client will initiate the tests.

 

Prepare your Client and Server: By default, iperf3 listens on 5201/TCP. You can change this to another port/protocol of your choice. 

 

You need to open your firewalls (if any) to allow ingress and egress for the iperf3 traffic. In a cloud environment, you also need to open up the Security Group/Security List (or the equivalent) associated with your subnet.
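
If the hosts run firewalld, for instance, something like the following opens the default iperf3 port (adjust the port and protocol to whatever you actually use):

# firewall-cmd --permanent --add-port=5201/tcp

# firewall-cmd --permanent --add-port=5201/udp

# firewall-cmd --reload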

 

If you intend to use a different port, specify it using the ‘-p/--port’ switch.

You may run a UDP based test instead of the default TCP test by specifying the ‘-u/--udp’ switch.

 

Start the Server: On the server machine, open a terminal and run the following command:

 

       [root@server ~]# iperf3 -s -p 5201
       --------------------------------------------------------
       Server listening on 5201
       --------------------------------------------------------

 

As you can see, the server has started listening on port 5201/TCP.

 

Perform the Test: On the client machine, open a terminal and run the following command:

 

         [root@client ~]#  iperf3 -c <server_ip> -t 60 -p 5201

 

Replace <server_ip> with the IP address of your server.

 

 

Running Basic Tests

Now that you have iperf3 set up, let's perform some basic tests to evaluate your network's performance:

 

Bandwidth Test: To measure the maximum bandwidth between the client and server, run:

 

      # iperf3 -c <server_ip> -t 10 -b 100M

 

This command will run the test for 10 seconds, limiting the sending rate to 100 Mbps. Omit the -b option (as in the earlier 60-second example) to let TCP use whatever bandwidth is available.

 

Latency/Jitter Test: iperf3 does not measure round-trip time (RTT) directly (use ping or mtr for that), but a low-rate UDP test reports jitter and packet loss, which are useful latency-related indicators:

 

              # iperf3 -c <server_ip> -t 10 -u -b 1M

 

This command runs a 10-second UDP test at 1 Mbps; the summary reports the measured jitter and any lost datagrams.

 

Packet Loss Test: To test for packet loss, run:

 

         #iperf3 -c <server_ip> -t 10 -u -b 100M -l 1400

 

This command sends UDP packets with a size of 1400 bytes at a rate of 100 Mbps for 10 seconds. You can adjust the packet size and duration as needed.

 

 

The best way to find out the bandwidth/throughput that can be achieved between two endpoints is to use multiple parallel streams of iperf3 using the ‘-P <n>’ switch, where n is the number of parallel streams. The total bandwidth utilized will be displayed at the end of the test result.

I would also suggest starting with UDP, as this gives you a true baseline of the achievable bandwidth. 

 

Use the following command to get a UDP bandwidth baseline at maximum bitrate. Setting the target bitrate (-b) to 0 disables the bitrate limit:

            

            # iperf3 -c <target IP> -P8 -u -b 0

 

After you figure out the maximum bandwidth achieved with UDP tests, you may follow up with the following to see when the degradation occurs.

  • UDP testing: iperf3 -c <target IP> -P8 -u -b 100M  (start with a conservative bandwidth)

  • UDP testing: iperf3 -c <target IP> -P8 -u -b 1000M  (increase it closer to your expected bandwidth)

 

  • TCP testing: iperf3 -c <target IP> -n 100M -P8 -w 32K  (start with a lower -w value)

  • TCP testing: iperf3 -c <target IP> -n 100M -P8 -w 128K  (raise -w to assess whether throughput scales upwards)

 

 

If the TCP based tests do not correlate with the UDP based tests, and the bandwidth doesn’t scale upwards as you increase the window size, you should consider TCP tuning. Do keep in mind the overheads associated with TCP, which is why UDP usually performs a bit better than TCP.
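
As a starting point, TCP tuning on Linux usually means raising the socket buffer limits so the window can grow to match the bandwidth-delay product (the values below are only illustrative; size them for your own link):

# sysctl -w net.core.rmem_max=16777216

# sysctl -w net.core.wmem_max=16777216

# sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"

# sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"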

 

 

You should also make use of other tools like MTR/Traceroute to determine any issues along the network path.
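
For example (the exact flags depend on the mtr/traceroute versions installed):

# mtr --report --report-cycles 50 <target IP>

# traceroute <target IP>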


 

Tests for your reference:

 

These tests were run from instances with a 1 Gbps NIC.

 

Client:

 

[root@client ~]# iperf3 -c <Server IP> -t 60 -p 5201 -V
iperf 3.1.7
Linux client 3.10.0-1160.92.1.el7.x86_64 #1 SMP Tue Jun 20 11:48:01 UTC 2023 x86_64
Control connection MSS 8948
Time: Thu, 03 Aug 2023 11:45:34 GMT
Connecting to host <Server IP>, port 5201
      Cookie: client.1691063134.817371.519df38409
      TCP MSS: 8948 (default)
[  4] local 10.0.0.205 port 39504 connected to <Server IP> port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 60 second test
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   128 MBytes  1.07 Gbits/sec    0   3.01 MBytes       
[  4]   1.00-2.00   sec   125 MBytes  1.05 Gbits/sec    0   3.01 MBytes       
[  4]   2.00-3.00   sec   124 MBytes  1.04 Gbits/sec    0   3.01 MBytes       
[..Redacted..]       
[  4]  52.00-53.00  sec  97.5 MBytes   818 Mbits/sec   36   2.92 MBytes       
[  4]  53.00-54.00  sec   125 MBytes  1.05 Gbits/sec    0   3.02 MBytes       
[  4]  54.00-55.00  sec   124 MBytes  1.04 Gbits/sec    0   3.02 MBytes       
[  4]  55.00-56.00  sec   124 MBytes  1.04 Gbits/sec    0   3.02 MBytes       
[  4]  56.00-57.00  sec  93.8 MBytes   786 Mbits/sec   13   2.11 MBytes       
[  4]  57.00-58.00  sec   128 MBytes  1.07 Gbits/sec    5   3.08 MBytes       
[  4]  58.00-59.00  sec   124 MBytes  1.04 Gbits/sec    0   3.08 MBytes       
[  4]  59.00-60.00  sec   125 MBytes  1.05 Gbits/sec    0   3.08 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  6.99 GBytes  1.00 Gbits/sec  289             sender
[  4]   0.00-60.00  sec  6.99 GBytes  1.00 Gbits/sec                  receiver
CPU Utilization: local/sender 2.0% (0.1%u/2.0%s), remote/receiver 6.1% (0.5%u/5.6%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic
iperf Done.
[root@client ~]#

 

 

Server:

 

[root@server ~]# iperf3 -s 
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from <Client IP>, port 39502
[  5] local 10.20.0.171 port 5201 connected to <Client IP> port 39504
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   120 MBytes  1.01 Gbits/sec                  
[  5]   1.00-2.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   2.00-3.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   3.00-4.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   4.00-5.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   5.00-6.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   6.00-7.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   7.00-8.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   8.00-9.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   9.00-10.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  10.00-11.00  sec  97.1 MBytes   814 Mbits/sec                  
[  5]  11.00-12.00  sec   124 MBytes  1.04 Gbits/sec                  
[..Redacted..]
[  5]  48.00-49.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  49.00-50.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  50.00-51.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  51.00-52.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  52.00-53.00  sec  97.9 MBytes   821 Mbits/sec                  
[  5]  53.00-54.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  54.00-55.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  55.00-56.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  56.00-57.00  sec  98.3 MBytes   824 Mbits/sec                  
[  5]  57.00-58.00  sec   123 MBytes  1.03 Gbits/sec                  
[  5]  58.00-59.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  59.00-60.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  60.00-60.06  sec  7.48 MBytes  1.04 Gbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-60.06  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-60.06  sec  6.99 GBytes   999 Mbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
^Ciperf3: interrupt - the server has terminated
[root@server ~]# 

 

Analyzing Results

 

After running these tests, iperf3 will provide you with detailed results, including throughput, latency, and packet loss metrics. Analyze these results to identify network bottlenecks and areas for improvement.

 

Conclusion

 

Iperf3 is an invaluable tool for network administrators and enthusiasts alike. By regularly conducting network performance tests with iperf3, you can ensure that your network operates at its peak efficiency, identify and resolve issues proactively, and provide a seamless experience for all users. So, start testing your network today and unlock its full potential!