Thursday, September 7, 2023

The DIG command cheat sheet

 

We have all heard that our beloved DNS lookup tool ‘nslookup’ was being deprecated. Even though nslookup has since been resurrected and is not going anywhere, it’s worth learning another cool name lookup tool – ‘dig’.

 

What is ‘dig’?

'Dig,' short for Domain Information Groper, is a command-line utility designed for DNS queries. Dig is super useful for troubleshooting DNS problems.

 

Getting Started with 'dig'

 

Installation

 

Most RHEL installations include 'dig' by default. However, if it's not present, you can install it using the package manager 'yum':

$ sudo yum install bind-utils

 

Basic Usage and understanding the output

 

Let’s do a name query:

# dig google.com

 

This basic command will yield fundamental information, including the IP address associated with the domain.
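For reference, a representative (trimmed) reply looks like the following; the exact server, TTL, and IP address will of course vary:

```text
; <<>> DiG 9.11.4-P2 <<>> google.com
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55361
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; QUESTION SECTION:
;google.com.                    IN      A

;; ANSWER SECTION:
google.com.             300     IN      A       142.250.184.206

;; Query time: 12 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Thu Sep 07 10:15:00 UTC 2023
;; MSG SIZE  rcvd: 55
```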

 




Understanding the output:

  1. Lines beginning with ; are comments.
  2. The first line tells us the version of dig (9.11.4 in this example).
  3. Next, dig shows the header of the response it received from the DNS server.
  4. Next comes the question section, which simply tells us the query, which in this case is a query for the “A” record of google.com. The IN means this is an Internet lookup (in the Internet class).
  5. The answer section tells us that google.com has the IP address 142.250.184.206.
  6. Lastly, there are some stats about the query.
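The question section described in step 4 is just a few bytes on the wire. As a minimal stdlib-only sketch (the transaction ID and flags below are illustrative, not what dig actually picks), this is roughly how a query for the “A” record of google.com in the IN class is encoded:

```python
import struct

def encode_qname(name: str) -> bytes:
    # DNS names are encoded as length-prefixed labels, terminated by a zero byte.
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

QTYPE_A, QCLASS_IN = 1, 1  # type A, class IN ("Internet")

header = struct.pack(">HHHHHH",
                     0x1234,      # transaction ID (arbitrary for this sketch)
                     0x0100,      # flags: standard query, recursion desired
                     1, 0, 0, 0)  # QDCOUNT=1; answer/authority/additional counts = 0

question = encode_qname("google.com") + struct.pack(">HH", QTYPE_A, QCLASS_IN)

print((header + question).hex())
```

Real resolvers layer EDNS, retries, and truncation handling on top of this, but the question section itself is exactly a name, a type, and a class.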

 

Some Quick Tips:

-       You may turn off the stats using the +nostats option.

-       You may use +short to make the output a lot more readable.



Quick look at the important dig commands:


Query Domain “A” Record

# dig google.com +short

Query MX record

# dig google.com MX +short

Query SOA Record

# dig google.com SOA +short

Query a record’s TTL (note: TTL is not a DNS record type; it is shown as the second field of each answer line)

# dig google.com +noall +answer

Query only Answer

# dig google.com +nocomments +noquestion +noauthority +noadditional +nostats

Query All DNS records

# dig google.com ANY +noall +answer

Reverse DNS lookup

# dig -x 142.250.184.206 +short

Query a specific DNS server

# dig @8.8.8.8 google.com +short

Trace DNS Query Path

# dig google.com +trace



Set up default options for dig

You may set up per-user defaults for dig by creating a file ${HOME}/.digrc (a file named .digrc under each user’s home directory) with the required default options. 

 

[user@node1 ~]$ cat ${HOME}/.digrc

+short

[user@node1 ~]$ dig google.com

142.250.184.206


Let’s take a look at some of the important options.

-4 : Perform an IPv4-only query.

-6 : Perform an IPv6-only query.

-b address[#port] : Set the source IP address (and optionally source port) of the query.

-p port : Send the query to the specified port. This must be used if the DNS server is listening on a non-standard port other than 53.

-x address : Used for reverse lookups.



Now some Query Options.

Dig provides a number of query options that control how the query is made and how the results are displayed. 

 

Query options are prefixed with +. Some keywords can be negated by prefixing no after the + sign like +noall.

 

+[no]all : Sets or clears all display flags. Querying with +noall returns empty output; you then add the query options for the sections you want.

+[no]answer : Display [or do not display] the answer section. Handy to combine with other options, e.g. +noall +answer for a nice, readable reply.

+nocmd : This will remove the initial comment section showing the dig version.

+nocomments : Removes the comment lines from the output; the default is +comments.

+[no]fail : With +nofail, the client retries the next nameserver in case of a SERVFAIL. The default is +fail, which does not try the next server.

+noquestion : Do not print the question section when an answer is returned.

+ndots=D : Set the number of dots that have to appear in a name to D for it to be considered absolute. The default value is that defined using the ndots statement in /etc/resolv.conf, or 1 if no ndots statement is present. Names with fewer dots are interpreted as relative names and will be searched for in the domains listed in the search or domain directive in /etc/resolv.conf if +search is set.
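For example, with a hypothetical search domain configured in /etc/resolv.conf (corp.example.com and web1 are made-up names for illustration):

```text
# /etc/resolv.conf (illustrative)
search corp.example.com
options ndots:2

# "dig web1 +search" sees fewer than 2 dots in the name, treats it as
# relative, and looks up web1.corp.example.com first.
```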

+short : Provides the most concise answer.

+nostats : Does not print the query statistics such as the time, size and so on.

+timeout=N : Sets the timeout for a query to N seconds. Default is 5 seconds.

+trace : Enable tracing of the delegation path from the root name servers for the name being looked up. Dig makes iterative queries to resolve the name, following referrals from the root servers and showing the answer from each server that was used to resolve the lookup.

+tries=N : Sets the number of UDP retries to server instead of the default 3.










Friday, September 1, 2023

An iPerf3 Guide

 

In today's digital age, where fast and reliable network connectivity is a necessity, ensuring that your network performs at its best is crucial. This is where iperf3, a versatile and powerful tool, comes into play. In this blog post, we'll dive into how you can use iperf3 to conduct network performance tests and optimize your network for peak efficiency.

 

What is iperf3?

 

Iperf3 is an open-source command-line tool specifically designed for measuring network performance. It's a successor to the original iperf and offers various improvements and additional features. Iperf3 allows you to determine crucial network metrics, such as bandwidth, latency, and packet loss, giving you a comprehensive view of your network's capabilities.

 

Getting Started with iperf3

 

Before you begin testing your network's performance, you need to set up iperf3. It's relatively straightforward:

 

Install iperf3: You can install iperf3 using your server’s package manager, or download the iperf3 executable/installer from the official website.

 

Choose Server and Client: Decide which machine will act as the server and which as the client. The server will listen for test connections, while the client will initiate the tests.

 

Prepare your Client and Server: By default iperf3 runs on 5201/TCP. You can change this to another port/protocol of your choice. 

 

You need to open your firewalls (if any) to allow ingress and egress for the iperf3 traffic. In a cloud environment, you need to open up the Security Group/Security List or the equivalent associated with your subnet.

 

If you intend to use a different port, specify it using the ‘-p/--port’ switch.

You may use UDP based test instead of the default TCP test by specifying ‘-u/--udp’ switch.

 

Start the Server: On the server machine, open a terminal and run the following command:

 

       [root@server ~]# iperf3 -s -p 5201
       --------------------------------------------------------
       Server listening on 5201
       --------------------------------------------------------

 

As you can see, the server has started listening on port 5201/TCP.

 

Perform the Test: On the client machine, open a terminal and run the following command:

 

         [root@client ~]#  iperf3 -c <server_ip> -t 60 -p 5201

 

Replace <server_ip> with the IP address of your server.

 

 

Running Basic Tests

Now that you have iperf3 set up, let's perform some basic tests to evaluate your network's performance:

 

Bandwidth Test: To measure the maximum bandwidth between the client and server, run:

 

      # iperf3 -c <server_ip> -t 10 -b 100M

 

This command will run the test for 10 seconds, simulating traffic at a rate of 100 Mbps.

 

Latency Test: iperf3’s UDP mode reports jitter (the variation in latency) rather than round-trip time. To measure it, run:

 

              # iperf3 -c <server_ip> -t 10 -u -b 1M

 

This command runs a 10-second UDP test with a bandwidth of 1 Mbps; the report includes jitter and packet loss.

 

Packet Loss Test: To test for packet loss, run:

 

         # iperf3 -c <server_ip> -t 10 -u -b 100M -l 1400

 

This command sends UDP packets with a size of 1400 bytes at a rate of 100 Mbps for 10 seconds. You can adjust the packet size and duration as needed.
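As a quick sanity check on the arithmetic behind this test (a sketch of the numbers, not iperf3 output):

```python
# UDP packet-loss test above: 100 Mbit/s, 1400-byte datagrams, 10 seconds.
rate_bps = 100_000_000   # -b 100M
payload_bytes = 1400     # -l 1400
duration_s = 10          # -t 10

packets_per_sec = rate_bps / (payload_bytes * 8)
total_packets = packets_per_sec * duration_s

print(round(packets_per_sec))  # roughly 8929 datagrams per second
print(round(total_packets))    # roughly 89286 datagrams over the run
```

If the receiver reports fewer datagrams than the sender transmitted, the difference is your packet loss.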

 

 

The best way to find the bandwidth/throughput that can be achieved between two endpoints is to use multiple parallel iperf3 streams via the ‘-P <n>’ switch, where n is the number of parallel streams. The total bandwidth utilized is displayed at the end of the test result.

I would also suggest starting with UDP, as this gives you a true baseline of the achievable bandwidth. 

 

Use the following command to get a baseline of UDP bandwidth at the maximum bitrate. Setting the target bitrate (-b) to 0 disables bitrate limits:

            

            # iperf3 -c <target IP> -P8 -u -b 0

 

After you figure out the maximum bandwidth achieved with UDP tests, you can follow up with the following to see where degradation occurs.

  • UDP testing: iperf3 -c <target IP> -P8 -u -b 100M ; start with a conservative bandwidth

  • UDP testing: iperf3 -c <target IP> -P8 -u -b 1000M ; increase it closer to your expected bandwidth

 

  • TCP testing: iperf3 -c <target IP> -n 100M -P8 -w 32K ; start with a lower -w window size

  • TCP testing: iperf3 -c <target IP> -n 100M -P8 -w 128K ; raise -w to assess whether throughput scales upwards

 

 

If TCP-based tests do not correlate with UDP-based tests and the bandwidth doesn’t scale upwards as you increase the window size, you should consider TCP tuning. But keep in mind the overhead associated with TCP, which is why UDP performs a bit better than TCP.

 

 

You should also make use of other tools like MTR/Traceroute to determine any issues along the network path.


 

Tests for your reference:

 

These tests were run from instances with a 1 Gbps NIC.

 

Client:

 

[root@client ~]# iperf3 -c <Server IP> -t 60 -p 5201 -V
iperf 3.1.7
Linux client 3.10.0-1160.92.1.el7.x86_64 #1 SMP Tue Jun 20 11:48:01 UTC 2023 x86_64
Control connection MSS 8948
Time: Thu, 03 Aug 2023 11:45:34 GMT
Connecting to host 144.24.213.222, port 5201
      Cookie: client.1691063134.817371.519df38409
      TCP MSS: 8948 (default)
[  4] local 10.0.0.205 port 39504 connected to <Server IP> port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 60 second test
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   128 MBytes  1.07 Gbits/sec    0   3.01 MBytes       
[  4]   1.00-2.00   sec   125 MBytes  1.05 Gbits/sec    0   3.01 MBytes       
[  4]   2.00-3.00   sec   124 MBytes  1.04 Gbits/sec    0   3.01 MBytes       
[..Redacted..]       
[  4]  52.00-53.00  sec  97.5 MBytes   818 Mbits/sec   36   2.92 MBytes       
[  4]  53.00-54.00  sec   125 MBytes  1.05 Gbits/sec    0   3.02 MBytes       
[  4]  54.00-55.00  sec   124 MBytes  1.04 Gbits/sec    0   3.02 MBytes       
[  4]  55.00-56.00  sec   124 MBytes  1.04 Gbits/sec    0   3.02 MBytes       
[  4]  56.00-57.00  sec  93.8 MBytes   786 Mbits/sec   13   2.11 MBytes       
[  4]  57.00-58.00  sec   128 MBytes  1.07 Gbits/sec    5   3.08 MBytes       
[  4]  58.00-59.00  sec   124 MBytes  1.04 Gbits/sec    0   3.08 MBytes       
[  4]  59.00-60.00  sec   125 MBytes  1.05 Gbits/sec    0   3.08 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  6.99 GBytes  1.00 Gbits/sec  289             sender
[  4]   0.00-60.00  sec  6.99 GBytes  1.00 Gbits/sec                  receiver
CPU Utilization: local/sender 2.0% (0.1%u/2.0%s), remote/receiver 6.1% (0.5%u/5.6%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic
iperf Done.
[root@client ~]#

 

 

Server:

 

[root@server ~]# iperf3 -s 
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from <Client IP>, port 39502
[  5] local 10.20.0.171 port 5201 connected to <Client IP> port 39504
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   120 MBytes  1.01 Gbits/sec                  
[  5]   1.00-2.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   2.00-3.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   3.00-4.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   4.00-5.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   5.00-6.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   6.00-7.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   7.00-8.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   8.00-9.00   sec   124 MBytes  1.04 Gbits/sec                  
[  5]   9.00-10.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  10.00-11.00  sec  97.1 MBytes   814 Mbits/sec                  
[  5]  11.00-12.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  48.00-49.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  49.00-50.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  50.00-51.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  51.00-52.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  52.00-53.00  sec  97.9 MBytes   821 Mbits/sec                  
[..Redacted..]
[  5]  53.00-54.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  54.00-55.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  55.00-56.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  56.00-57.00  sec  98.3 MBytes   824 Mbits/sec                  
[  5]  57.00-58.00  sec   123 MBytes  1.03 Gbits/sec                  
[  5]  58.00-59.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  59.00-60.00  sec   124 MBytes  1.04 Gbits/sec                  
[  5]  60.00-60.06  sec  7.48 MBytes  1.04 Gbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-60.06  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-60.06  sec  6.99 GBytes   999 Mbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
^Ciperf3: interrupt - the server has terminated
[root@server ~]# 

 

Analyzing Results

 

After running these tests, iperf3 will provide you with detailed results, including throughput, latency, and packet loss metrics. Analyze these results to identify network bottlenecks and areas for improvement.
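For scripted analysis, iperf3 can emit its results as JSON with the -J flag (iperf3 -c <server_ip> -J). The snippet below parses an abbreviated, hypothetical slice of that JSON (real output carries many more fields) to pull out the headline numbers:

```python
import json

# Abbreviated sample of what `iperf3 -c <server_ip> -J` prints for a TCP
# test. The numbers here are made up for illustration.
sample = '''
{
  "end": {
    "sum_sent":     {"bytes": 7504658432, "bits_per_second": 1000621124.0, "retransmits": 289},
    "sum_received": {"bytes": 7503609856, "bits_per_second": 1000481312.0}
  }
}
'''

end = json.loads(sample)["end"]
sent_gbps = end["sum_sent"]["bits_per_second"] / 1e9
recv_gbps = end["sum_received"]["bits_per_second"] / 1e9

print(f"sender:   {sent_gbps:.2f} Gbit/s, {end['sum_sent']['retransmits']} retransmits")
print(f"receiver: {recv_gbps:.2f} Gbit/s")
```

A growing gap between sender and receiver rates, or a high retransmit count, is a good first hint of where to dig deeper.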

 

Conclusion

 

Iperf3 is an invaluable tool for network administrators and enthusiasts alike. By regularly conducting network performance tests with iperf3, you can ensure that your network operates at its peak efficiency, identify and resolve issues proactively, and provide a seamless experience for all users. So, start testing your network today and unlock its full potential!




 

 


Thursday, August 31, 2023

TCP Connection states

 

Understanding TCP Connection States: A Fundamental Aspect of Network Communication

In the vast landscape of computer networking, the Transmission Control Protocol (TCP) stands as a cornerstone for reliable and orderly data transmission. At the heart of TCP's functionality lies the concept of connection states, a fundamental aspect that governs how data is exchanged between devices over the network. These connection states form the foundation of reliable communication, enabling seamless data delivery even in the face of network challenges.

Introduction to TCP Connection States

TCP, one of the core protocols of the Internet Protocol Suite, operates at the transport layer and provides reliable, connection-oriented communication between devices. The protocol's ability to establish, maintain, and terminate connections is pivotal for applications that demand data integrity and sequencing, such as web browsing, file transfers, and email communication.

TCP connections undergo a series of well-defined states during their lifecycle, ensuring proper synchronization and error handling. These states are essential to maintaining the integrity of the data being transmitted and enabling efficient recovery from various network issues.

The TCP Connection States

  1. Closed: The initial state of a TCP connection. In this state, no connection exists, and data transfer is not possible.

  2. Listen: A passive open state where a server is waiting for incoming connection requests from clients. The server's socket is configured to listen for incoming connection requests.

  3. Syn-Sent: When a client initiates a connection, it enters the SYN-SENT state. In this state, the client sends a TCP segment with the SYN (synchronize) flag set to the server to request connection establishment.

  4. Syn-Received: Upon receiving a SYN segment from the client, the server enters the SYN-RECEIVED state. The server acknowledges the client's SYN and sends its own SYN segment back to the client.

  5. Established: Once the client receives the SYN-ACK (synchronize-acknowledge) segment from the server, both devices move to the ESTABLISHED state. In this state, data transfer occurs in both directions.

  6. Fin-Wait-1: When one party (either client or server) decides to terminate the connection, it enters the FIN-WAIT-1 state. This state indicates that the party has sent a FIN (finish) segment to signal the intent to close the connection.

  7. Fin-Wait-2: The FIN-WAIT-2 state occurs when the party that initiated the connection termination receives an acknowledgment for its FIN segment.

  8. Close-Wait: If the receiving party decides to close the connection, it enters the CLOSE-WAIT state. In this state, the device is awaiting a signal from the application layer to send a FIN segment and initiate connection termination.

  9. Last-Ack: After sending a FIN segment to the other party, the device enters the LAST-ACK state. It awaits an acknowledgment for the sent FIN before proceeding.

  10. Time-Wait: The TIME-WAIT state occurs after the device has sent an acknowledgment for the other party's FIN segment. This state ensures that any delayed segments are properly handled before the connection is fully terminated.

  11. Closed: Finally, when both parties have completed the connection termination process and acknowledged each other's FIN segments, the connection transitions back to the CLOSED state, ready to be reestablished if necessary.
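The client-side lifecycle above can be sketched as a tiny transition table. This is purely illustrative (the real state machine lives in the kernel's TCP stack), but it makes the ordering of the states concrete:

```python
# Toy walk through the client-side TCP state sequence described above.
# Receiving the peer's FIN in FIN-WAIT-2 implicitly sends the final ACK,
# which is why that transition lands in TIME-WAIT.
transitions = {
    ("CLOSED",      "send SYN"):      "SYN-SENT",
    ("SYN-SENT",    "recv SYN-ACK"):  "ESTABLISHED",
    ("ESTABLISHED", "send FIN"):      "FIN-WAIT-1",
    ("FIN-WAIT-1",  "recv ACK"):      "FIN-WAIT-2",
    ("FIN-WAIT-2",  "recv FIN"):      "TIME-WAIT",
    ("TIME-WAIT",   "2*MSL timeout"): "CLOSED",
}

state = "CLOSED"
for event in ["send SYN", "recv SYN-ACK", "send FIN",
              "recv ACK", "recv FIN", "2*MSL timeout"]:
    state = transitions[(state, event)]
    print(f"{event:15s} -> {state}")
```

The server side mirrors this with LISTEN, SYN-RECEIVED, CLOSE-WAIT, and LAST-ACK in place of the client-only states.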

Significance of Connection States

TCP connection states play a pivotal role in enabling reliable data transmission and recovery from potential network anomalies. By defining a structured sequence of states, TCP ensures that data is transmitted in a controlled manner, with acknowledgment mechanisms that guarantee delivery. Additionally, the states enable graceful connection termination, preventing data loss and ensuring that resources are properly released.

Conclusion

TCP connection states lie at the heart of reliable network communication. These states define the sequence of steps a connection undergoes, from establishment to termination, ensuring data integrity and orderly transmission. Understanding these states is essential for network administrators, developers, and anyone working with networking technologies. With a firm grasp of TCP connection states, professionals can troubleshoot issues, optimize communication, and build robust applications that leverage the power of reliable data transmission over the Internet.

Wednesday, August 30, 2023

MBR to GPT Disk conversion on Linux

Ever wondered how to convert your old MBR-based instances to new GPT ones? This will give you larger boot volumes and lots of other benefits.


This is a step-by-step procedure you can follow, but please note it is not a copy-and-paste procedure: make sure the device names, units, and other variables suit your environment. Once you have converted the partition table, you should create a new AMI that has GPT by default, so you don't have to repeat the procedure for each instance.
For the test, you can launch two instances (e.g., UBUNTU and TEMP). The TEMP instance will be used to attach the root volume from the UBUNTU (original) instance.

Please don't use the same AMI for both instances (to avoid complications later with attaching two root volumes with the same UUID). You can use, for example, a stock Ubuntu AMI for TEMP and your own AMI for the UBUNTU instance.

When the instances are fully launched, stop both instances and detach/attach the root volume from UBUNTU instances to TEMP as /dev/sdf (secondary volume).

Now start the TEMP instance, ssh to it, and check that the disks are present using the "lsblk" tool.

ubuntu@ip-172-31-45-175:~$ sudo su -
root@ip-172-31-45-175:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 3.9T 0 disk
└─xvdf1 202:81 0 2T 0 part

We can see that the secondary ~4 TiB disk (/dev/xvdf) is present, which is what we expected.

In order to convert /dev/xvdf from MBR to GPT, you have to create a new partition table with "parted" or another GPT-aware tool. I will use parted in this example; it should be installed on Ubuntu by default.

Note that you will also have to create a bios_grub partition, which is necessary on GPT disks; this partition holds the rest of the GRUB boot data (basically GRUB stage 1.5) that can't fit into stage 1.

Enter parted using "parted /dev/xvdf" command and type "print" to check the current partition table. You will notice that the type is "Partition Table: msdos" and we need to change this.

$ sudo parted /dev/xvdf

(parted) unit s
(parted) print
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 8388608000s
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number Start End Size Type File system Flags
1 16065s 4294967294s 4294951230s primary ext4 boot

Please write down the "Start" sector numbers for the existing partition. We will need this to recreate the GPT partition later.

First step is to create new GPT partition table using "mklabel" command.

(parted) mklabel
New disk label type? Gpt
Warning: The existing disk label on /dev/xvdf will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes
(parted) print
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 8388608000s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags

Now we should recreate the root partition using the "Start" sector value we wrote down in the previous step. This time we will use 100% of the free space, so the partition will be 4 TiB in size on the next boot.

(parted) mkpart
Partition name? []?
File system type? [ext2]? ext4
Start? 16065s
End? 100%
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? Ignore
(parted) print
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 8388608000s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 16065s 8388607966s 8388591902s ext4

(parted) quit
Information: You may need to update /etc/fstab.

At this moment we have the root partition recreated, but we are still missing the bios_grub partition that is necessary for GRUB to work with the GPT partitioning scheme.

(parted) unit s
(parted) mkpart
Partition name? []?
File system type? [ext2]?
Start? 2048
End? 6144
(parted) print
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 8388608000s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
2 2048s 6144s 4097s
1 16065s 8388607966s 8388591902s ext4

(parted) quit
Information: You may need to update /etc/fstab.

At this point we have both partitions created and are ready to proceed.

The next step is to set the proper partition type using the "parted set" command. After that we will install the GRUB bootloader, since it is no longer present after we recreated the partitioning scheme.

$ sudo parted /dev/xvdf set 2 bios_grub on
Information: You may need to update /etc/fstab.

In order to install grub we will need to mount (temporarily) the root volume to /mnt mount point.

$ sudo mount /dev/xvdf1 /mnt/
$ sudo grub-install --boot-directory=/mnt/boot /dev/xvdf

$ sudo umount /mnt

At this point (if you don't have any errors) the process should be completed.

Next step would be to stop the instance, detach the secondary volume and attach it back to the UBUNTU instance as /dev/sda1 (root volume).

The new instance should start with the root volume of 4TiB.

How to analyse FIO output

Storage performance testing plays a vital role in understanding the capabilities of your storage system. When using tools like Fio (Flexible I/O Tester) to evaluate performance, analyzing the generated output is crucial. This guide will walk you through the process of analyzing Fio output to gain insights into your storage system's performance.

Step 1: Run Fio Test: Begin by running the Fio test using your desired configuration file. For instance, if your configuration file is named random-read.fio, execute the following command:

fio random-read.fio

 

Step 2: Understand the Output: Fio provides both human-readable and machine-readable output. The human-readable output is displayed in the terminal during testing; machine-readable JSON output can be requested with --output-format=json and saved to a file. The JSON output is well suited for detailed analysis.

Step 3: Parsing JSON Output: To extract specific metrics from the JSON output, tools like jq can be helpful. For instance, to retrieve the mean IOPS of read operations, use:

fio random-read.fio --output-format=json --output=result.json
jq '.jobs[0].read.iops_mean' result.json

 

Step 4: Metrics to Analyze: Key metrics to analyze include:

  • iops: Input/Output Operations Per Second.
  • bw: Bandwidth in bytes per second.
  • lat: Latency metrics, such as lat_ns for nanosecond latency.
  • slat, clat, and latency percentiles: These provide insights into different latency components.
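If jq is not available, the same extraction is easy in Python. The snippet below parses an abbreviated, hypothetical slice of fio's JSON output (real files carry far more fields, and the field names assume a reasonably recent fio):

```python
import json

# Abbreviated, made-up sample of fio's --output-format=json result.
sample = '''
{
  "jobs": [{
    "jobname": "random-read",
    "read": {
      "iops_mean": 51234.5,
      "bw": 204938,
      "lat_ns": {"mean": 78012.3}
    }
  }]
}
'''

job = json.loads(sample)["jobs"][0]["read"]
print(f"IOPS (mean):  {job['iops_mean']:.0f}")
print(f"Bandwidth:    {job['bw'] / 1024:.1f} MiB/s")      # fio reports bw in KiB/s
print(f"Mean latency: {job['lat_ns']['mean'] / 1000:.1f} us")
```

The same pattern extends to slat/clat and the percentile tables: load the JSON once, then index into whichever metric you need.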

 

Step 5: Graphing and Visualization: Visualizing metrics over time using tools like Excel or Gnuplot can reveal performance trends. This helps identify potential bottlenecks or improvements in your storage system.

 

Step 6: Comparing Tests: When conducting multiple tests with varying configurations, comparing their outputs can highlight performance differences. This aids in pinpointing optimization opportunities.

 

Step 7: Experimentation and Iteration: Fio offers numerous configuration options. Experiment with different settings to understand how your storage system behaves under various workloads.

In conclusion, effectively analyzing Fio output involves running tests, extracting relevant metrics, visualizing data, and making informed decisions based on your storage system's behavior. By following these steps, you can unlock valuable insights into your storage system's performance and make informed decisions about optimizations and configurations.