Thursday, August 31, 2023

TCP Connection states


Understanding TCP Connection States: A Fundamental Aspect of Network Communication

In the vast landscape of computer networking, the Transmission Control Protocol (TCP) stands as a cornerstone for reliable and orderly data transmission. At the heart of TCP's functionality lies the concept of connection states, a fundamental aspect that governs how data is exchanged between devices over the network. These connection states form the foundation of reliable communication, enabling seamless data delivery even in the face of network challenges.

Introduction to TCP Connection States

TCP, one of the core protocols of the Internet Protocol Suite, operates at the transport layer and provides reliable, connection-oriented communication between devices. The protocol's ability to establish, maintain, and terminate connections is pivotal for applications that demand data integrity and sequencing, such as web browsing, file transfers, and email communication.

TCP connections undergo a series of well-defined states during their lifecycle, ensuring proper synchronization and error handling. These states are essential to maintaining the integrity of the data being transmitted and enabling efficient recovery from various network issues.

The TCP Connection States

  1. Closed: The initial state of a TCP connection. In this state, no connection exists, and data transfer is not possible.

  2. Listen: A passive-open state in which a server's socket is waiting for incoming connection requests from clients.

  3. Syn-Sent: When a client initiates a connection, it enters the SYN-SENT state. In this state, the client sends a TCP segment with the SYN (synchronize) flag set to the server to request connection establishment.

  4. Syn-Received: Upon receiving a SYN segment from the client, the server enters the SYN-RECEIVED state. The server acknowledges the client's SYN and sends its own SYN, usually combined in a single SYN-ACK segment.

  5. Established: Once the client receives the SYN-ACK (synchronize-acknowledge) segment from the server, it replies with an ACK and moves to the ESTABLISHED state; the server follows when that ACK arrives, completing the three-way handshake. In this state, data transfer occurs in both directions.

  6. Fin-Wait-1: When one party (either client or server) decides to terminate the connection, it enters the FIN-WAIT-1 state. This state indicates that the party has sent a FIN (finish) segment to signal the intent to close the connection.

  7. Fin-Wait-2: The FIN-WAIT-2 state occurs when the party that initiated the connection termination receives an acknowledgment for its FIN segment.

  8. Close-Wait: When a party receives a FIN from its peer, it enters the CLOSE-WAIT state. It acknowledges the FIN and waits for its own application layer to close the connection before sending its own FIN.

  9. Last-Ack: After sending a FIN segment to the other party, the device enters the LAST-ACK state. It awaits an acknowledgment for the sent FIN before proceeding.

  10. Time-Wait: The TIME-WAIT state occurs after the device has sent an acknowledgment for the other party's FIN segment. The connection lingers here (typically for twice the maximum segment lifetime, 2*MSL) so that any delayed segments are properly handled before the connection is fully terminated.

  11. Closed: Finally, when both parties have completed the connection termination process and acknowledged each other's FIN segments, the connection transitions back to the CLOSED state, ready to be reestablished if necessary.
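The transitions above can be sketched as a small state machine. This is an illustrative simplification, not a complete model: the event labels (e.g. "recv_syn/send_synack") are my own shorthand for "receive a SYN, send a SYN-ACK", and less common paths such as simultaneous close are omitted.

```python
# A simplified model of the TCP state machine described above.
# Event names are shorthand labels of the form "input/output", not wire formats.
TRANSITIONS = {
    ("CLOSED", "passive_open"): "LISTEN",
    ("CLOSED", "active_open/send_syn"): "SYN-SENT",
    ("LISTEN", "recv_syn/send_synack"): "SYN-RECEIVED",
    ("SYN-SENT", "recv_synack/send_ack"): "ESTABLISHED",
    ("SYN-RECEIVED", "recv_ack"): "ESTABLISHED",
    ("ESTABLISHED", "close/send_fin"): "FIN-WAIT-1",      # active close
    ("ESTABLISHED", "recv_fin/send_ack"): "CLOSE-WAIT",   # passive close
    ("FIN-WAIT-1", "recv_ack"): "FIN-WAIT-2",
    ("FIN-WAIT-2", "recv_fin/send_ack"): "TIME-WAIT",
    ("CLOSE-WAIT", "close/send_fin"): "LAST-ACK",
    ("LAST-ACK", "recv_ack"): "CLOSED",
    ("TIME-WAIT", "timeout_2msl"): "CLOSED",
}

def next_state(state, event):
    """Look up the next state for a (state, event) pair."""
    return TRANSITIONS[(state, event)]

# Walk the active-close side of a connection from start to finish:
state = "CLOSED"
for event in ["active_open/send_syn", "recv_synack/send_ack",
              "close/send_fin", "recv_ack", "recv_fin/send_ack",
              "timeout_2msl"]:
    state = next_state(state, event)
print(state)  # CLOSED
```

Tracing a connection through this table is a useful exercise: the client path runs CLOSED, SYN-SENT, ESTABLISHED, FIN-WAIT-1, FIN-WAIT-2, TIME-WAIT, and back to CLOSED.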

Significance of Connection States

TCP connection states play a pivotal role in enabling reliable data transmission and recovery from potential network anomalies. By defining a structured sequence of states, TCP ensures that data is transmitted in a controlled manner, with acknowledgment mechanisms that guarantee delivery. Additionally, the states enable graceful connection termination, preventing data loss and ensuring that resources are properly released.

Conclusion

TCP connection states lie at the heart of reliable network communication. These states define the sequence of steps a connection undergoes, from establishment to termination, ensuring data integrity and orderly transmission. Understanding these states is essential for network administrators, developers, and anyone working with networking technologies. With a firm grasp of TCP connection states, professionals can troubleshoot issues, optimize communication, and build robust applications that leverage the power of reliable data transmission over the Internet.

Wednesday, August 30, 2023

MBR to GPT Disk conversion on Linux

Ever wondered how to convert your old MBR-based instances to new GPT ones? GPT lets you have larger boot volumes, among other benefits.


This is a step-by-step procedure you can follow, but please note it is not a copy-and-paste procedure: make sure the device names, units, and other variables are suitable for your environment. Once you have converted the partition table, create a new AMI that uses GPT by default so you don't have to repeat the procedure for each instance.
1) For the test you can launch two instances (e.g., UBUNTU and TEMP). The TEMP instance will be used to attach the root volume from the UBUNTU (original) instance.

Please don't use the same Ubuntu AMI for both instances (to avoid complications later with attaching two root volumes that share the same UUID). You can use a stock Ubuntu AMI for TEMP and your own AMI for the UBUNTU instance.

When the instances are fully launched, stop both of them, then detach the root volume from the UBUNTU instance and attach it to TEMP as /dev/sdf (a secondary volume).

Now start the TEMP instance, ssh to it, and check that the disks are present using the "lsblk" tool.

ubuntu@ip-172-31-45-175:~$ sudo su -
root@ip-172-31-45-175:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 3.9T 0 disk
└─xvdf1 202:81 0 2T 0 part

We can see the secondary ~4 TiB disk (/dev/xvdf) with its partition capped at 2 TiB, which is the MBR limit and exactly why we are converting to GPT.

In order to convert /dev/xvdf from MBR to GPT you have to create a new partition table with "parted" or another GPT-aware tool. I will use parted in this example; it is installed on Ubuntu by default.

Note that you will also have to create a small bios_grub partition, which is required for GRUB to boot from GPT disks. It holds the part of the GRUB boot code (the core image, roughly the old stage 1.5) that cannot fit into the boot sector.

Enter parted using "parted /dev/xvdf" command and type "print" to check the current partition table. You will notice that the type is "Partition Table: msdos" and we need to change this.

$ sudo parted /dev/xvdf

(parted) unit s
(parted) print
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 8388608000s
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number Start End Size Type File system Flags
1 16065s 4294967294s 4294951230s primary ext4 boot

Please write down the "Start" sector number of the existing partition. We will need it to recreate the partition on GPT later.

The first step is to create a new GPT partition table using the "mklabel" command.

(parted) mklabel
New disk label type? Gpt
Warning: The existing disk label on /dev/xvdf will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes
(parted) print
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 8388608000s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags

Now we recreate the root partition using the "Start" sector value we wrote down in the previous step. This time we will use 100% of the disk, so the partition grows to nearly 4 TiB. (The ext4 filesystem inside it must also be grown to the new size, e.g. with resize2fs, if that does not happen automatically on boot.)

(parted) mkpart
Partition name? []?
File system type? [ext2]? ext4
Start? 16065s
End? 100%
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? Ignore
(parted) print
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 8388608000s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 16065s 8388607966s 8388591902s ext4

(parted) quit
Information: You may need to update /etc/fstab.
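As a sanity check on the numbers parted prints, everything follows from simple sector arithmetic (512-byte logical sectors, with start and end sectors inclusive). A short Python sketch using the values from the transcript above, which also shows why parted warned about alignment:

```python
SECTOR = 512  # logical sector size reported by parted

# Whole-disk size: 8388608000 sectors * 512 bytes.
disk_sectors = 8388608000
disk_bytes = disk_sectors * SECTOR
print(disk_bytes / 2**40)   # 3.90625 TiB, i.e. the "3.9T" lsblk shows

# The recreated root partition: end - start + 1 sectors.
start, end = 16065, 8388607966
size = end - start + 1
print(size)                 # 8388591902, matching the parted output

# Why parted warned "not properly aligned": the old MBR start sector
# (16065, a legacy CHS boundary) is not a multiple of 2048 sectors
# (a 1 MiB boundary), while the bios_grub start at 2048 is.
print(16065 % 2048 == 0)    # False
print(2048 % 2048 == 0)     # True
```

We keep the misaligned start sector anyway because the goal is to preserve the existing filesystem in place, not to repartition from scratch.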

At this moment we have the root partition recreated, but we are still missing the bios_grub partition that GRUB needs to work with the GPT partition scheme. Enter parted again:

$ sudo parted /dev/xvdf

(parted) unit s
(parted) mkpart
Partition name? []?
File system type? [ext2]?
Start? 2048
End? 6144
(parted) print
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 8388608000s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
2 2048s 6144s 4097s
1 16065s 8388607966s 8388591902s ext4

(parted) quit
Information: You may need to update /etc/fstab.

At this point we have both partitions created and are ready to proceed.

The next step is to set the proper partition type using the "parted set" command. After that we will reinstall the GRUB bootloader, which is no longer present because we recreated the partition scheme.

$ sudo parted /dev/xvdf set 2 bios_grub on
Information: You may need to update /etc/fstab.

In order to install GRUB we need to temporarily mount the root partition at the /mnt mount point.

$ sudo mount /dev/xvdf1 /mnt/
$ sudo grub-install --boot-directory=/mnt/boot /dev/xvdf

$ sudo umount /mnt

At this point (if you haven't hit any errors) the process is complete.

The next step is to stop the TEMP instance, detach the secondary volume, and attach it back to the UBUNTU instance as /dev/sda1 (the root volume).

The UBUNTU instance should now start with a root volume of 4 TiB.

How to analyse FIO output

Storage performance testing plays a vital role in understanding the capabilities of your storage system. When using tools like Fio (Flexible I/O Tester) to evaluate performance, analyzing the generated output is crucial. This guide will walk you through the process of analyzing Fio output to gain insights into your storage system's performance.

Step 1: Run Fio Test: Begin by running the Fio test using your desired configuration file. For instance, if your configuration file is named random-read.fio, execute the following command:

fio random-read.fio


Step 2: Understand the Output: Fio provides both human-readable and machine-readable output. The human-readable output is displayed in the terminal during the test; machine-readable JSON output can be requested with the --output-format=json flag and is well suited for detailed analysis.

Step 3: Parsing JSON Output: To extract specific metrics from the JSON output, tools like jq can be helpful. For instance, to retrieve the mean IOPS of read operations, use:

fio random-read.fio --output-format=json --output=result.json
jq '.jobs[0].read.iops_mean' result.json
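If jq is not available, the same field can be pulled out with a few lines of Python. The nesting below mirrors fio's JSON layout (jobs[0].read.iops_mean); the sample values themselves are made up for illustration, since a real result.json depends on your test run:

```python
import json

# A trimmed, made-up sample in the shape of fio's JSON output.
sample = """
{
  "jobs": [
    {
      "jobname": "random-read",
      "read": {
        "iops_mean": 15432.7,
        "bw": 61730,
        "lat_ns": {"mean": 103215.4}
      }
    }
  ]
}
"""

data = json.loads(sample)        # for a real run: json.load(open("result.json"))
job = data["jobs"][0]
print(job["read"]["iops_mean"])  # 15432.7
print(job["read"]["bw"])         # bandwidth, in KiB/s in fio's JSON
```

The same pattern extends to any other field discussed in Step 4, e.g. job["read"]["lat_ns"]["mean"] for mean read latency in nanoseconds.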


Step 4: Metrics to Analyze: Key metrics to analyze include:

  • iops: Input/Output Operations Per Second.
  • bw: Bandwidth (fio reports this in KiB/s in its JSON output; bw_bytes gives bytes per second).
  • lat: Latency metrics, such as lat_ns for nanosecond latency.
  • slat, clat, and latency percentiles: These provide insights into different latency components.


Step 5: Graphing and Visualization: Visualizing metrics over time using tools like Excel or Gnuplot can reveal performance trends. This helps identify potential bottlenecks or improvements in your storage system.


Step 6: Comparing Tests: When conducting multiple tests with varying configurations, comparing their outputs can highlight performance differences. This aids in pinpointing optimization opportunities.


Step 7: Experimentation and Iteration: Fio offers numerous configuration options. Experiment with different settings to understand how your storage system behaves under various workloads.

In conclusion, effectively analyzing Fio output involves running tests, extracting relevant metrics, visualizing data, and making informed decisions based on your storage system's behavior. By following these steps, you can unlock valuable insights into your storage system's performance and make informed decisions about optimizations and configurations.

Create Desktop Environment in Suse Linux on AWS

Having a desktop environment on a cloud instance is helpful in many ways. You can troubleshoot application connectivity, capture proper HAR files, and so on. Even having a desktop is cool!

Here is how you can install GNOME on any SUSE Linux instance in any cloud environment. Remember, once you install GNOME (or KDE, or any desktop environment for that matter), you need to use VNC to connect to it.

The same steps can be used on any Cloud environments like Oracle Cloud (OCI), AWS, Azure, GCP and so on.


Requirements:

- SSH client that allows X11 forwarding
- TightVNC Server and Client

Here are the steps I took to install the GNOME desktop:

1. ssh into the instance as the root user
2. Type 'yast2' to open the YaST2 Control Center
3. Select "Software" in the left sidebar, select "Online Update" in the right sidebar, and press Enter. This step updates the system's package repositories
4. Select "Software" in the left sidebar, select "Software Management" in the right sidebar, and press Enter
5. In the "Search Phrase" textbox of the Filter Search section, type "gnome", and press Enter
6. Install everything listed in the right sidebar. If an error page about "Package Dependencies" pops up, select the first option under "possible solutions", and then click "OK -- Try Again"
7. Select "Accept" at the bottom right of the page and press Enter. This installs all the packages you selected
8. After the packages are installed, press F9 twice to exit the YaST2 Control Center

Here are the steps to install and configure VNCServer:

1. Open TCP port 5901 in the security group the instance belongs to.
2. In the instance, type "zypper install vnc"
3. After installing the VNC server, type "vncpasswd" to set the access password
4. Type "vncserver :1" to start a VNC session
5. sudo vim /root/.vnc/xstartup
6. Comment out "twm &" by typing # in front of it, then add "/usr/bin/gnome-session &" on the next line
7. Save and exit the xstartup file
8. Type "vncserver -kill :1"
9. Type "vncserver :1" to start a new session that loads the modified xstartup file
10. On your local host, download and install TightVNC: http://www.tightvnc.com/download.php
11. Open "TightVNC Viewer"
12. For the Remote Host, type the public DNS name of the instance and append "::5901" at the end
13. Click "Connect"
14. Type the password you set with vncpasswd
15. Now you can access your instance over the VNC connection
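If the viewer cannot connect, it is worth confirming that port 5901 is actually reachable before digging into the VNC configuration itself. A small helper sketch (the EC2 host name in the comment is a placeholder, not a real instance):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace the host with your instance's public DNS name, e.g.:
#   port_open("ec2-203-0-113-10.compute-1.amazonaws.com", 5901)
print(port_open("127.0.0.1", 5901))  # True only if "vncserver :1" is running locally
```

If this returns False for your instance, check the security group rule from step 1 and that vncserver is running before troubleshooting the desktop session.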
Hope this helps.