
Wednesday, August 30, 2023

MBR to GPT Disk conversion on Linux

Ever wondered how to convert your old MBR-based instances to new GPT-based ones? Doing so lets you use larger boot volumes and brings a number of other benefits.


This is the step-by-step procedure you can follow. Note that this is not a copy-and-paste procedure: make sure the device names, units, and other variables match your environment. Once you have converted the partition table, you should create a new AMI that uses GPT by default so you don't have to repeat the procedure for each instance.
1) For testing, you can launch two instances (e.g. UBUNTU and TEMP). The TEMP instance will be used to attach the root volume from the UBUNTU (original) instance.

Please don't launch both instances from the same AMI (to avoid complications later when attaching two root volumes with the same UUID). For example, use a stock Ubuntu AMI for the TEMP instance and your own AMI for the UBUNTU instance.

When the instances are fully launched, stop both instances, then detach the root volume from the UBUNTU instance and attach it to TEMP as /dev/sdf (secondary volume).

Now start the TEMP instance, ssh into it, and check that the disks are present using the "lsblk" tool.

ubuntu@ip-172-31-45-175:~$ sudo su -
root@ip-172-31-45-175:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 3.9T 0 disk
└─xvdf1 202:81 0 2T 0 part

We can see that the secondary 4 TiB disk (/dev/xvdf) is present, with its existing 2 TiB MBR partition, which is what we expected.
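A quick arithmetic check shows why the existing partition is capped at 2T while the disk itself is nearly 4T: MBR stores partition sizes in 32-bit sector fields, so with 512-byte sectors a partition cannot address more than 2 TiB.

```shell
# MBR stores partition sector counts in 32-bit fields, so with
# 512-byte sectors the largest addressable partition is 2 TiB.
SECTOR_SIZE=512
MBR_MAX_SECTORS=$(( 2**32 ))
echo "MBR partition limit: $(( MBR_MAX_SECTORS * SECTOR_SIZE / 1024**4 )) TiB"
# The disk above has 8388608000 sectors, i.e. well past that limit:
echo "Disk size: $(( 8388608000 * SECTOR_SIZE / 1024**3 )) GiB"
```

This is exactly why GPT (which uses 64-bit sector addressing) is needed for the full 4 TiB to be usable.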

In order to convert /dev/xvdf from MBR to GPT you have to create a new partition table with "parted" or another GPT-aware tool. I will use parted in this example; it is installed on Ubuntu by default.

Note that you will also have to create a bios_grub partition, which is required on GPT disks. This partition holds the rest of the GRUB boot data (basically GRUB stage 1.5) that can't fit into stage 1.

Enter parted using the "parted /dev/xvdf" command and type "print" to check the current partition table. You will notice the type is "Partition Table: msdos", which is what we need to change.

$ sudo parted /dev/xvdf

(parted) unit s
(parted) print
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 8388608000s
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number Start End Size Type File system Flags
1 16065s 4294967294s 4294951230s primary ext4 boot

Please write down the "Start" sector numbers for the existing partition. We will need this to recreate the GPT partition later.

First step is to create new GPT partition table using "mklabel" command.

(parted) mklabel
New disk label type? gpt
Warning: The existing disk label on /dev/xvdf will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes
(parted) print
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 8388608000s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags

Now we recreate the root partition using the "Start" sector value we noted in the previous step. This time we will use 100% of the free space, so the partition will be 4 TiB in size on the next boot.

(parted) mkpart
Partition name? []?
File system type? [ext2]? ext4
Start? 16065s
End? 100%
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? Ignore
(parted) print
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 8388608000s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 16065s 8388607966s 8388591902s ext4

(parted) quit
Information: You may need to update /etc/fstab.

At this moment we have the root partition recreated, but we are still missing the bios_grub partition that is necessary for GRUB to work with the GPT partitioning scheme. Re-enter parted to create it:

$ sudo parted /dev/xvdf
(parted) unit s
(parted) mkpart
Partition name? []?
File system type? [ext2]?
Start? 2048
End? 6144
(parted) print
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 8388608000s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
2 2048s 6144s 4097s
1 16065s 8388607966s 8388591902s ext4

(parted) quit
Information: You may need to update /etc/fstab.

At this point we have both partitions created and are ready to proceed.

The next step is to set the proper partition type using the "parted set" command. After that we will reinstall the GRUB bootloader, since it is no longer present after we recreated the partition scheme.

$ sudo parted /dev/xvdf set 2 bios_grub on
Information: You may need to update /etc/fstab.

In order to install grub we will need to mount (temporarily) the root volume to /mnt mount point.

$ sudo mount /dev/xvdf1 /mnt/
$ sudo grub-install --boot-directory=/mnt/boot /dev/xvdf

$ sudo umount /mnt

At this point (if you didn't hit any errors) the process should be complete.

The next step is to stop the TEMP instance, detach the secondary volume, and attach it back to the UBUNTU instance as /dev/sda1 (root volume).

The UBUNTU instance should now start with a 4 TiB root volume. Note that while the partition is now 4 TiB, the ext4 filesystem may still be its old size; if it isn't grown automatically on boot (cloud-init's growpart/resize normally handles this), you can grow it manually with "sudo resize2fs /dev/xvda1".
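For reference, the whole interactive parted session above can also be scripted with parted's -s flag. The sketch below runs against a throwaway disk image so it is safe to experiment with; substitute your real device and start sector. The -a none option is only there to suppress the alignment prompt for the historical 16065s start sector.

```shell
# Build a scratch MBR image, then redo the conversion non-interactively.
truncate -s 100M disk.img
parted -s -a none disk.img mklabel msdos
parted -s -a none disk.img mkpart primary ext4 16065s 100%

# The conversion: a new GPT label, the root partition recreated at the
# same start sector, plus the small bios_grub partition GRUB needs on GPT.
parted -s -a none disk.img mklabel gpt
parted -s -a none disk.img mkpart primary ext4 16065s 100%
parted -s -a none disk.img mkpart primary 2048s 6143s
parted -s -a none disk.img set 2 bios_grub on
parted -s disk.img print
```

On the real device you would still run grub-install afterwards, exactly as in the procedure above.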

Create Desktop Environment in Suse Linux on AWS

Having a desktop environment on a cloud instance is helpful in many ways. You can troubleshoot application connectivity, capture proper HAR files, and so on. And having a desktop is just cool!

Here is how you can install GNOME on any SUSE Linux instance in any cloud environment. Remember, once you install GNOME (or KDE, or any desktop environment for that matter), you need to use VNC to connect to it.

The same steps can be used in any cloud environment, such as Oracle Cloud (OCI), AWS, Azure, GCP, and so on.

 

Requirements:

- SSH client that allows X11 forwarding
- TightVNC Server and Client

Here are the steps I took to install the GNOME desktop:

1. SSH into the instance as the root user
2. Type 'yast2' to open the YaST2 Control Center
3. Select "Software" in the left sidebar, select "Online Update" in the right sidebar, and then hit the Enter key. This step updates the system's repositories
4. Select "Software" in the left sidebar, select "Software Management" in the right sidebar, and then hit the Enter key
5. In the "Search Phrase" textbox in the Filter Search section, type "gnome", and then hit the Enter key
6. Install everything listed in the right sidebar. If an error page about "Package Dependencies" pops up, select the first option under "possible solutions", and then click "OK -- Try Again"
7. Select "Accept" at the bottom right of the page and hit the Enter key. It will install all the packages you selected.
8. After installing the packages, press the "F9" key twice to exit the YaST2 Control Center

Here are the steps to install and configure VNCServer:

1. Open TCP port 5901 in the security group the instance belongs to.
2. On the instance, type "zypper install vnc"
3. After installing the VNC server, type "vncpasswd" to set the access password
4. Type "vncserver :1" to start a VNC session
5. sudo vim /root/.vnc/xstartup
6. Comment out "twm &" by typing # in front of it, and then add "/usr/bin/gnome-session &" on the next line
7. Save and exit the xstartup file
8. Type "vncserver -kill :1"
9. Type "vncserver :1" to start a new session that loads the modified xstartup file
10. On your local host, download and install TightVNC: http://www.tightvnc.com/download.php
11. Open "TightVNC Viewer"
12. For the Remote Host, type the DNS name of your instance, and then add "::5901" at the end
13. Click "Connect"
14. Type the password you set with vncpasswd
15. Now you can access your instance via the VNC connection
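After the edits in steps 5-7, the xstartup file ends up looking roughly like this (a sketch; the GNOME launcher is shown as /usr/bin/gnome-session here, so adjust the path to whatever launcher your SUSE release provides):

```shell
#!/bin/sh
# /root/.vnc/xstartup -- run by vncserver when the session starts
xrdb "$HOME/.Xresources"
xsetroot -solid grey
# twm &                      # default window manager, commented out
/usr/bin/gnome-session &     # start the GNOME desktop instead
```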
 
Hope this helps.

 

How to do faster copy and delete operations on EFS file systems

 

Issue: How to do faster copy and delete operations on EFS file systems.


Environment:

Amazon Linux
Ubuntu Server
Amazon EFS


Solution:


To optimize copy and delete operations on EFS file systems, you can use the GNU Parallel shell tool to execute jobs in parallel. By doing this you will be able to complete these tasks faster than with the normal serial method.


1.a. Install the NFS utilities and the GNU parallel package on Amazon Linux.

[ec2-user ~]$ sudo yum install -y nfs-utils
[ec2-user ~]$ sudo yum install -y parallel
 

1.b. Install the NFS utilities and the GNU parallel package on Ubuntu Server.
[ubuntu ~]$ sudo apt-get install nfs-common parallel -y


1.c. Install from source:

[ec2-user ~]$ cd /tmp; wget http://ftp.gnu.org/gnu/parallel/parallel-latest.tar.bz2
[ec2-user ~]$ tar -xvf parallel-latest.tar.bz2; cd parallel-*/
[ec2-user ~]$ sudo yum groupinstall 'Development Tools' -y
[ec2-user ~]$ ./configure && make && sudo make install


2. Create a temporary directory and mount the EFS filesystem.
[ec2-user ~]$ sudo mkdir /mnt/efs; sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-XXXXXXXX.efs.REGION.amazonaws.com:/ /mnt/efs


3. Create ten thousand small files locally on your instance.
[ec2-user ~]$ mkdir /tmp/efs; for each in $(seq 1 10000); do SUFFIX=$(mktemp -u _XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX); sudo dd if=/dev/zero of=/tmp/efs/${SUFFIX} bs=64k count=1; done


4. Copy the files from your instance to the EFS file system using the parallel method.
[ec2-user ~]$ cd /tmp/efs; find -L . -maxdepth 1 -type f | sudo parallel rsync -avR {} /mnt/efs/


5. Delete the files from the EFS file system using the parallel method.
[ec2-user ~]$ cd /mnt/efs; find -L . -maxdepth 1 -type f | sudo parallel rm -rfv {}

 

Test:
The following output is from my tests, using an EFS file system (General Purpose) mounted on a t2.micro instance type:


1. Copy ten thousand files from my EC2 instance to my EFS file system, using the normal serial method.
[ec2-user ~]$ cd /tmp/efs; time sudo find -L . -maxdepth 1 -type f -exec rsync -avR '{}' /mnt/efs/ \;

real 20m8.947s
user 0m0.060s
sys 0m0.980s

2. Copy ten thousand files from my EC2 instance to my EFS file system, using the parallel method.
[ec2-user ~]$ cd /tmp/efs; time find -L . -maxdepth 1 -type f | sudo parallel rsync -avR {} /mnt/efs/
real 5m34.264s
user 0m8.308s
sys 0m6.904s


3. Delete ten thousand files from my EFS file system, using the normal serial method.
[ec2-user ~]$ cd /mnt/efs; time sudo find -L . -maxdepth 1 -type f -exec rm -rfv {} \;
real 2m24.799s
user 0m0.124s
sys 0m1.240s


4. Delete ten thousand files from my EFS file system, using the parallel method.
[ec2-user ~]$ cd /mnt/efs; find -L . -maxdepth 1 -type f | sudo parallel rm -rfv {}
real 1m55.153s
user 0m7.988s
sys 0m6.972s

Recursive copy:
To add to this article (as many of us use it as a sample for our customers), note that the examples above copy files from /SRC only, not recursively into /SRC. If you need a recursive copy, you have two options with rsync:

Loop:
find /SRC/ -type d | while read -r c; do cd "$c"; find -L . -maxdepth 1 -type f | parallel rsync -avR {} /DST; done

While the above lets you copy in parallel and recurse from /SRC/ to /DST, it also introduces a performance penalty: the loop has to descend into each folder in turn and fire off parallel copies of only that folder's contents.

List Creation:

Create List
rsync -avR --dry-run /SRC /DST > list.log
 

Run the command:
cat list.log | parallel --will-cite -j 100 rsync -avR {} /DST/


The above is a much simpler approach. What it basically does is create a list of all files/folders recursively under /SRC and fire 100 parallel copies, reading the paths of the files to copy from the list. This makes the copies much more efficient, as there is less overhead.


Reference: http://www.gnu.org/software/parallel/man.html#EXAMPLE:-Parallelizing-rsync

Block Volume Performance calculation

In the realm of modern computing, where data storage and retrieval speed are paramount, understanding the performance of storage solutions is crucial. One of the fundamental components of this landscape is Linux block volume performance calculation. Whether you're a system administrator, a developer, or an enthusiast, delving into the intricacies of block volume performance, including Fio-based tests, can empower you to make informed decisions about storage setups. In this blog post, we'll demystify the concepts behind Linux block volume performance calculation and explore the key factors that influence it, along with practical Fio-based tests.
 

Understanding Block Volumes:
Block volumes are a type of storage solution commonly used in modern IT infrastructures. They provide raw storage space that can be partitioned and formatted according to the user's needs. These volumes are often found in virtual machines, cloud instances, and even physical servers. They are characterized by their ability to handle data at the block level, meaning data is read from and written to storage in fixed-size blocks.
 

Factors Influencing Block Volume Performance:
Several factors play a pivotal role in determining the performance of Linux block volumes. Understanding these factors helps optimize storage systems for better efficiency and responsiveness.

1. I/O Operations Per Second (IOPS): IOPS refers to the number of input/output operations a storage device can handle in a second. It is a key metric in assessing storage responsiveness. The higher the IOPS, the faster the storage system can read from or write to the block volume.

2. Throughput: Throughput measures the amount of data that can be transferred between the storage device and the system in a given period. It's usually measured in megabytes or gigabytes per second. Throughput is a crucial metric when dealing with large data transfers.

3. Latency: Latency is the delay between initiating a data request and receiving the first byte of data. Lower latency indicates a more responsive storage system. Excessive latency can lead to delays in data-intensive operations.

4. Queue Depth: Queue depth refers to the number of I/O requests that can be in the queue to the storage device at a given time. A higher queue depth can lead to improved performance, especially in scenarios with concurrent I/O operations.


Calculating Block Volume Performance:
While calculating precise block volume performance can be intricate, here's a simplified approach:

1. IOPS Calculation: Determine the total IOPS required by considering the application's read and write demands. Divide this total by the number of block volumes to distribute the load. It's important to consider peak I/O requirements.

2. Throughput Calculation: Calculate the required throughput by estimating the data transfer needs of the application. Divide this by the number of block volumes for load distribution.

3. Latency Estimation: Latency is affected by various factors, including the speed of the storage media and the efficiency of the underlying technology. Faster media and optimized configurations lead to lower latency.
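The factors above map directly onto fio job parameters: block size and access pattern select IOPS versus throughput testing, and iodepth sets the queue depth. Here is a sketch of two fio invocations (fio must be installed; the scratch file /tmp/fio.test, sizes, and runtimes are illustrative and worth tuning to your volume):

```shell
# Random 4 KiB reads at queue depth 32: reports IOPS and latency.
fio --name=randread --filename=/tmp/fio.test --size=256M \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
    --runtime=10 --time_based --group_reporting

# Sequential 1 MiB reads at queue depth 8: reports throughput (bandwidth).
fio --name=seqread --filename=/tmp/fio.test --size=256M \
    --rw=read --bs=1M --iodepth=8 --ioengine=libaio \
    --runtime=10 --time_based --group_reporting
```

On a real test device, add --direct=1 so the results bypass the page cache; as written this runs against a scratch file and mostly exercises cached I/O.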

Thursday, April 4, 2019

Automatically encrypt Ephemeral volumes on AWS EC2 instances

Encrypting data at rest is a mandatory requirement of compliance regulations such as PCI DSS and HIPAA. EBS and S3 have options to encrypt the data stored in them. However, instance store volumes, which provide temporary block-level storage, do not have an option to encrypt the data stored on them. Customers need to configure encryption themselves using tools like dm-crypt.

Let's see how to automate encrypting the ephemeral instance store volumes on EC2 instances. I have written a user-data script that takes care of all the encryption setup.

The script will do the below to make it seamless for the customer:

- When you launch an instance or change the instance type, the script automatically detects the ephemeral disks available to the instance type, sets up encryption, and makes them available.

- Automatically installs the required packages for encryption if they are not installed.

- The script also takes care of encryption in case of instance reboot or stop/start.

- The user-data script automatically takes care of stop/start, reboots, and instance resizes.

- After a stop/start you will lose all the data and configuration on the ephemeral volumes. This is expected; the user-data script automatically detects this, sets up encryption on the new disks, and mounts them.

- When the instance is rebooted, the data persists. On a reboot we just need to decrypt the filesystem and mount it.

- The user-data script will encrypt the volume and create a filesystem if it's not already encrypted (these actions are taken when you launch a fresh instance with this user-data, or when you stop & start your instance).

Now, if the volumes are already encrypted (as happens on an instance reboot), the script will simply initialize the encrypted volume and mount it on the respective directory.

- The script will encrypt all the ephemeral volumes, create EXT4 filesystems, and mount them under /encrypted_X (X being the last letter of the device name, or the device number for NVMe devices).
You may change the filesystem type and mount point as per your requirements, but you will then need to update the script's logic that maps volumes to mount points.

- The encryption key/passphrase is encrypted using AWS KMS and kept in a file in your S3 bucket. This is a one-time setup, and you can reuse the same encrypted passphrase file for future installations. The instance needs an IAM instance profile with permission to download files from the S3 bucket.
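The /encrypted_X naming can be seen in isolation; this snippet just reproduces the substring logic the scripts below use for both device families:

```shell
# For /dev/xvdX devices the scripts use the last character of the name;
# for /dev/nvmeXn1 devices they use the character at offset 9, which is
# the NVMe device number.
DEV=/dev/xvdb
echo "mount point: /encrypted_${DEV: -1}"
NVME=/dev/nvme0n1
echo "mount point: /encrypted_${NVME:9:1}"
```

This prints /encrypted_b and /encrypted_0 for the two example devices.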

Here are the step by step instructions:
1. Create an S3 Bucket for storing encrypted password file (or use an existing bucket in your account).

2. Create an IAM policy with the policy document below; it will be used in the instance profile to allow the instance to download the password file from the S3 bucket:

Sign in to the AWS Management Console and navigate to the IAM console: https://console.aws.amazon.com/iam/home?region=us-east-1#/home
In the navigation pane, choose Policies, choose Create Policy, select Create Your Own Policy, name and describe the policy, and paste the following policy. Choose Create Policy.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1478729875000",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::
/LuksInternalStorageKey"
            ]
        }
    ]
}

 


---> Replace BUCKET_NAME with the bucket you created/selected in step 1.



3. Create an EC2 role with the above policy.

In the IAM console, choose Roles, and then choose Create New Role.
In Step 1: Role Name, type your role name, and choose Next Step.
In Step 2: Select Role Type, choose Amazon EC2 and choose Next Step.
In Step 3: Establish Trust, choose Next Step.
In Step 4: Attach Policy, choose the policy you created in step 2.

4. Create a KMS Key and add usage permission for the role you created above from the KMS console: https://console.aws.amazon.com/iam/home?region=us-east-1#/encryptionKeys/us-east-1
Step 1: Click ‘Create key’.
Step 2: Provide an Alias and Description.
Step 3: Add IAM role you created above as Key user under - ‘Define Key Usage Permissions’.

5. Create a secret password, encrypt it with KMS and store it in S3.

# aws --region us-east-1 kms encrypt --key-id 'alias/EncFSForEC2InternalStorageKey' --plaintext "ThisIs-a-SecretPassword" --query CiphertextBlob --output text | base64 --decode > LuksInternalStorageKey

--> Replace 'alias/EncFSForEC2InternalStorageKey' with the alias or ID of the KMS key you created above. The IAM user running this command must also have permission to use the KMS key.
--> You should also replace the string "ThisIs-a-SecretPassword" with a strong passphrase.

6. Upload the encrypted key file to S3.

# aws s3 cp LuksInternalStorageKey s3://BUCKET_NAME/LuksInternalStorageKey

7. Add the below userdata script to your instance at launch time. You can add/modify the user-data on a stopped instance as well.

I3 and F1 instance types provide NVMe-based instance store volumes; those disks are detected as /dev/nvmeXn1, X being the NVMe device number. Other instance types provide HDD/SSD-based instance store volumes, which are detected as /dev/sdX. The user-data script detects the device names from the EC2 metadata in the case of HDD/SSD, and using the nvme tool in the case of NVMe-based disks. I have written separate user-data scripts for these volume types.

Use - user-data_NVMe_InstanceStore.txt for I3, F1 instances with NVMe disks.
Use - user-data_InstanceStore.txt for other instance types.

8. Once the instance is launched, you need to update the cloud-init configuration to make sure the user data runs every time the instance boots.

On Amazon Linux AMI based instances:
# sed -i 's/scripts-user/[scripts-user, always]/' /etc/cloud/cloud.cfg.d/00_defaults.cfg

On RHEL/CentOS instances:
# sed -i 's/scripts-user/[scripts-user, always]/' /etc/cloud/cloud.cfg


9. On Amazon Linux AMI instances, you might need to disable the crypt dracut module at boot time to make sure the encrypted filesystems are not detected in the early boot stages:
# sed -i 's/#omit_dracutmodules+=""/omit_dracutmodules+="crypt"/' /etc/dracut.conf
# dracut --force

 


User Data Scripts:

-- user-data_InstanceStore.txt
#!/bin/bash

REGION=          # e.g. us-east-1
S3_Bucket=       # S3 bucket holding the encrypted passphrase file

# Install required packages if not installed
[ "$(which unzip)" ] || yum -y install unzip
[ "$(which aws)" ] || { /usr/bin/curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"; /usr/bin/unzip -o awscli-bundle.zip; ./awscli-bundle/install -b /usr/bin/aws; rm -rf ./awscli-bundle*; }
[ "$(which cryptsetup)" ] || yum install -y cryptsetup

# Get the encrypted password file from s3
/usr/bin/aws s3 cp s3://${S3_Bucket}/LuksInternalStorageKey .
# Decrypt and store the passphrase in a variable
LuksClearTextKey=$(/usr/bin/aws --region ${REGION} kms decrypt --ciphertext-blob fileb://LuksInternalStorageKey --output text --query Plaintext | base64 --decode)

for ephemeral in $(curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ |grep ephemeral)
do
DEV=$(curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/$ephemeral |sed 's/s/\/dev\/xv/g')
DEV_1=`echo "${DEV: -1}"`

[ "$(/bin/mount | grep -i ${DEV})" ] && /bin/umount ${DEV}

TYPE=$(/usr/bin/file -sL ${DEV} | awk '{print $2}')

if [ "$TYPE" == "LUKS" ]
then
    # Open and initialize the encrypted volume
    /bin/echo "$LuksClearTextKey" | /sbin/cryptsetup luksOpen ${DEV} encfs_${DEV_1}
    # Create the mount point if it does not exist
    [ -d /encrypted_${DEV_1} ] || /bin/mkdir /encrypted_${DEV_1}
    # Mount the filesystem
    /bin/mount /dev/mapper/encfs_${DEV_1} /encrypted_${DEV_1}
else
    # Encrypt the volume
    /bin/echo "$LuksClearTextKey" | cryptsetup -y luksFormat ${DEV}
    # Open and initialize the encrypted volume
    /bin/echo "$LuksClearTextKey" | cryptsetup luksOpen ${DEV} encfs_${DEV_1}
    # Create a filesystem on the encrypted volume and mount it on the required directory
    /sbin/mkfs.ext4 /dev/mapper/encfs_${DEV_1}
    [ -d /encrypted_${DEV_1} ] || /bin/mkdir /encrypted_${DEV_1}
    /bin/mount /dev/mapper/encfs_${DEV_1} /encrypted_${DEV_1}
fi
done
# Unset the passphrase variable and remove the encrypted password file.
unset LuksClearTextKey
rm LuksInternalStorageKey
-- user-data_NVMe_InstanceStore.txt
#!/bin/bash

#set -x

REGION=          # e.g. us-east-1
S3_Bucket=       # S3 bucket holding the encrypted passphrase file

# Install required packages if not installed
[ "$(which unzip)" ] || yum install -y zip unzip
[ "$(which python)" ] || yum install -y python
[ "$(which aws)" ] || { /usr/bin/curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"; /usr/bin/unzip -o awscli-bundle.zip; ./awscli-bundle/install -b /usr/bin/aws; rm -rf ./awscli-bundle*; }
[ "$(which cryptsetup)" ] || yum install -y cryptsetup
[ "$(which nvme)" ] || yum install -y nvme-cli

# Get the encrypted password file from s3
/usr/bin/aws s3 cp s3://${S3_Bucket}/LuksInternalStorageKey .
# Decrypt and store the passphrase in a variable
LuksClearTextKey=$(/usr/bin/aws --region ${REGION} kms decrypt --ciphertext-blob fileb://LuksInternalStorageKey --output text --query Plaintext | base64 --decode)

for ephemeral in $(nvme list | grep dev | awk '{print $1}')
do
DEV_1=$(echo "${ephemeral:9:1}")

[ "$(/bin/mount | grep -i ${ephemeral})" ] && /bin/umount ${ephemeral}

TYPE=$(/usr/bin/file -sL ${ephemeral} | awk '{print $2}')

if [ "$TYPE" == "LUKS" ]
then
    # Open and initialize the encrypted volume
    /bin/echo "$LuksClearTextKey" | /sbin/cryptsetup luksOpen ${ephemeral} encfs_${DEV_1}
    # Create the mount point if it does not exist
    [ -d /encrypted_${DEV_1} ] || /bin/mkdir /encrypted_${DEV_1}
    # Mount the filesystem
    /bin/mount /dev/mapper/encfs_${DEV_1} /encrypted_${DEV_1}
else
    # Encrypt the volume
    /bin/echo "$LuksClearTextKey" | cryptsetup -y luksFormat ${ephemeral}
    # Open and initialize the encrypted volume
    /bin/echo "$LuksClearTextKey" | cryptsetup luksOpen ${ephemeral} encfs_${DEV_1}
    # Create a filesystem on the encrypted volume and mount it on the required directory
    /sbin/mkfs.ext4 /dev/mapper/encfs_${DEV_1}
    [ -d /encrypted_${DEV_1} ] || /bin/mkdir /encrypted_${DEV_1}
    /bin/mount /dev/mapper/encfs_${DEV_1} /encrypted_${DEV_1}
fi
done
# Unset the passphrase variable and remove the encrypted password file.
unset LuksClearTextKey
rm LuksInternalStorageKey

Regards,
Jay