Ever wondered how to convert your old MBR-based instances to new GPT ones? This will give you larger boot volumes along with several other benefits.
Wednesday, August 30, 2023
MBR to GPT Disk conversion on Linux
Create Desktop Environment in Suse Linux on AWS
Having a desktop environment on a cloud instance is helpful in many ways: you can troubleshoot application connectivity, capture proper HAR files, and so on. Besides, having a desktop is just cool!
Here is how you can install GNOME on any SUSE Linux instance in any cloud environment. Remember, once you install GNOME (or KDE, or any other desktop environment for that matter), you need to use VNC to connect to it.
The same steps can be used in any cloud environment, such as Oracle Cloud (OCI), AWS, Azure, GCP and so on.
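For reference, here is a minimal sketch of what the install looks like on SUSE, assuming the gnome_basic pattern and the TigerVNC packages are available in your configured repositories (pattern and package names can differ between SLES and openSUSE releases):
# zypper refresh
# zypper install -t pattern gnome_basic
# zypper install tigervnc xorg-x11-Xvnc
$ vncserver :1 -geometry 1280x800
After that, open TCP port 5901 in the instance's security group/security list and point your VNC client at <instance-ip>:1.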
How to do faster copy and delete operations on EFS file systems
Issue: How to do faster copy and delete operations on EFS file systems.
Environment:
Amazon Linux
Ubuntu Server
Amazon EFS
Solution:
To optimize copy and delete operations on EFS file systems, you can use the GNU Parallel shell tool for executing jobs in parallel. By doing this you will be able to complete these tasks faster than using the normal serial method.
1.a. Install the NFS utilities and the GNU parallel package on Amazon Linux.
[ec2-user ~]$ sudo yum install nfs-utils -y
[ec2-user ~]$ sudo yum install parallel
1.b. Install the NFS utilities and the GNU parallel package on Ubuntu Server.
[ubuntu ~]$ sudo apt-get install nfs-common parallel -y
1.c. Install from source:
[ec2-user ~]$ cd /tmp; wget http://ftp.gnu.org/gnu/parallel/parallel-latest.tar.bz2
[ec2-user ~]$ tar -xvf parallel-latest.tar.bz2; cd parallel-*/
[ec2-user ~]$ sudo yum groupinstall 'development tools' -y
[ec2-user ~]$ ./configure && make && sudo make install
2. Create a temporary directory and mount the EFS filesystem.
[ec2-user ~]$ sudo mkdir /mnt/efs; sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-XXXXXXXX.efs.REGION.amazonaws.com:/ /mnt/efs
3. Create ten thousand small files locally on your instance.
[ec2-user ~]$ mkdir /tmp/efs; for each in $(seq 1 10000); do SUFFIX=$(mktemp -u _XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX); sudo dd if=/dev/zero of=/tmp/efs/${SUFFIX} bs=64k count=1; done
4. Copy the files from your instance to the EFS file system using the parallel method.
[ec2-user ~]$ cd /tmp/efs; find -L . -maxdepth 1 -type f | sudo parallel rsync -avR {} /mnt/efs/
5. Delete the files from the EFS file system using the parallel method.
[ec2-user ~]$ cd /mnt/efs; find -L . -maxdepth 1 -type f | sudo parallel rm -rfv {}
Test:
The following output is from my tests, using an EFS file system (General Purpose) mounted on a t2.micro instance type:
1. Copy ten thousand files from my EC2 instance to my EFS file system, using the normal serial method.
[ec2-user ~]$ cd /tmp/efs; time sudo find -L . -maxdepth 1 -type f -exec rsync -avR '{}' /mnt/efs/ \;
real 20m8.947s
user 0m0.060s
sys 0m0.980s
2. Copy ten thousand files from my EC2 instance to my EFS file system, using the parallel method.
[ec2-user ~]$ cd /tmp/efs; time find -L . -maxdepth 1 -type f | sudo parallel rsync -avR {} /mnt/efs/
real 5m34.264s
user 0m8.308s
sys 0m6.904s
3. Delete ten thousand files from my EFS file system, using the normal serial method.
[ec2-user ~]$ cd /mnt/efs; time sudo find -L . -maxdepth 1 -exec rm -rfv {} \;
real 2m24.799s
user 0m0.124s
sys 0m1.240s
4. Delete ten thousand files from my EFS file system, using the parallel method.
[ec2-user ~]$ cd /mnt/efs; time find -L . -maxdepth 1 -type f | sudo parallel rm -rfv {}
real 1m55.153s
user 0m7.988s
sys 0m6.972s
Recursive copy:
To add to this article, since many of us use it as a sample for our customers: the examples above copy only the files directly under /SRC, not recursively into its subdirectories. If you need a recursive copy, you have two options with rsync:
Loop:
find /SRC/ -type d | while read -r c; do cd "$c"; find -L . -maxdepth 1 -type f | parallel rsync -avR {} /DST; done
The loop above lets you copy the data from /SRC to /DST both in parallel and recursively, but it also introduces a performance penalty: the loop has to walk into each folder and only fires parallel copies for the contents of that one location.
List Creation:
Create List
rsync -avR --dry-run /SRC /DST > list.log
Run the command:
cat list.log | parallel --will-cite -j 100 rsync -avR {} /DST/
This is a much simpler approach: it first builds a list of all files and folders under /SRC recursively, then fires 100 parallel rsync copies, reading the paths to copy from that list. The copies are much more efficient because there is less overhead (see the GNU Parallel manual reference below).
Reference: http://www.gnu.org/software/parallel/man.html#EXAMPLE:-Parallelizing-rsync
Block Volume Performance calculation
In the realm of modern computing, where data storage and retrieval speed are paramount, understanding the performance of storage solutions is crucial. One of the fundamental components of this landscape is Linux block volume performance calculation. Whether you're a system administrator, a developer, or an enthusiast, delving into the intricacies of block volume performance, including Fio-based tests, can empower you to make informed decisions about storage setups. In this blog post, we'll demystify the concepts behind Linux block volume performance calculation and explore the key factors that influence it, along with practical Fio-based tests.
Understanding Block Volumes:
Block volumes are a type of storage solution commonly used in modern IT infrastructures. They provide raw storage space that can be partitioned and formatted according to the user's needs. These volumes are often found in virtual machines, cloud instances, and even physical servers. They are characterized by their ability to handle data at the block level, meaning data is read from and written to storage in fixed-size blocks.
Factors Influencing Block Volume Performance:
Several factors play a pivotal role in determining the performance of Linux block volumes. Understanding these factors helps optimize storage systems for better efficiency and responsiveness.
1. I/O Operations Per Second (IOPS): IOPS refers to the number of input/output operations a storage device can handle in a second. It is a key metric in assessing storage responsiveness. The higher the IOPS, the faster the storage system can read from or write to the block volume.
2. Throughput: Throughput measures the amount of data that can be transferred between the storage device and the system in a given period. It's usually measured in megabytes or gigabytes per second. Throughput is a crucial metric when dealing with large data transfers.
3. Latency: Latency is the delay between initiating a data request and receiving the first byte of data. Lower latency indicates a more responsive storage system. Excessive latency can lead to delays in data-intensive operations.
4. Queue Depth: Queue depth refers to the number of I/O requests that can be in the queue to the storage device at a given time. A higher queue depth can lead to improved performance, especially in scenarios with concurrent I/O operations.
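As a concrete way to measure these four factors, here is a sample fio run; this is only a sketch, assuming fio is installed and that /mnt/blockvol/fio-testfile sits on the block volume you want to test (adjust the path, block size, and iodepth to match your workload):
# yum install -y fio    (on Debian/Ubuntu: apt-get install -y fio)
# fio --name=randrw-test --filename=/mnt/blockvol/fio-testfile --size=1G --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting
The summary fio prints maps directly onto the factors above: the IOPS and BW lines give you IOPS and throughput, the clat/lat percentiles give you latency, and the --iodepth option controls the queue depth used for the test.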
Calculating Block Volume Performance:
While calculating precise block volume performance can be intricate, here's a simplified approach:
1. IOPS Calculation: Determine the total IOPS required by considering the application's read and write demands. Divide this total by the number of block volumes to distribute the load. It's important to consider peak I/O requirements.
2. Throughput Calculation: Calculate the required throughput by estimating the data transfer needs of the application. Divide this by the number of block volumes for load distribution.
3. Latency Estimation: Latency is affected by various factors, including the speed of the storage media and the efficiency of the underlying technology. Faster media and optimized configurations lead to lower latency.
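To make the arithmetic concrete, here is a small hypothetical example (the numbers are made up): suppose an application needs 12,000 IOPS and 600 MB/s at peak, spread across 4 block volumes.
# echo $(( 12000 / 4 ))
3000
# echo $(( 600 / 4 ))
150
Each volume (and the instance it is attached to) then needs to be sized for at least 3,000 IOPS and 150 MB/s, plus some headroom for peaks.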
Thursday, April 4, 2019
Automatically encrypt Ephemeral volumes on AWS EC2 instances
Let's see how to automate encrypting the ephemeral (instance store) volumes on EC2 instances. I have written a user-data script which takes care of the whole encryption setup.
The script will do the below to make it seamless for the customer:
- When you launch an instance or change the instance type, the script automatically detects the ephemeral disks available to that instance type, sets up encryption, and makes them available.
- It automatically installs the required packages for encryption if they are not already installed.
- The script also takes care of encryption in case of an instance reboot or stop/start.
- The user-data script automatically handles stop/start, reboots, and instance resizes.
- After a stop/start you lose all data and configuration on the ephemeral volumes. This is expected; the user-data script automatically detects this, sets up encryption on the new disks, and mounts them.
- When the instance is rebooted, the data persists. On a reboot we just need to open the encrypted filesystems and mount them.
- The user-data script encrypts the volume and creates a filesystem if it is not already encrypted (these actions are taken when you launch a fresh instance with this user data or when you do a stop & start of your instance).
If the volumes are already encrypted (which is the case on an instance reboot), the script simply opens the encrypted volumes and mounts them on the respective directories.
- The script encrypts all the ephemeral volumes, creates an EXT4 filesystem on each, and mounts them under /encrypted_X (X being the last character of the device name).
You may change the filesystem type and mount points as per your requirements, but you will then need to update the logic in the script that maps volumes to mount points.
- The encryption key/passphrase is encrypted using AWS KMS and kept in a file in your S3 bucket. This is a one-time setup and you can reuse the same encrypted passphrase file for future installations. The instance needs to be attached to an IAM instance profile with permission to download files from the S3 bucket.
Here are the step by step instructions:
1. Create an S3 bucket for storing the encrypted password file (or use an existing bucket in your account).
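If you prefer the CLI, a bucket can be created like this (the bucket name below is just a placeholder; bucket names must be globally unique):
# aws s3 mb s3://my-luks-key-bucket --region us-east-1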
2. Create an IAM policy with the below policy document; this policy will be attached to the instance profile role to allow the instance to download the password file from the S3 bucket:
Sign in to the AWS Management Console and navigate to the IAM console: https://console.aws.amazon.com/iam/home?region=us-east-1#/home
In the navigation pane, choose Policies, choose Create Policy, select Create Your Own Policy, name and describe the policy, and paste the following policy. Choose Create Policy.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1478729875000",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::
]
}
]
}
---> Replace <your-bucket-name> with the name of the S3 bucket you created in step 1.
3. Create an EC2 role with the above policy.
In the IAM console, choose Roles, and then choose Create New Role.
In Step 1: Role Name, type your role name, and choose Next Step.
In Step 2: Select Role Type, choose Amazon EC2 and choose Next Step.
In Step 3: Establish Trust, choose Next Step.
In Step 4: Attach Policy, choose the policy you created in step 2 above.
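For reference, here is a rough CLI equivalent of steps 2 and 3, assuming the policy JSON above is saved as s3-getobject-policy.json and using a hypothetical role name of EncFSForEC2Role:
# cat > ec2-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }
  ]
}
EOF
# aws iam create-role --role-name EncFSForEC2Role --assume-role-policy-document file://ec2-trust-policy.json
# aws iam put-role-policy --role-name EncFSForEC2Role --policy-name AllowGetLuksKeyFromS3 --policy-document file://s3-getobject-policy.json
# aws iam create-instance-profile --instance-profile-name EncFSForEC2Role
# aws iam add-role-to-instance-profile --instance-profile-name EncFSForEC2Role --role-name EncFSForEC2Role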
4. Create a KMS Key and add usage permission for the role you created above from the KMS console: https://console.aws.amazon.com/iam/home?region=us-east-1#/encryptionKeys/us-east-1
Step 1: Click ‘Create key’.
Step 2: Provide an Alias and Description.
Step 3: Add IAM role you created above as Key user under - ‘Define Key Usage Permissions’.
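The same can be done from the CLI if you prefer; this is a sketch where the alias matches the one used in the next step, EncFSForEC2Role is the hypothetical role from above, and <account-id> must be replaced with your AWS account ID:
# KEY_ID=$(aws kms create-key --description "EncFS passphrase key for EC2 instance store" --query KeyMetadata.KeyId --output text)
# aws kms create-alias --alias-name alias/EncFSForEC2InternalStorageKey --target-key-id $KEY_ID
# aws kms create-grant --key-id $KEY_ID --grantee-principal arn:aws:iam::<account-id>:role/EncFSForEC2Role --operations Decrypt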
5. Create a secret password, encrypt it with KMS and store it in S3.
# aws --region us-east-1 kms encrypt --key-id 'alias/EncFSForEC2InternalStorageKey' --plaintext "ThisIs-a-SecretPassword" --query CiphertextBlob --output text | base64 --decode > LuksInternalStorageKey
--> Replace 'alias/EncFSForEC2InternalStorageKey' with the alias or key ID of the KMS key you created above. The IAM user as which you run the above command should also have permission to use the KMS key.
--> You should also replace the string "ThisIs-a-SecretPassword" with a strong passphrase of your own.
6. Upload the encrypted key file to S3.
# aws s3 cp LuksInternalStorageKey s3://<your-bucket-name>/
7. Add the below user-data script to your instance at launch time. You can also add/modify the user data on a stopped instance.
I3 and F1 instance types provide NVMe-based instance store volumes, and those disks are detected as /dev/nvmeXn1 (X being the NVMe device number). Other instance types provide HDD/SSD-backed instance store volumes, which are detected as /dev/sdX. The user-data script detects the device names from the EC2 instance metadata in the case of HDD/SSD volumes, and using the nvme tool in the case of NVMe-based disks. I have written separate user-data scripts for these two volume types.
Use - user-data_NVMe_InstanceStore.txt for I3, F1 instances with NVMe disks.
Use - user-data_InstanceStore.txt for other instance types.
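For example, the user data and the instance profile can be passed at launch time with the CLI like this (the AMI ID, key name, and subnet are placeholders you need to replace):
# aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type i3.large --key-name my-key --subnet-id subnet-xxxxxxxx --iam-instance-profile Name=EncFSForEC2Role --user-data file://user-data_NVMe_InstanceStore.txt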
8. Once the instance is launched, you need to update the cloud-init configuration to make sure the user data runs every time the instance boots.
On Amazon Linux AMI based instances:
# sed -i 's/scripts-user/[scripts-user, always]/' /etc/cloud/cloud.cfg.d/00_defaults.cfg
On RHEL/CentOS instances:
# sed -i 's/scripts-user/[scripts-user, always]/' /etc/cloud/cloud.cfg
9. On Amazon Linux AMI instances, you might need to disable the crypt dracut module at boot time to make sure the encrypted filesystems are not probed during the early boot stages:
# sed -i 's/#omit_dracutmodules+=""/omit_dracutmodules+="crypt"/' /etc/dracut.conf
# dracut --force
User Data Scripts:
-- user-data_InstanceStore.txt
#!/bin/bash
# Fill in your AWS region and the S3 bucket that holds the encrypted passphrase file
REGION=
S3_Bucket=
# Install required packages if not installed
[ $(which unzip) ] || yum -y install unzip
[ $(which aws) ] || { /usr/bin/curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"; /usr/bin/unzip -o awscli-bundle.zip; ./awscli-bundle/install -b /usr/bin/aws; rm -rf ./awscli-bundle*; }
[ $(which cryptsetup) ] || yum install -y cryptsetup
# Get the encrypted password file from s3
/usr/bin/aws s3 cp s3://${S3_Bucket}/LuksInternalStorageKey .
# Decrypt and store the passphrase in a variable
LuksClearTextKey=$(/usr/bin/aws --region ${REGION} kms decrypt --ciphertext-blob fileb://LuksInternalStorageKey --output text --query Plaintext | base64 --decode)
for ephemeral in $(curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ |grep ephemeral)
do
DEV=$(curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/$ephemeral |sed 's/s/\/dev\/xv/g')
DEV_1=`echo "${DEV: -1}"`
[ "$(/bin/mount | grep -i ${DEV})" ] && /bin/umount ${DEV}
TYPE=$(/usr/bin/file -sL ${DEV} | awk '{print $2}')
if [ "$TYPE" == "LUKS" ]
then
# Open and initialize the encrypted volume
/bin/echo "$LuksClearTextKey" | /sbin/cryptsetup luksOpen ${DEV} encfs_${DEV_1}
# Check and create mount point if not exists
[ -d /encrypted_${DEV_1} ] || /bin/mkdir /encrypted_${DEV_1}
# Mount the filesystem
/bin/mount /dev/mapper/encfs_${DEV_1} /encrypted_${DEV_1}
else
# Encrypt the volume
/bin/echo "$LuksClearTextKey" | cryptsetup -y luksFormat ${DEV}
# Open and initialize the encrypted volume
/bin/echo "$LuksClearTextKey" | cryptsetup luksOpen ${DEV} encfs_${DEV_1}
# create a filesystem on the encrypted volume, mount it on the required directory
/sbin/mkfs.ext4 /dev/mapper/encfs_${DEV_1}
[ -d /encrypted_${DEV_1} ] || /bin/mkdir /encrypted_${DEV_1}
/bin/mount /dev/mapper/encfs_${DEV_1} /encrypted_${DEV_1}
fi
done
# Unset the passphrase variable and remove the encrypted password file.
unset LuksClearTextKey
rm LuksInternalStorageKey
-- user-data_NVMe_InstanceStore.txt
#!/bin/bash
#set -x
# Fill in your AWS region and the S3 bucket that holds the encrypted passphrase file
REGION=
S3_Bucket=
# Install required packages if not installed
[ $(which unzip) ] || yum install -y zip unzip
[ $(which python) ] || yum install -y python
[ $(which aws) ] || { /usr/bin/curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"; /usr/bin/unzip -o awscli-bundle.zip; ./awscli-bundle/install -b /usr/bin/aws; rm -rf ./awscli-bundle*; }
[ "$(which cryptsetup)" ] || yum install -y cryptsetup
[ "$(which nvme)" ] || yum install -y nvme-cli
# Get the encrypted password file from s3
/usr/bin/aws s3 cp s3://${S3_Bucket}/LuksInternalStorageKey .
# Decrypt and store the passphrase in a variable
LuksClearTextKey=$(/usr/bin/aws --region ${REGION} kms decrypt --ciphertext-blob fileb://LuksInternalStorageKey --output text --query Plaintext | base64 --decode)
for ephemeral in $(nvme list | grep dev | awk {'print $1'})
do
DEV_1=$(echo "${ephemeral:9:1}")
[ "$(/bin/mount | grep -i ${ephemeral})" ] && /bin/umount ${ephemeral}
TYPE=$(/usr/bin/file -sL ${ephemeral} | awk '{print $2}')
if [ "$TYPE" == "LUKS" ]
then
# Open and initialize the encrypted volume
/bin/echo "$LuksClearTextKey" | /sbin/cryptsetup luksOpen ${ephemeral} encfs_${DEV_1}
# Check and create mount point if not exists
[ -d /encrypted_${DEV_1} ] || /bin/mkdir /encrypted_${DEV_1}
# Mount the filesystem
/bin/mount /dev/mapper/encfs_${DEV_1} /encrypted_${DEV_1}
else
# Encrypt the volume
/bin/echo "$LuksClearTextKey" | cryptsetup -y luksFormat ${ephemeral}
# Open and initialize the encrypted volume
/bin/echo "$LuksClearTextKey" | cryptsetup luksOpen ${ephemeral} encfs_${DEV_1}
# create a filesystem on the encrypted volume, mount it on the required directory
/sbin/mkfs.ext4 /dev/mapper/encfs_${DEV_1}
[ -d /encrypted_${DEV_1} ] || /bin/mkdir /encrypted_${DEV_1}
/bin/mount /dev/mapper/encfs_${DEV_1} /encrypted_${DEV_1}
fi
done
# Unset the passphrase variable and remove the encrypted password file.
unset LuksClearTextKey
rm LuksInternalStorageKey
Regards,
Jay