Issue: How to do faster copy and delete operations on EFS file systems.
Environment:
Amazon Linux
Ubuntu Server
Amazon EFS
Solution:
To optimize copy and delete operations on EFS file systems, you can use the GNU Parallel shell tool to execute jobs in parallel. Because EFS is a distributed file system whose throughput scales with the number of parallel operations, this completes these tasks faster than the normal serial method.
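GNU Parallel reads input lines, substitutes each one for the {} placeholder, and runs the resulting commands concurrently (by default, one job per CPU core). A minimal illustration, assuming parallel is already installed:
[ec2-user ~]$ seq 1 3 | parallel echo "processing job {}"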
1.a. Install the NFS utilities and the GNU parallel package on Amazon Linux. (If the parallel package is not available in your enabled repositories, it may require EPEL; alternatively, install from source as shown in step 1.c.)
[ec2-user ~]$ sudo yum install nfs-utils -y
[ec2-user ~]$ sudo yum install parallel -y
1.b. Install the NFS utilities and the GNU parallel package on Ubuntu Server.
[ubuntu ~]$ sudo apt-get install nfs-common parallel -y
1.c. Install from source:
[ec2-user ~]$ cd /tmp; wget http://ftp.gnu.org/gnu/parallel/parallel-latest.tar.bz2
[ec2-user ~]$ tar -xvf parallel-latest.tar.bz2; cd parallel-*/
[ec2-user ~]$ sudo yum groupinstall 'development tools' -y
[ec2-user ~]$ ./configure && make && sudo make install
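Whichever method you used, you can confirm the installation before continuing (a quick sanity check, not part of the original procedure):
[ec2-user ~]$ parallel --version | head -1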
2. Create a temporary directory and mount the EFS file system. Replace fs-XXXXXXXX and REGION in the DNS name below with your own file system ID and AWS Region.
[ec2-user ~]$ sudo mkdir /mnt/efs; sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-XXXXXXXX.efs.REGION.amazonaws.com:/ /mnt/efs
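Before generating test data, it is worth verifying that the mount succeeded (an optional check):
[ec2-user ~]$ df -hT /mnt/efs
[ec2-user ~]$ mount | grep /mnt/efs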
3. Create ten thousand small files locally on your instance.
[ec2-user ~]$ mkdir /tmp/efs; for each in $(seq 1 10000); do SUFFIX=$(mktemp -u _XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX); sudo dd if=/dev/zero of=/tmp/efs/${SUFFIX} bs=64k count=1; done
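As a quick sanity check, confirm that all ten thousand files exist before copying; the following command should print 10000:
[ec2-user ~]$ ls /tmp/efs | wc -l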
4. Copy the files from your instance to the EFS file system using the parallel method.
[ec2-user ~]$ cd /tmp/efs; find -L . -maxdepth 1 -type f | sudo parallel rsync -avR {} /mnt/efs/
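By default, GNU Parallel runs one job per CPU core. Because EFS throughput scales with the number of parallel clients and threads, raising the job count with -j can speed things up further; the value of 32 below is only an illustrative starting point, not a tuned recommendation:
[ec2-user ~]$ cd /tmp/efs; find -L . -maxdepth 1 -type f | sudo parallel -j 32 rsync -avR {} /mnt/efs/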
5. Delete the files from the EFS file system using the parallel method.
[ec2-user ~]$ cd /mnt/efs; find -L . -maxdepth 1 -type f | sudo parallel rm -rfv {}
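A variation on step 5: GNU Parallel's -X option packs as many file names as fit into each command line, so every rm invocation deletes a batch of files instead of a single one, reducing per-process overhead (since only regular files are matched, -r is unnecessary here):
[ec2-user ~]$ cd /mnt/efs; find -L . -maxdepth 1 -type f | sudo parallel -X rm -fv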
Test:
The following output is from my tests, using an EFS file system (General Purpose mode) mounted on a t2.micro instance:
1. Copy ten thousand files from my EC2 instance to my EFS file system, using the normal serial method.
[ec2-user ~]$ cd /tmp/efs; time sudo find -L . -maxdepth 1 -type f -exec rsync -avR '{}' /mnt/efs/ \;
real 20m8.947s
user 0m0.060s
sys 0m0.980s
2. Copy ten thousand files from my EC2 instance to my EFS file system, using the parallel method.
[ec2-user ~]$ cd /tmp/efs; time find -L . -maxdepth 1 -type f | sudo parallel rsync -avR {} /mnt/efs/
real 5m34.264s
user 0m8.308s
sys 0m6.904s
3. Delete ten thousand files from my EFS file system, using the normal serial method.
[ec2-user ~]$ cd /mnt/efs; time sudo find -L . -maxdepth 1 -type f -exec rm -rfv {} \;
real 2m24.799s
user 0m0.124s
sys 0m1.240s
4. Delete ten thousand files from my EFS file system, using the parallel method.
[ec2-user ~]$ cd /mnt/efs; time find -L . -maxdepth 1 -type f | sudo parallel rm -rfv {}
real 1m55.153s
user 0m7.988s
sys 0m6.972s
Recursive copy:
One addition worth noting, since many of us use this article as a sample for customers: the examples above copy only the files directly under /SRC, not the contents of its subdirectories. If you need a recursive copy, you have two options with rsync:
Loop:
find /SRC/ -type d | while read -r c; do cd "$c"; find -L . -maxdepth 1 -type f | parallel rsync -avR {} /DST; done
While this lets you copy the data from /SRC to /DST both recursively and in parallel, it also introduces a performance penalty: the loop has to descend into each folder and fire parallel copies only for the contents of that location.
List creation:
Create the list:
rsync -avR --dry-run /SRC /DST > list.log
Run the command:
cat list.log | parallel --will-cite -j 100 rsync -avR {} /DST/
This approach, adapted from the GNU Parallel manual [2], is much simpler: it creates a list of all files and folders under /SRC recursively, then fires 100 parallel rsync copies, reading the paths of the files to copy from the list. The copies are much more efficient because there is less overhead.
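One caveat with this method: rsync's verbose --dry-run output also contains a header line and a transfer summary, which are not file paths. A minimal sketch that strips those lines before feeding the list to parallel (the exact patterns depend on your rsync version, so verify them against your own list.log):
grep -Ev '^(sending|sent |total size)|^$' list.log | parallel --will-cite -j 100 rsync -avR {} /DST/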
[2] http://www.gnu.org/software/parallel/man.html#EXAMPLE:-Parallelizing-rsync