EBS Volumes: Getting all the Performance You Paid For
If you read part one of this post, “We Doubled Database Performance and Reduced Cost by 35%,” you know we had a bit more work to do. This second part explains the testing we performed. Specifically, we wanted to ensure that recommending a larger EC2 instance would, in fact, result in greater IO for our Silicon Valley client. We needed to test how EBS volumes actually performed when they were:
1. attached to “undersized” EC2 instances
2. attached to instances of differing sizes; for instance, a c4.xlarge and a c4.2xlarge
Our curiosity led to the test described below:
For our testing we selected a c4.xlarge and a c4.2xlarge, as these instances support 4,000 and 8,000 PIOPS respectively. To ensure our EBS volume would not be a limiting factor in testing, we created a 200GB volume with 10,000 provisioned IOPS. We used a CloudFormation template to create both EC2 instances, the attached EBS volumes, and the network. Both instances were 64-bit and ran the latest Xenial AMI for us-west-1: ami-73531b13.
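We have not reproduced our full CloudFormation template here, but the volume resource would look roughly like the sketch below. The logical name and Availability Zone are illustrative, not taken from our actual template:

```yaml
Resources:
  FioTestVolume:
    Type: AWS::EC2::Volume
    Properties:
      AvailabilityZone: us-west-1b   # illustrative; must match the instance's AZ
      Size: 200                      # GB, as described above
      VolumeType: io1                # Provisioned IOPS SSD
      Iops: 10000
```

The volume is then attached to an instance with an AWS::EC2::VolumeAttachment resource (or at launch), which is where the device name such as /dev/xvdf comes from.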
With the instances running the fun began! First, we needed to make the EBS volumes available for use:
lsblk # To get the name of our EBS volume
mkfs -t ext4 /dev/xvdf # In order to format our drive to ext4
Secondly, we needed to set up the drives for testing:
mkdir /media/fiotest # To create a mount point
mount /dev/xvdf /media/fiotest # To mount our disk to our mount point
df -h # To confirm the disk was mounted
And lastly, we had to test. We decided on fio for our IOPS testing because it is relatively simple to use yet offers many options. Once fio was installed with apt -y install fio, we needed to create a job file so fio would exercise the volumes accurately enough to compare against AWS’s projected IOPS. Our fio job file looked as follows:
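The original job file itself is not reproduced in this post. Based on the settings described below (ioengine, rw, bs, and size), a minimal reconstruction would look something like this; the job name, directory, and iodepth are our illustrative assumptions, not values from the original:

```ini
; Sketch of the fio job file described in the text.
; ioengine, rw, bs, and size come from the post;
; the job name, directory, and iodepth are assumptions.
[ebs-piops-test]
ioengine=posixaio
rw=readwrite
bs=16k
size=5G
directory=/media/fiotest
iodepth=32
```

Saved as, say, ebs-piops-test.fio, the job is run with fio ebs-piops-test.fio against the mounted volume.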
Fio provides a number of configuration options, but we found the settings above yielded PIOPS measurements similar to those published by AWS and in line with our own expectations. Regarding the “ioengine” setting, we found “posixaio” correlated more closely with an EC2 instance’s projected IOPS than “libaio” did. The remaining settings were: both reads and writes (rw=readwrite), in 16K block sizes, for a total of 5G of read and write testing. Once configured, we ran fio on both servers and got results very close to those listed by Amazon. We discuss the results below:
From our c4.xlarge:
From our c4.2xlarge:
From the results above you can see the c4.xlarge delivered just over the expected 4,000 IOPS when using a 16KB block size. The c4.2xlarge, at the same 16KB block size, delivered close to double the IOPS of the c4.xlarge, and close to its expected 8,000 IOPS.
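As a sanity check, IOPS at a fixed block size translate directly into throughput (IOPS multiplied by block size), which is a quick way to compare measured numbers against AWS’s published figures:

```shell
# Expected throughput at a 16 KiB block size:
# throughput (MiB/s) = IOPS * block_KiB / 1024
awk 'BEGIN { printf "c4.xlarge:  %.1f MiB/s\n", 4000 * 16 / 1024 }'
awk 'BEGIN { printf "c4.2xlarge: %.1f MiB/s\n", 8000 * 16 / 1024 }'
```

This works out to 62.5 MiB/s for the c4.xlarge and 125 MiB/s for the c4.2xlarge, consistent with the doubling we observed.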
In summary, we can confirm the following:
· The available PIOPS capability was, in our testing, always limited by the network interface of the EC2 instance – not the underlying storage.
· The published PIOPS figures on the following page (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html) are accurate when tested with fio and appropriate settings.
If you are using a PIOPS volume, attach it to an EC2 instance capable of driving that many PIOPS or more.