
How-to: NFS performance assessment using fio and fio-parser

With Azure NetApp Files, enterprises can migrate their workloads to the cloud without sacrificing performance, reliability and data management capabilities. A proper architecture is a prerequisite for a successful lift & shift: service level, volume quota and proximity to the compute are the key elements to guarantee the right level of performance.

To meet the target performance level and get the storage environment ready for its workloads, you then need to verify the performance of Azure NetApp Files volumes in terms of IOPS, bandwidth and latency, and tune the architecture accordingly. To do so, use a workload generator and define a workload profile that simulates your production environment.

In this article, we will focus on the fio load generator and show how to set it up, run it and analyze its results. We recommend testing 8KiB random read/write and 64KiB sequential read/write, as random I/O is typically performed with small operations while sequential I/O uses the largest operation size possible, as stated in the how-to guide “Azure NetApp Files: get the most of your cloud storage” by Chad Morgenstern, a performance expert at NetApp.

fio will create a specified number of jobs (numjobs) and will generate a specified number of files in each job (nrfiles), with a number of I/O units kept in flight against each file (iodepth). For the list and the meaning of the parameters used in fio, you can refer to the documentation by its creator Jens Axboe: “fio – Flexible I/O tester“.

It is important to test different values of iodepth. You could reuse the same fio command, modifying iodepth each time and checking the IOPS, bandwidth and latency values given in the output of each run, but this quickly turns out to be a tedious task, and the analysis of the results will be arduous.

Instead, you can use an fio script that tests several values of iodepth at once and produces an output file for each iodepth. Since checking the output files one by one is a laborious task, the solution is to use the fio-parser script on GitHub, developed by Chad Morgenstern. fio-parser extracts the results from each output file; you can then copy the extracted data into Excel for further analysis.

You can still run an fio command if you want to do a quick check of IOPS, bandwidth and latency for a specific value of iodepth.
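For such a quick check, a single invocation like the following can be used. The directory, size and runtime here are illustrative assumptions, and the fallback keeps the example runnable on a machine where fio is not installed:

```shell
# Quick one-off check: 8KiB random read at iodepth 16 (values are examples).
# /tmp stands in for the ANF mount point used in the rest of this article.
if command -v fio >/dev/null; then
    fio --name=quickcheck --directory=/tmp --size=16M \
        --ioengine=libaio --bs=8k --rw=randread \
        --numjobs=1 --nrfiles=1 --iodepth=16 \
        --runtime=5 --time_based --group_reporting \
        || echo "fio run failed in this environment"
else
    echo "fio is not installed"
fi
```

The IOPS, bandwidth and latency figures appear in the command's standard output; add --output=<file> to save them to a file instead.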

You can refer to “Azure NetApp Files: get the most of your cloud storage” for a deep dive on Azure NetApp Files performance.

Get ready for fio testing

The commands below apply to Ubuntu virtual machines.

Step 1 – Update the list of APT available packages, defined in /etc/apt/sources.list.

sudo apt-get update

Step 2 – Install Git, GCC and fio.

sudo apt-get install -y git gcc fio

GCC is the GNU Compiler Collection for C, C++, Objective-C, Fortran, Ada, Go, and D.

Step 3 – Clone GitHub repository fio-parser.git.

sudo git clone https://github.com/mchad1/fio-parser.git

Step 4 – Create the work directory of the fio script where the load files will be generated.

mkdir /<TBD1>/work

You can create the work folder wherever it suits you. /<TBD1> can be, for example, the folder where the Azure NetApp Files volume is mounted (/mnt/<MountFolder> if the volume is mounted under /mnt).

Step 5 – Create the output directory of the fio script where the results will be stored.

mkdir /<TBD2>/output

You can create the output folder wherever it suits you. /<TBD2> can be /home, for example.
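The two directories from Steps 4 and 5 can be captured in shell variables, which the script further below reuses as $dir_work and $dir_out. A minimal sketch, where both paths are assumptions to be replaced with your own:

```shell
# Hypothetical paths - substitute your own mount point and output location.
dir_work=/mnt/anfvol/work       # Step 4: work directory on the ANF volume
dir_out=/home/azureuser/output  # Step 5: output directory (local disk is fine)
echo "work directory:   $dir_work"
echo "output directory: $dir_out"
```

Exporting the variables (export dir_work dir_out) makes them available to any script launched from the same shell.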

Run fio testing

fio commands

Examples of fio commands are given in Microsoft article “Performance benchmark test recommendations for Azure NetApp Files“.

The screenshot below is an example of what you get with such commands.

fio script

The values set in the script below are just an example. You can obviously set the values that match your needs and simulate the behavior of your workload.

Recommended setting of iodepth, numjobs and nrfiles based on the volume quota, and their impact on the latency, are not in the scope of this article.



for i in 1 2 3 4 5 6 7 8 9 10 15 20 25 30 40 50 60 70 80 90 100; do fio --name=fiotest --directory=$dir_work --ioengine=libaio --direct=1 --numjobs=2 --nrfiles=4 --runtime=30 --group_reporting --time_based --stonewall --size=4G --ramp_time=20 --bs=64k --rw=read --iodepth=$i --fallocate=none --output=$dir_out/$(uname -n)-seqread-$i; done

for i in 1 2 3 4 5 6 7 8 9 10 15 20 25 30 40 50 60 70 80 90 100; do fio --name=fiotest --directory=$dir_work --ioengine=libaio --direct=1 --numjobs=2 --nrfiles=4 --runtime=30 --group_reporting --time_based --stonewall --size=4G --ramp_time=20 --bs=64k --rw=write --iodepth=$i --fallocate=none --output=$dir_out/$(uname -n)-seqwrite-$i; done

for i in 1 2 3 4 5 6 7 8 9 10 15 20 25 30 40 50 60 70 80 90 100; do fio --name=fiotest --directory=$dir_work --ioengine=libaio --direct=1 --numjobs=2 --nrfiles=4 --runtime=30 --group_reporting --time_based --stonewall --size=4G --ramp_time=20 --bs=8k --rw=randread --iodepth=$i --fallocate=none --output=$dir_out/$(uname -n)-randread-$i; done

for i in 1 2 3 4 5 6 7 8 9 10 15 20 25 30 40 50 60 70 80 90 100; do fio --name=fiotest --directory=$dir_work --ioengine=libaio --direct=1 --numjobs=2 --nrfiles=4 --runtime=30 --group_reporting --time_based --stonewall --size=4G --ramp_time=20 --bs=8k --rw=randwrite --iodepth=$i --fallocate=none --output=$dir_out/$(uname -n)-randwrite-$i; done

/<TBD3>/fio-parser/fio-parser.py -d $dir_out

/<TBD3> is the folder where fio-parser has been cloned or is located; it can be /home, for example.

The fio script generates a number of load files equal to numjobs x nrfiles.
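With the values set in the script above, the count can be checked with shell arithmetic:

```shell
# numjobs and nrfiles as set in the script above
numjobs=2
nrfiles=4
echo "$(( numjobs * nrfiles )) load files"   # prints "8 load files"
```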

In the screenshot below, we have 8 load files as numjobs=2 and nrfiles=4 in the script example above.

The fio script stores the results in a separate file for each workload profile and each value of iodepth. The content of each file is similar to what we have in the fio command screenshot above.

In the screenshots below, we can see the 84 output files for the 4 workload profiles (64KiB sequential read, 64KiB sequential write, 8KiB random read, and 8KiB random write), each tested with the 21 iodepth values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 25, 30, 40, 50, 60, 70, 80, 90, and 100.


The fio-parser script extracts the data from the output files and displays the result of each file on a separate line. The extracted outputs are: reads, read_bw(MiB/s), read_lat(ms), writes, write_bw(MiB/s), write_lat(ms), where reads is the read IOPS and writes is the write IOPS.

In the screenshots below, you can see the 84 lines of results from the script example above (they are listed one after the other but were cut into 4 pieces to be pasted here).

To post-process and analyze the fio-parser results, you can paste all the lines into Excel and split them into columns with the Text to Columns wizard, using a comma as the delimiter. You can then split the result into separate tables, one per workload profile, for further analysis.
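The grouping into per-profile tables can also be done in the shell before pasting into Excel. This sketch assumes each fio-parser line starts with the output file name, which embeds the profile label used in the script above (seqread, seqwrite, randread, randwrite); the two sample lines are made-up placeholders, not real measurements:

```shell
# Made-up sample lines standing in for captured fio-parser output.
printf '%s\n' \
  'vm1-randread-1,1000,7.81,0.90,0,0.00,0.00' \
  'vm1-seqread-1,500,31.25,3.90,0,0.00,0.00' > results.txt

# One CSV per workload profile; grep exits 1 when nothing matches, hence || true.
for profile in seqread seqwrite randread randwrite; do
    grep -- "-$profile-" results.txt > "$profile.csv" || true
done
```

Each resulting CSV can then be opened directly in Excel, already limited to a single workload profile.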

In the script example above, we will have 4 tables: 64KiB sequential read, 64KiB sequential write, 8KiB random read, and 8KiB random write. You can see below the output for 8KiB random read and the charts of IOPS and latency versus iodepth.

Combining the fio load generator and the fio-parser script provides an easy and comprehensive way to simulate various loads, even extreme ones, and to evaluate Azure NetApp Files performance for different workload profiles in an NFS environment.
