Testing Disk in Linux using fio


I recently discovered a utility called fio that lets you benchmark the disk subsystem in Linux. Here are the results of my tests.
What is fio?

fio is an I/O tool meant to be used both for benchmarking and for stress/hardware verification. It supports 13 different types of I/O engines (sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more), I/O priorities (on newer Linux kernels), rate-limited I/O, forked or threaded jobs, and much more. It can work on block devices as well as files. fio accepts job descriptions in a simple-to-understand text format, and several example job files are included. fio displays all sorts of I/O performance information, and it is in wide use for benchmarking, QA, and verification purposes. It supports Linux, FreeBSD, NetBSD, OS X, OpenSolaris, AIX, HP-UX, and Windows.
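To illustrate the "simple-to-understand text format" mentioned above, here is a minimal job file. (This example is mine, not part of the test set below; the job name and sizes are arbitrary.)

```ini
; minimal fio job file: 4k random reads against a 1G test file in /tmp
[quick-randread]
; random read workload
rw=randread
; 4 KiB block size
bs=4k
; total amount of I/O per job
size=1G
; where the test file is created
directory=/tmp
; bypass the page cache (O_DIRECT)
direct=1
; stop after 30 seconds even if size is not reached
runtime=30
```

You run a job file simply with `fio quick-randread.fio`.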

Windows fio download:  http://www.bluestop.org/fio/
OS – Debian Linux “Wheezy” AMD64
RAM – 8GB
Virtualized – YES
VMware Tools – YES
Disk – 1 x 50GB Thin Provisioned
Test File – 10GB
Note:  Disk is on a LUN that is comprised of RAID5 using 6 disks @ 15kRPM – no throttling for disk/cpu is configured on VM.

 
Here are my fio test files:

[randrw]
rw=randread
size=10G
direct=1
directory=/tmp/
numjobs=1
group_reporting
name=randr-4k
bs=4k
runtime=30
write_iops_log
write_lat_log
write_bw_log

[randrw]
rw=randread
size=10G
direct=1
directory=/tmp/
numjobs=1
group_reporting
name=randr-8k
bs=8k
runtime=30
write_iops_log
write_lat_log
write_bw_log

[randrw]
rw=randrw
size=10G
direct=1
directory=/tmp/
numjobs=1
group_reporting
name=randrw-4k
bs=4k
runtime=30
write_iops_log
write_lat_log
write_bw_log

[randrw]
rw=randrw
size=10G
direct=1
directory=/tmp/
numjobs=1
group_reporting
name=randrw-8k
bs=8k
runtime=30
write_iops_log
write_lat_log
write_bw_log

[randrw]
rw=randwrite
size=10G
direct=1
directory=/tmp/
numjobs=1
group_reporting
name=randw-4k
bs=4k
runtime=30
write_iops_log
write_lat_log
write_bw_log

[randrw]
rw=randrw
size=10G
direct=1
directory=/tmp/
numjobs=1
group_reporting
name=random-rw-direct
bs=8k
runtime=30
write_iops_log
write_lat_log
write_bw_log

[randrw]
rw=read
size=10G
direct=1
directory=/tmp/
numjobs=1
group_reporting
name=seqr-4k
bs=4k
runtime=30
write_iops_log
write_lat_log
write_bw_log

[randrw]
rw=read
size=10G
direct=1
directory=/tmp/
numjobs=1
group_reporting
name=seqr-8k
bs=8k
runtime=30
write_iops_log
write_lat_log
write_bw_log

[randrw]
rw=rw
size=10G
direct=1
directory=/tmp/
numjobs=1
group_reporting
name=seqrw-4k
bs=4k
runtime=30
write_iops_log
write_lat_log
write_bw_log

[randrw]
rw=rw
size=10G
direct=1
directory=/tmp/
numjobs=1
group_reporting
name=seqrw-8k
bs=8k
runtime=30
write_iops_log
write_lat_log
write_bw_log

[randrw]
rw=write
size=10G
direct=1
directory=/tmp/
numjobs=1
group_reporting
name=seqw-4k
bs=4k
runtime=30
write_iops_log
write_lat_log
write_bw_log

[randrw]
rw=write
size=10G
direct=1
directory=/tmp/
numjobs=1
group_reporting
name=seqw-8k
bs=8k
runtime=30
write_iops_log
write_lat_log
write_bw_log
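Each job above was saved as its own file and run as, e.g., `fio randr-4k.fio`. The write_bw_log option produces a per-sample bandwidth log named after the job, which you can summarize quickly with awk. The layout assumed here is the classic comma-separated columns (time in ms, bandwidth in KB/s, direction, block size), and `sample_bw.log` below is fabricated sample data for illustration only, not one of my results:

```shell
# Hypothetical sample in the old fio bandwidth-log layout:
#   time (ms), bandwidth (KB/s), direction (0=read, 1=write), block size
cat > sample_bw.log <<'EOF'
500, 12000, 0, 4096
1000, 13000, 0, 4096
1500, 14000, 0, 4096
EOF

# Average the read-direction samples (column 3 == 0)
awk -F', ' '$3 == 0 { sum += $2; n++ } END { printf "avg read bw: %d KB/s\n", n ? sum / n : 0 }' sample_bw.log
# → avg read bw: 13000 KB/s
```

The same one-liner works on the latency and IOPS logs, since they share the column layout.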

I used fio_generate_plots (included with fio) to generate gnuplot graphs; it takes a title argument and picks up the *_bw.log, *_lat.log, and *_iops.log files from the current directory.

#aix, #benchmark, #iops, #performance, #vmware