I/O engine ioengine=str
Defines how the job issues I/O to the file. The following types are defined (the libaio, sg, net, netsplice, rdma, rados, rbd, gfapi and gfapi_async engines also define engine specific options, as noted in their descriptions):
null
Doesn't transfer any data, just pretends to. This is mainly used to
exercise fio itself and for debugging/testing purposes.
sync
Basic read(2) or write(2) I/O. lseek(2) is used to position the
I/O location.
See 'fsync' and 'fdatasync' for syncing write I/Os.
libaio
Linux native asynchronous I/O.
Note that Linux may only support queued behavior with non-buffered I/O
(set 'direct=1' or 'buffered=0').
This engine defines engine specific options.
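For example, a minimal libaio job honoring the non-buffered I/O requirement might look like the following sketch (the target file, block size, depth and run size are placeholders):

  # Queued behavior needs non-buffered I/O, hence direct=1.
  [libaio-randread]
  ioengine=libaio
  direct=1
  rw=randread
  bs=4k
  iodepth=32
  size=1g
  filename=/path/to/testfile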
posixaio
POSIX asynchronous I/O using aio_read(3) and aio_write(3).
mmap
File is memory mapped with mmap(2) and data is copied to/from it
using memcpy(3).
splice
splice(2) is used to transfer the data and vmsplice(2) to transfer
data from user space to the kernel.
falloc
I/O engine that does regular fallocate(2) calls to simulate data transfer.
DDIR_READ
does fallocate(2) with mode = FALLOC_FL_KEEP_SIZE.
DDIR_WRITE
does fallocate(2) with mode = 0.
DDIR_TRIM
does fallocate(2) with mode = FALLOC_FL_KEEP_SIZE|FALLOC_FL_PUNCH_HOLE.
ftruncate
I/O engine that sends ftruncate(2) operations in response to write
(DDIR_WRITE) events. Each ftruncate issued sets the file's size to the
current block offset. 'blocksize' is ignored.
e4defrag
I/O engine that does regular EXT4_IOC_MOVE_EXT ioctls to simulate
defragment activity in response to DDIR_WRITE events.
mtd
Read, write and erase an MTD character device (e.g., /dev/mtd0).
Discards are treated as erases.
Depending on the underlying device type, the I/O may have to go in a
certain pattern, e.g., on NAND, writing sequentially to erase blocks and
discarding before overwriting.
The 'trimwrite' mode works well for this constraint.
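As a sketch, a job exercising that pattern might look like this (the device node and block size are placeholders; running it erases data on the device):

  # /dev/mtd0 is a placeholder MTD character device.
  [mtd-trimwrite]
  ioengine=mtd
  filename=/dev/mtd0
  rw=trimwrite
  bs=64k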
sg
SCSI generic sg v3 I/O. May either be synchronous using the SG_IO
ioctl, or if the target is an sg character device we use read(2) and
write(2) for asynchronous I/O.
Requires the 'filename' option to specify either a block or character device.
This engine supports trim operations.
The sg engine includes engine specific options.
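A hedged sketch of an sg job (the device path is a placeholder; point 'filename' at an SG character device or a block device you can safely read):

  # /dev/sg1 is a placeholder; a block device such as /dev/sdb also works.
  [sg-seqread]
  ioengine=sg
  filename=/dev/sg1
  rw=read
  bs=64k
  size=256m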
rdma
The RDMA I/O engine supports both RDMA memory semantics
(RDMA_WRITE/RDMA_READ) and channel semantics (Send/Recv) for the
InfiniBand, RoCE and iWARP protocols.
This engine defines engine specific options.
filecreate
Simply create the files and do no I/O to them. You still need to set
'filesize' so that all the accounting still occurs, but no actual I/O will be
done other than creating the file.
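For instance, a sketch that only measures file creation (the file count and size are placeholders):

  # No data is written; only file creation is timed.
  [createfiles]
  ioengine=filecreate
  filesize=4k
  nrfiles=1000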
external
Prefix to specify loading an external I/O engine object file. Append
the engine filename, e.g. 'ioengine=external:/tmp/foo.o' to load ioengine
'foo.o' in '/tmp'. The path can be either absolute or relative.
See 'engines/skeleton_external.c' for details of writing an external I/O
engine.
psync
Basic pread(2) or pwrite(2) I/O. Default on all supported operating
systems except for Windows.
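Since this is the default engine on most platforms, a basic sequential read job needs little beyond the following (file path and size are placeholders):

  [seqread]
  ioengine=psync
  rw=read
  bs=128k
  size=1g
  filename=/path/to/testfile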
vsync
Basic readv(2) or writev(2) I/O.
Will emulate queuing by coalescing adjacent I/Os into a single submission.
pvsync
Basic preadv(2) or pwritev(2) I/O.
pvsync2
Basic preadv2(2) or pwritev2(2) I/O.
pmemblk
Read and write using filesystem DAX to a file on a filesystem
mounted with DAX on a persistent memory device through the PMDK
libpmemblk library.
dev-dax
Read and write using device DAX to a persistent memory device (e.g.,
/dev/dax0.0) through the PMDK libpmem library.
libpmem
Read and write using mmap I/O to a file on a filesystem mounted with
DAX on a persistent memory device through the PMDK libpmem library.
net
Transfer over the network to the given 'host:port'. Depending on the
'protocol' used, the 'hostname', 'port', 'listen' and 'filename' options
are used to specify what sort of connection to make, while the 'protocol'
option determines which protocol will be used.
This engine defines engine specific options.
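As a sketch, a TCP receiver/sender pair using the options named above might look like this (address and port are placeholders; the two jobs are normally run as separate fio invocations, receiver first, or together against localhost):

  # Receiver side: waits for a connection on port 8888.
  [net-receiver]
  ioengine=net
  protocol=tcp
  port=8888
  listen
  rw=read
  bs=64k
  size=1g

  # Sender side: connects to the receiver (placeholder address).
  [net-sender]
  ioengine=net
  protocol=tcp
  hostname=192.168.0.10
  port=8888
  rw=write
  bs=64k
  size=1g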
netsplice
Like 'net', but uses splice(2) and vmsplice(2) to map data and
send/receive.
This engine defines engine specific options.
cpuio
Doesn't transfer any data, but burns CPU cycles according to the
'cpuload' and 'cpuchunks' options. Setting 'cpuload=85' will cause that
job to do nothing but burn 85% of the CPU. On SMP machines,
use 'numjobs=<nr_of_cpu>' to get the desired CPU usage, as 'cpuload'
only loads a single CPU at the desired rate. A job never finishes unless
there is at least one non-cpuio job.
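For example, to burn roughly 85% of two CPUs while a real I/O job runs (the cpuio jobs stop when the non-cpuio job completes; path and size are placeholders):

  [burn-cpu]
  ioengine=cpuio
  cpuload=85
  numjobs=2

  [real-io]
  ioengine=psync
  rw=write
  bs=4k
  size=256m
  filename=/path/to/testfile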
solarisaio
Solaris native asynchronous I/O.
windowsaio
Windows native asynchronous I/O. Default on Windows.
guasi
The GUASI I/O engine is the Generic Userspace Asynchronous Syscall
Interface approach to async I/O. See
http://www.xmailserver.org/guasi-lib.html
for more info on GUASI.
rados
I/O engine supporting direct access to Ceph Reliable Autonomic
Distributed Object Store (RADOS) via librados.
This ioengine defines engine specific options.
rbd
I/O engine supporting direct access to Ceph Rados Block Devices
(RBD) via librbd without the need to use the kernel rbd driver.
This ioengine defines engine specific options.
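A hedged sketch of an rbd job; the client, pool and image names below are engine specific options with cluster-dependent values, so treat them as assumptions to adapt:

  # Assumes a reachable Ceph cluster and a pre-created RBD image.
  [rbd-randwrite]
  ioengine=rbd
  clientname=admin
  pool=rbd
  rbdname=fio_test
  rw=randwrite
  bs=4k
  iodepth=32
  direct=1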
http
I/O engine supporting GET/PUT requests over HTTP(S) with libcurl to
a WebDAV or S3 endpoint. This ioengine defines engine specific options.
This engine only supports direct I/O with iodepth=1; you need to scale
parallelism via numjobs. blocksize defines the size of the objects to be created.
TRIM is translated to object deletion.
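A hedged sketch of an http job; the endpoint-related settings below ('http_host', 'http_mode') are engine specific options with placeholder values that may need adjusting for your fio version and endpoint, and credentials are omitted:

  # The engine only supports direct I/O with iodepth=1;
  # scale up by adding jobs (numjobs), each with its own object name.
  [http-put]
  ioengine=http
  direct=1
  iodepth=1
  rw=write
  bs=1m
  size=16m
  filename=/bucket/fio-object
  http_host=storage.example.com
  http_mode=webdav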
gfapi
Uses the GlusterFS libgfapi sync interface to access GlusterFS
volumes directly without having to go through FUSE.
This ioengine defines engine specific options.
gfapi_async
Uses the GlusterFS libgfapi async interface to access GlusterFS
volumes directly without having to go through FUSE.
This ioengine defines engine specific options.
libhdfs
Read and write through Hadoop (HDFS). The 'filename' option is used
to specify the host,port of the HDFS name-node to connect to.
This engine interprets offsets a little differently. In HDFS, files once
created cannot be modified, so random writes are not possible. To
imitate this, the libhdfs engine expects a bunch of small files to be
created over HDFS and will randomly pick a file from them
based on the offset generated by the fio backend (see the example
job file for creating such files; use the 'rw=write' option).
Please note, it may be necessary to set environment variables to work
with HDFS/libhdfs properly.
Each job uses its own connection to HDFS.
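A sketch of a random-read job along those lines (the name-node address is a placeholder, and the small files are assumed to have been pre-created as described above):

  # filename takes the host,port of the HDFS name-node.
  [hdfs-randread]
  ioengine=libhdfs
  filename=namenode.example.com,9000
  rw=randread
  bs=256k
  size=1g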
ime_psync
Synchronous read and write using DDN's Infinite Memory Engine (IME).
This engine is very basic and issues calls to IME whenever an I/O is
queued.
ime_psyncv
Synchronous read and write using DDN's Infinite Memory Engine (IME).
This engine uses iovecs and will try to stack as many I/Os as possible
(if the I/Os are "contiguous" and the I/O depth is not exceeded)
before issuing a call to IME.
ime_aio
Asynchronous read and write using DDN's Infinite Memory Engine (IME).
This engine will try to stack as many I/Os as possible by creating
requests for IME. FIO will then decide when to commit these requests.
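For example, a hedged sketch of an asynchronous IME job (the path is a placeholder and is assumed to live on an IME-backed mount):

  [ime-write]
  ioengine=ime_aio
  filename=/ime/mount/testfile
  rw=write
  bs=1m
  iodepth=16
  size=4g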