fio HOWTO (1): command line options

How fio works

The first step in getting fio to simulate a desired I/O workload is writing a job file describing that specific setup.

A job file may contain any number of threads and/or files -- the typical contents of a job file are a global section defining shared parameters, and one or more job sections describing the jobs involved.

When run, fio parses this file and sets everything up as described.

If we break down a job from top to bottom, it contains the following basic parameters:

    I/O type
        Defines the I/O pattern issued to the file(s).  We may only be reading
        sequentially from the file(s), or we may be writing randomly. Or even
        mixing reads and writes, sequentially or randomly.
        Should we be doing buffered I/O, or direct/raw I/O?

    Block size
        How large are the chunks in which we issue I/O? This may be a single
        value, or it may describe a range of block sizes.

    I/O size
        How much data are we going to be reading/writing?

    I/O engine
        How do we issue I/O? We could be memory mapping the file, we could be
        using regular read/write, we could be using splice, async I/O, or even
        SG (SCSI generic sg).

    I/O depth
        If the I/O engine is async, how large a queuing depth do we want to
        maintain?

    Target file/device
        How many files are we spreading the workload over?

    Threads, processes and job synchronization
        How many threads or processes should we spread this workload over?

The above are the basic parameters defined for a workload; in addition, there's a multitude of parameters that modify other aspects of how this job behaves.
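
As an illustration, here is a minimal sketch of a job file touching each of the basic parameters above. The file and directory names are placeholders, and the values are only examples:

    ; random-read.fio -- hypothetical example job file
    [global]
    ioengine=libaio    ; I/O engine: Linux native asynchronous I/O
    direct=1           ; I/O type: bypass the page cache (direct I/O)
    bs=4k              ; block size
    size=1g            ; I/O size per job
    iodepth=16         ; I/O depth maintained by the async engine
    directory=/tmp     ; where the target files are created

    [randread]
    rw=randread        ; I/O type: random reads
    numjobs=4          ; processes to spread the workload over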

Command line options

Any parameters following the options will be assumed to be job files, unless they match a job file parameter. Multiple job files can be listed and each job file will be regarded as a separate group. Fio will stonewall execution between each group.

    --max-jobs=nr
        Set the maximum number of threads/processes fio will support to `nr`.
        NOTE: On Linux, it may be necessary to increase the shared-memory
        limit (`/proc/sys/kernel/shmmax`) if fio runs into errors while
        creating jobs.
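
        For example, the limit could be raised like this (the value shown is
        purely illustrative):

            # raise the maximum shared memory segment size to 1 GiB
            sysctl -w kernel.shmmax=1073741824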

    --section=name
        Only run the specified section `name` in the job file. Multiple sections
        can be specified. The `--section` option allows one to combine related
        jobs into one file.
        E.g. one job file could define light, moderate, and heavy sections. Tell
        fio to run only the "heavy" section by giving the `--section=heavy`
        command line option.  One can also specify the "write" operations in one
        section and "verify" operation in another section.  The `--section` option
        only applies to job sections.  The reserved `global` section is always
        parsed and used.
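
        As a sketch (hypothetical file and section names), such a job file
        might look like:

            ; mixed.fio -- hypothetical job file with several sections
            [global]
            filename=/tmp/testfile
            size=256m

            [light]
            rw=read
            bs=128k

            [heavy]
            ioengine=libaio
            iodepth=32
            rw=randrw

        Running `fio --section=heavy mixed.fio` would then execute only the
        "heavy" job, while the global section is still applied.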

    --alloc-size=kb
        Set the internal smalloc pool size to `kb` in KiB. The `--alloc-size` 
        switch allows one to use a larger pool size for smalloc.
        If running large jobs with randommap enabled, fio can run out of memory.
        Smalloc is an internal allocator for shared structures from a fixed size
        memory pool and can grow to 16 pools. The pool size defaults to 16MiB.
        NOTE: While fio is running, `.fio_smalloc.*` backing store files are
        visible in `/tmp`.

    --enghelp=[ioengine[,command]]
        List all commands defined by `ioengine`, or print help for `command`
        defined by `ioengine`.  If no `ioengine` is given, list all
        available ioengines.
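
        For example (assuming the libaio engine is available in this build):

            # list all available ioengines
            fio --enghelp
            # list the options defined by the libaio engine
            fio --enghelp=libaio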

    --cmdhelp=command
        Print help information for `command`. May be ``all`` for all commands.

    --readonly
        Turn on safety read-only checks, preventing writes and trims.  The
        `--readonly` option is an extra safety guard to prevent users from
        accidentally starting a write or trim workload when that is not desired.
        Fio will only modify the device under test if
            `rw=write/randwrite/rw/randrw/trim/randtrim/trimwrite` 
        is given. This safety net can be used as an extra precaution.

    --showcmd=jobfile
        Convert `jobfile` to a set of command-line options.

    --output=filename
        Write output to file `filename`.

    --output-format=format
        Set the reporting `format` to `normal`, `terse`, `json`, or `json+`.  
        Multiple formats can be selected, separated by a comma.  
        `terse` is a CSV based format.  `json+` is like `json`, except it 
        adds a full dump of the latency buckets.
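
        For example, normal human-readable output could be combined with a
        json report (file and job names are placeholders):

            fio --output-format=normal,json --output=result.log jobfile.fio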

    --bandwidth-log
        Generate aggregate bandwidth logs.

    --debug=type
        Enable verbose tracing `type` of various fio actions. 
        May be ``all`` for all types or individual types separated 
        by a comma (e.g. ``--debug=file,mem`` will enable file and
        memory debugging).  
        Currently, additional logging is available for:
            process
                Dump info related to processes.
            file
                Dump info related to file actions.
            io
                Dump info related to I/O queuing.
            mem
                Dump info related to memory allocations.
            blktrace
                Dump info related to blktrace setup.
            verify
                Dump info related to I/O verification.
            all
                Enable all debug options.
            random
                Dump info related to random offset generation.
            parse
                Dump info related to option matching and parsing.
            diskutil
                Dump info related to disk utilization updates.
            job:x
                Dump info only related to job number x.
            mutex
                Dump info only related to mutex up/down ops.
            profile
                Dump info related to profile extensions.
            time
                Dump info related to internal time keeping.
            net
                Dump info related to networking connections.
            rate
                Dump info related to I/O rate switching.
            compress
                Dump info related to log compress/decompress.
            ? or help
                Show available debug options.

    --minimal
        Print statistics in a terse, semicolon-delimited format.

    --append-terse
        Print statistics in selected mode AND terse, semicolon-delimited format.
        **Deprecated**, use `--output-format` instead to select multiple formats.

    --terse-version=version
        Set terse `version` output format (default 3; versions 2, 4, and 5 are
        also supported).

    --parse-only
        Parse options only, don't start any I/O.

    --merge-blktrace-only
        Merge blktraces only, don't start any I/O.

    --cpuclock-test
        Perform test and validation of internal CPU clock.

    --crctest=[test]
        Test the speed of the built-in checksumming functions. If no argument is
        given, all of them are tested. Alternatively, a comma separated list can
        be passed, in which case the given ones are tested.

    --eta=when
        Specifies when the real-time ETA estimate should be printed.
        `when` may be `always`, `never` or `auto`. `auto` is the default;
        it prints the ETA when requested if the output is a TTY.
        `always` disregards the output type and prints the ETA when requested.
        `never` never prints the ETA.

    --eta-interval=time
        By default, fio requests client ETA status roughly every second. With
        this option, the interval is configurable. Fio imposes a minimum
        allowed time to avoid flooding the console; intervals shorter than
        250 msec are not supported.

    --eta-newline=time
        Force a new line for every `time` period passed. When the unit is 
        omitted, the value is interpreted in seconds.

    --status-interval=time
        Force a full status dump of cumulative (from job start) values at `time`
        intervals. This option does *not* provide per-period measurements. So
        values such as bandwidth are running averages. When the time unit is 
        omitted, `time` is interpreted in seconds. 
        Note that using this option with `--output-format=json` will yield
        output that technically isn't valid json, since the output will be
        collated sets of valid json. It will need to be split into valid sets 
        of json after the run.
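
        One way to split such output afterwards, assuming the `jq` tool is
        available, is to slurp the concatenated objects into a single array:

            # wrap the collated json dumps into one valid json array
            jq -s '.' result.json > result-array.json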

    --warnings-fatal
        All fio parser warnings are fatal, causing fio to exit with an
        error.

    --server=args
        Start a backend server, with `args` specifying what to listen to.
        See `Client/Server` section.

    --daemonize=pidfile
        Background a fio server, writing the pid to the given `pidfile` file.

    --client=hostname
        Instead of running the jobs locally, send and run them on the 
        given `hostname` or set of `hostname`s. See `Client/Server` section.
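
        A minimal round trip might look like this (host name and job file are
        placeholders):

            # on the machine under test
            fio --server

            # on the controlling machine
            fio --client=testhost jobfile.fio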

    --remote-config=file
        Tell fio server to load this local `file`.

    --idle-prof=option
        Report CPU idleness. `option` is one of the following:
            calibrate
                Run unit work calibration only and exit.
            system
                Show aggregate system idleness and unit work.
            percpu
                As `system` but also show per CPU idleness.

    --inflate-log=log
        Inflate and output compressed `log`.

    --trigger-file=file
        Execute trigger command when `file` exists.

    --trigger-timeout=time
        Execute trigger at this `time`.

    --trigger=command
        Set this `command` as local trigger.

    --trigger-remote=command
        Set this `command` as remote trigger.
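
        Taken together, the trigger options might be used like this (the file
        path and command are placeholders):

            # run the local trigger command once /tmp/fio-trigger appears
            fio --trigger-file=/tmp/fio-trigger --trigger="echo trigger fired" jobfile.fio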

    --aux-path=path
        Use the directory specified by `path` for generated state files 
        instead of the current working directory.

    --version
        Print version information and exit.

    --help
        Print a summary of the command line options and exit.