GlusterFS Benchmark


The following document gives some ideas on how to benchmark an installation with regard to the GlusterFS storage backend.

For most of the benchmarks, the excellent fio tool will be used. At least version 2.1.11 of fio is required for gfapi (and O_DIRECT) support.
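
Whether the installed fio is recent enough, and whether it was built with gfapi support, can be checked directly on the command line:

fio --version
fio --enghelp | grep gfapi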

The options for fio are mostly taken from http://www.storagereview.com/fio_flexible_i_o_tester_synthetic_benchmark

The numbers generated by the operations below should be compared to those from running fio directly on the bricks, which gives an idea of how much of the native speed is achieved.
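
One way to get this baseline is to run the same fio command as in the FUSE mount example below from within a brick directory on one of the storage nodes. The brick path shown here is only an example and has to be replaced with the actual brick path of the volume:

# on a storage node; /data/brick1/virtualization is an example brick path
cd /data/brick1/virtualization
fio --ioengine=libaio --filename=fio.test --size=256M --direct=1 \
    --rw=randrw --refill_buffers --norandommap --bs=8k --rwmixread=70 \
    --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=fio-test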

FUSE mount

The following command line creates a file fio.test with a size of 256 MB, opens it using O_DIRECT, and simulates a random read/write load where 70% of the operations are reads.

The refill_buffers option is supposed to make the workload more realistic in the presence of a storage backend (file system and below) that does de-duplication.

The test runs for one minute with 16 processes doing the same thing in parallel, and reporting is done for the whole group of processes.
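
The test assumes that the volume is already mounted via FUSE under /var/virtualization. If it is not, it can be mounted first; the server address and volume name below are taken from the gfapi example further down and may need to be adjusted:

# server address and volume name are assumptions based on the gfapi example
mount -t glusterfs 10.1.120.11:/virtualization /var/virtualization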

cd /var/virtualization
fio --ioengine=libaio \
    --filename=fio.test \
    --size=256M \
    --direct=1 \
    --rw=randrw \
    --refill_buffers \
    --norandommap \
    --bs=8k \
    --rwmixread=70 \
    --iodepth=16 \
    --numjobs=16 \
    --runtime=60 \
    --group_reporting \
    --name=fio-test

gfapi

Using the same basic parameters, fio's gfapi ioengine can be used to simulate operation directly via gfapi (without the FUSE mount).
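
The --volume and --brick parameters have to match the GlusterFS setup. If unsure, the volume name and the server addresses can be looked up on one of the Gluster nodes:

gluster volume info virtualization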

fio --ioengine=gfapi \
    --volume=virtualization \
    --brick=10.1.120.11 \
    --filename=fio.test \
    --size=256M \
    --direct=1 \
    --rw=randrw \
    --refill_buffers \
    --norandommap \
    --bs=8k \
    --rwmixread=70 \
    --iodepth=16 \
    --numjobs=16 \
    --runtime=60 \
    --group_reporting \
    --name=fio-test

Simulating Qemu/Libvirt re-open

WIP

There is currently a resource leak in gfapi which can be triggered by re-opening files. The following is a way to reproduce it using fio.

The ideas are:

  • use threads instead of forks to make sure that resources are not cleaned up by the OS
  • use a much smaller file such that the jobs can finish
  • every job is run after the previous one has finished (instead of in parallel)

So, if the bug is still present, this should lead to 4*N threads for fio instead of simply 4.

fio --ioengine=gfapi \
    --volume=virtualization \
    --brick=10.1.120.11 \
    --filename=fio.test \
    --size=4M \
    --direct=1 \
    --rw=randrw \
    --refill_buffers \
    --norandommap \
    --bs=8k \
    --rwmixread=70 \
    --iodepth=16 \
    --numjobs=16 \
    --thread --stonewall \
    --runtime=60 \
    --group_reporting \
    --name=fio-test
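
To check whether threads accumulate, the thread count of the running fio process can be watched from a second shell while the jobs stonewall through, for example:

# count the threads of the running fio process (assumes a single fio instance)
grep Threads /proc/$(pgrep -x fio | head -n1)/status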