
GlusterFS Benchmark

1,134 bytes added, 07:20, 4 July 2014
The fio options are mostly taken from http://www.storagereview.com/fio_flexible_i_o_tester_synthetic_benchmark

The numbers generated by the commands below should be compared to running fio directly on the bricks; that comparison shows how much of the native speed is achieved through GlusterFS.
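For example, with purely hypothetical numbers (not measurements from this page), the ratio can be computed directly in the shell:

<source lang='bash'>
# Hypothetical example numbers: IOPS measured on a brick vs. through GlusterFS
native_iops=20000
gluster_iops=5000

# Integer percentage of native speed achieved
echo "$(( gluster_iops * 100 / native_iops ))% of native speed"
</source>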
== libgfapi ==
<source lang='bash'>
# brick address, filename and size are examples; adjust to your setup
fio --ioengine=gfapi \
--volume=virtualization \
--brick=10.1.120.11 \
--filename=fio.test \
--size=4G \
--direct=1 \
--rw=randrw \
--refill_buffers \
--norandommap \
--bs=8k \
--rwmixread=70 \
--iodepth=16 \
--numjobs=16 \
--runtime=60 \
--group_reporting \
--name=fio-test
</source>
 
=== Simulating Qemu/Libvirt re-open ===
 
'''WIP'''
 
There is currently a [https://bugzilla.redhat.com/show_bug.cgi?id=1061229 resource leak in gfapi] which can be triggered by re-opening files. The following shows one way to reproduce it using fio.
 
The ideas are:
* use threads instead of forks, so that leaked resources are not cleaned up by the OS when a process exits
* use a much smaller file, so that each job can actually finish
* run every job after the previous one has finished (instead of in parallel)
 
If the bug is still present, this should leave fio with 4*N threads instead of simply 4.
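Whether the threads actually pile up can be observed from a second shell while fio runs, for example (the exact process name and the Linux-specific <code>nlwp</code> field of ps are assumptions):

<source lang='bash'>
# Print the thread count (NLWP) of the oldest running fio process;
# if the leak is present, this number keeps growing after each job
ps -o nlwp= -p "$(pgrep -xo fio)"
</source>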
 
<source lang='bash'>
fio --ioengine=gfapi \
--volume=virtualization \
--brick=10.1.120.11 \
--filename=fio.test \
--size=4M \
--direct=1 \
--rw=randrw \
--refill_buffers \
--norandommap \
--bs=8k \
--rwmixread=70 \
--iodepth=16 \
--numjobs=16 \
--thread --stonewall \
--runtime=60 \
--group_reporting \
--name=fio-test
</source>
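For repeated runs it can be convenient to keep the same invocation as a fio job file (a direct transcription of the flags above; save it as e.g. <code>reopen.fio</code> and run it with <code>fio reopen.fio</code>):

<source lang='ini'>
[fio-test]
ioengine=gfapi
volume=virtualization
brick=10.1.120.11
filename=fio.test
size=4M
direct=1
rw=randrw
refill_buffers
norandommap
bs=8k
rwmixread=70
iodepth=16
numjobs=16
thread
stonewall
runtime=60
group_reporting
</source>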