+++ This bug was initially created as a clone of Bug #1169236 +++

Description of problem:

I am running the latest fio from the master branch with the configuration below against gluster v3.6.1 on CentOS 6.6, and it is failing.

How reproducible: Always

Steps to Reproduce:
1. Install gluster v3.6.1 on machine 1 (e.g. 192.168.1.246)
2. Install fio from https://github.com/axboe/fio on machine 2 (e.g. 192.168.1.245)
3. # fio $args --output=4k_caranred_gz.log --section=FS_cached_4k_random_reads --ioengine=gfapi --volume=vol1 --brick=192.168.1.246 fsmb.fio
4. # fio $args --output=4k_caranredmt_gz.log --section=FS_multi-threaded_cached_4k_random_reads --ioengine=gfapi --volume=vol1 --brick=192.168.1.246 fsmb.fio

Actual results:

fio: failed to lseek pre-read file[0KB/0KB/0KB /s] [0/0/0 iops] [eta 1158050440d:23h:16m:01s]
fio: failed to lseek pre-read file[0KB/0KB/0KB /s] [0/0/0 iops] [eta 1158050440d:23h:12m:37s]
fio: failed to lseek pre-read fileone] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 1158050440d:23h:12m:33s]
fio: failed to lseek pre-read fileone] [4KB/0KB/0KB /s] [1/0/0 iops] [eta 1158050440d:23h:12m:32s]
fio: failed to lseek pre-read file

Expected results:

fio should complete successfully and report bandwidth, IOPS, etc.

Additional info:

fsmb.fio configuration file:
----------------------------
[global]

[FS_128k_streaming_writes]
name=seqwrite
rw=write
bs=128k
size=5g
#end_fsync=1
loops=1

[FS_cached_4k_random_reads]
name=randread
rw=randread
pre_read=1
norandommap
bs=4k
size=256m
runtime=30
loops=1

[FS_multi-threaded_cached_4k_random_reads]
name=randread
numjobs=4
rw=randread
pre_read=1
norandommap
bs=4k
size=256m/4
runtime=30
loops=1

Thanks.

--- Additional comment from Niels de Vos on 2014-12-02 07:48:30 EST ---

Hi Kiran,

could you let us know if you hit any issues with fio over a fuse mount?
Also, what kind of volume are you running the tests against?

Thanks,
Niels

--- Additional comment from Kiran Patil on 2014-12-02 07:58:05 EST ---

Hi Niels,

I will try with the fuse mount.

I am using a distributed volume.

Thanks,
Kiran.
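For reference, a FUSE-mount comparison run of the same job sections could look roughly like the following; the mount point, log file names, and the psync ioengine are placeholders chosen for illustration, not taken from the original report:

# mount -t glusterfs 192.168.1.246:/vol1 /mnt/vol1
# cd /mnt/vol1
# fio --output=4k_caranred_fuse.log --section=FS_cached_4k_random_reads --ioengine=psync fsmb.fio
# fio --output=4k_caranredmt_fuse.log --section=FS_multi-threaded_cached_4k_random_reads --ioengine=psync fsmb.fio

With a FUSE mount the gfapi-specific --volume/--brick options are not needed, since fio then operates on regular files under the mount point through a POSIX ioengine.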
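Since every reported error line mentions the pre-read lseek, one way to narrow the failure down is to run the same random-read workload with pre_read left out. The section below is an illustrative variant for this test, not part of the original fsmb.fio:

[FS_cached_4k_random_reads_no_preread]
name=randread
rw=randread
norandommap
bs=4k
size=256m
runtime=30
loops=1

If this variant runs cleanly with --ioengine=gfapi while the pre_read=1 sections still fail, the problem is specific to the pre-read seek path in the gfapi engine.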
This bug is being closed because GlusterFS-3.7 has reached its end-of-life. Note: This bug is being closed using a script; no verification has been performed to check whether it still exists on newer releases of GlusterFS. If this bug still exists in a newer GlusterFS release, please reopen it against that release.