Bug 1371557 - Fio (ioengine=gfapi) FS_cached_4k_random_reads fails on gluster v3.6.1
Summary: Fio (ioengine=gfapi) FS_cached_4k_random_reads fails on gluster v3.6.1
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: libgfapi
Version: 3.7.15
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Niels de Vos
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On: 1169236
Blocks: 1371556
 
Reported: 2016-08-30 13:06 UTC by hari gowtham
Modified: 2017-03-08 11:02 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1169236
Environment:
Last Closed: 2017-03-08 11:02:07 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description hari gowtham 2016-08-30 13:06:49 UTC
+++ This bug was initially created as a clone of Bug #1169236 +++

Description of problem:
Running the latest fio from the master branch with the configuration below against gluster v3.6.1 on CentOS 6.6 fails.

How reproducible:
Always

Steps to Reproduce:
1. install gluster v3.6.1 on machine 1 (Ex: 192.168.1.246)

2. install fio from https://github.com/axboe/fio on machine 2 (Ex: 192.168.1.245)

3. # fio $args --output=4k_caranred_gz.log --section=FS_cached_4k_random_reads --ioengine=gfapi --volume=vol1 --brick=192.168.1.246 fsmb.fio

4. # fio $args --output=4k_caranredmt_gz.log --section=FS_multi-threaded_cached_4k_random_reads --ioengine=gfapi --volume=vol1 --brick=192.168.1.246 fsmb.fio


Actual results:
fio: failed to lseek pre-read file[0KB/0KB/0KB /s] [0/0/0 iops] [eta 1158050440d:23h:16m:01s]

fio: failed to lseek pre-read file[0KB/0KB/0KB /s] [0/0/0 iops] [eta 1158050440d:23h:12m:37s]
fio: failed to lseek pre-read fileone] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 1158050440d:23h:12m:33s]
fio: failed to lseek pre-read fileone] [4KB/0KB/0KB /s] [1/0/0 iops] [eta 1158050440d:23h:12m:32s]
fio: failed to lseek pre-read file
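For context, fio's pre_read=1 option warms the cache before the measured run by seeking each job file back to offset 0 and reading it in full; the "failed to lseek pre-read file" message means that seek step returned an error under the gfapi ioengine. A minimal illustrative sketch of the same pattern against a local file (this is an assumption-laden paraphrase, not fio's actual source; the function name pre_read and the bs parameter merely echo the fio option names):

```python
import os

def pre_read(path, bs=4096):
    """Mimic fio's pre_read pass: seek to the start, then read the
    whole file in bs-sized chunks. Returns the byte count read."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # This is the step that fails in the bug: under the gfapi
        # ioengine the equivalent glfs seek reports an error.
        os.lseek(fd, 0, os.SEEK_SET)
        total = 0
        while True:
            buf = os.read(fd, bs)
            if not buf:
                break
            total += len(buf)
        return total
    finally:
        os.close(fd)
```

When the seek fails, fio aborts the pre-read for that file and prints the message seen above, which is why the jobs never reach the measured random-read phase.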


Expected results:
Fio should complete successfully and report its usual statistics (bandwidth, IOPS, ...).

Additional info:
fsmb.fio configuration file:
----------------------------

[global]

[FS_128k_streaming_writes]
name=seqwrite
rw=write
bs=128k
size=5g
#end_fsync=1
loops=1

[FS_cached_4k_random_reads]
name=randread
rw=randread
pre_read=1
norandommap
bs=4k
size=256m
runtime=30
loops=1

[FS_multi-threaded_cached_4k_random_reads]
name=randread
numjobs=4
rw=randread
pre_read=1
norandommap
bs=4k
size=256m/4
runtime=30
loops=1

Thanks.

--- Additional comment from Niels de Vos on 2014-12-02 07:48:30 EST ---

Hi Kiran,

could you let us know if you hit any issues with fio over a fuse mount?
Also, what kind of volume are you running the tests against?

Thanks,
Niels

--- Additional comment from Kiran Patil on 2014-12-02 07:58:05 EST ---

Hi Niels,

I will try with fuse mount.

I am using distributed volume.

Thanks,
Kiran.

Comment 1 Kaushal 2017-03-08 11:02:07 UTC
This bug is getting closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.

