Bug 991050 - afr: dbench complaining (Input/Output error) on fuse mount
Summary: afr: dbench complaining (Input/Output error) on fuse mount
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 2.1
Hardware: x86_64 Linux
Priority: unspecified  Severity: high
Assigned To: Ravishankar N
QA Contact: storage-qa-internal@redhat.com
Depends On:
Blocks:
Reported: 2013-08-01 09:22 EDT by Rahul Hinduja
Modified: 2016-09-17 08:13 EDT (History)
CC List: 6 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 12:14:46 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Rahul Hinduja 2013-08-01 09:22:13 EDT
Description of problem:
=======================
afr: dbench complaining (Input/Output error) on fuse mount

dbench output:
==============

[80598] read failed on handle 10127 (Input/output error)
[80599] read failed on handle 10127 (Input/output error)
[80600] read failed on handle 10127 (Input/output error)
[80601] read failed on handle 10127 (Input/output error)
[80602] read failed on handle 10127 (Input/output error)
[80603] read failed on handle 10127 (Input/output error)
[80604] read failed on handle 10127 (Input/output error)
[80605] read failed on handle 10127 (Input/output error)
[80606] read failed on handle 10127 (Input/output error)
[80607] read failed on handle 10127 (Input/output error)
[80608] read failed on handle 10127 (Input/output error)
[80609] read failed on handle 10127 (Input/output error)
[80610] read failed on handle 10127 (Input/output error)
[80611] read failed on handle 10127 (Input/output error)
[80612] read failed on handle 10127 (Input/output error)
[80613] read failed on handle 10127 (Input/output error)
[80614] read failed on handle 10127 (Input/output error)
[80615] read failed on handle 10127 (Input/output error)
[80616] read failed on handle 10127 (Input/output error)
[80617] read failed on handle 10127 (Input/output error)
[80618] read failed on handle 10127 (Input/output error)
[80619] read failed on handle 10127 (Input/output error)
[80620] read failed on handle 10127 (Input/output error)
[80621] read failed on handle 10127 (Input/output error)
[80622] read failed on handle 10127 (Input/output error)
[80623] read failed on handle 10127 (Input/output error)
[80624] read failed on handle 10127 (Input/output error)
[80625] read failed on handle 10127 (Input/output error)


log snippet:
============

[2013-08-01 06:01:10.306964] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa74021670 & waitq = 0x7faa7402b910
[2013-08-01 06:01:10.307039] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698772: READ => -1 (Input/output error)
[2013-08-01 06:01:10.307139] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa74021670 & waitq = 0x7faa748b6fb0
[2013-08-01 06:01:10.307177] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698774: READ => -1 (Input/output error)
[2013-08-01 06:01:10.308334] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa74503490 & waitq = 0x7faa74213b20
[2013-08-01 06:01:10.308390] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698782: READ => -1 (Input/output error)
[2013-08-01 06:01:10.308511] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa74006e60 & waitq = 0x7faa74014730
[2013-08-01 06:01:10.308600] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698785: READ => -1 (Input/output error)
[2013-08-01 06:01:10.351548] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa742b7dc0 & waitq = 0x7faa7402bc50
[2013-08-01 06:01:10.351618] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698948: READ => -1 (Input/output error)
[2013-08-01 06:01:10.351799] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa74399c40 & waitq = 0x7faa7400c750
[2013-08-01 06:01:10.351836] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698950: READ => -1 (Input/output error)
[2013-08-01 06:01:10.351913] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa74399c40 & waitq = 0x7faa74022e60
[2013-08-01 06:01:10.351947] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698951: READ => -1 (Input/output error)
[2013-08-01 06:01:10.351997] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa74399c40 & waitq = 0x7faa745e7560
[2013-08-01 06:01:10.352033] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698952: READ => -1 (Input/output error)
[2013-08-01 06:01:10.352387] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa7425a130 & waitq = 0x7faa74009660
[2013-08-01 06:01:10.352423] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698955: READ => -1 (Input/output error)
[2013-08-01 06:01:10.352713] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa7496e9b0 & waitq = 0x7faa74965900
[2013-08-01 06:01:10.352915] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698957: READ => -1 (Input/output error)
[2013-08-01 06:01:10.353018] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa7483fc80 & waitq = 0x7faa7420d980
[2013-08-01 06:01:10.353053] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698959: READ => -1 (Input/output error)
[2013-08-01 06:01:10.353104] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa74965450 & waitq = 0x7faa741a31b0
[2013-08-01 06:01:10.353137] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698960: READ => -1 (Input/output error)
[2013-08-01 06:01:10.353203] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa7456d850 & waitq = 0x7faa74c9a990
[2013-08-01 06:01:10.353236] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698961: READ => -1 (Input/output error)
[2013-08-01 06:01:10.353302] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa7456d850 & waitq = 0x7faa74018980
[2013-08-01 06:01:10.353335] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698962: READ => -1 (Input/output error)
[2013-08-01 06:01:10.353402] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa74c39580 & waitq = 0x7faa744a8410
[2013-08-01 06:01:10.353436] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698963: READ => -1 (Input/output error)
[2013-08-01 06:01:10.353502] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa7450e800 & waitq = 0x7faa7460cda0
[2013-08-01 06:01:10.353647] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698964: READ => -1 (Input/output error)
[2013-08-01 06:01:10.353730] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa7450e800 & waitq = 0x7faa7436ad80
[2013-08-01 06:01:10.353765] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698965: READ => -1 (Input/output error)
[2013-08-01 06:01:10.353909] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa74c61260 & waitq = 0x7faa747ac880
[2013-08-01 06:01:10.353945] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698967: READ => -1 (Input/output error)
[2013-08-01 06:01:10.354012] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa742efa80 & waitq = 0x7faa7402bc50
[2013-08-01 06:01:10.354046] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698968: READ => -1 (Input/output error)
[2013-08-01 06:01:10.354132] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa74015d20 & waitq = 0x7faa740263c0
[2013-08-01 06:01:10.354173] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa74015d20 & waitq = 0x7faa7400c750
[2013-08-01 06:01:10.354205] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698969: READ => -1 (Input/output error)
[2013-08-01 06:01:10.354258] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa74c40f60 & waitq = 0x7faa745e7560
[2013-08-01 06:01:10.354290] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698970: READ => -1 (Input/output error)
[2013-08-01 06:01:10.354356] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa7400e090 & waitq = 0x7faa748b6fb0
[2013-08-01 06:01:10.354389] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698971: READ => -1 (Input/output error)
[2013-08-01 06:01:10.354456] W [page.c:991:__ioc_page_error] 0-vol-dr-io-cache: page error for page = 0x7faa74010580 & waitq = 0x7faa74965900
[2013-08-01 06:01:10.354495] W [fuse-bridge.c:2603:fuse_readv_cbk] 0-glusterfs-fuse: 2698972: READ => -1 (Input/output error)
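
Each failed read shows up twice in the client log: an io-cache page error (__ioc_page_error) immediately followed by the FUSE readv callback returning EIO (fuse_readv_cbk). As a rough way to count and inspect these in the client mount log (the log file name below is an assumption; it mirrors the mount point name under /var/log/glusterfs):

# Count the reads that failed with EIO at the FUSE layer
# (log path is assumed from the mount point name):
grep -c 'fuse_readv_cbk.*Input/output error' /var/log/glusterfs/mnt-fuse.log

# Show the io-cache page errors that precede each failed read:
grep '__ioc_page_error' /var/log/glusterfs/mnt-fuse.log | tail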




Version-Release number of selected component (if applicable):
=============================================================

glusterfs-fuse-3.4.0.14rhs-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0.14rhs-1.el6rhs.x86_64
glusterfs-3.4.0.14rhs-1.el6rhs.x86_64
glusterfs-server-3.4.0.14rhs-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.14rhs-1.el6rhs.x86_64
glusterfs-debuginfo-3.4.0.14rhs-1.el6rhs.x86_64
glusterfs-devel-3.4.0.14rhs-1.el6rhs.x86_64

Steps Carried:
==============
1. Create and start a 6x2 distributed-replicate volume
2. Mount the volume on a client (FUSE and NFS)
3. Create directories f and n from the FUSE mount
4. cd to f from the FUSE mount and cd to n from the NFS mount
5. Start dbench from both the FUSE (f) and NFS (n) directories using

dbench -s -F -S --stat-check 10

6. While dbench was in progress, brought down the bricks on server1 using kill -9, and killed all glusterd, glusterfsd and glusterfs processes on server4 (killall)
7. After a minute, brought server4 back up (service glusterd restart)
8. Restarted the volume forcefully (gluster volume start <vol-name> force)
9. Executed the self-heal command (gluster volume heal <vol-name>)
10. Confirmed that the heal was successful (looked into the xattrop directory; only one xattrop entry was present)
11. Killed all glusterd, glusterfsd and glusterfs processes on server2 (killall)

A scripted sketch of steps 1-11 is given below.
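
For convenience, a rough shell sketch of the flow above. Server names server1-server4 come from the steps; the volume name, brick paths and mount points are placeholders, not taken from the actual test setup. The 6x2 layout, the dbench invocation and the gluster commands mirror the steps above.

VOL=testvol    # placeholder volume name

# Step 1: create and start a 6x2 distribute-replicate volume
# (bricks listed in replica pairs; brick paths are placeholders).
gluster volume create $VOL replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server3:/bricks/b1 server4:/bricks/b1 \
    server1:/bricks/b2 server2:/bricks/b2 \
    server3:/bricks/b2 server4:/bricks/b2 \
    server1:/bricks/b3 server2:/bricks/b3 \
    server3:/bricks/b3 server4:/bricks/b3
gluster volume start $VOL

# Step 2: mount on the client over FUSE and NFS (placeholder mount points).
mount -t glusterfs server1:/$VOL /mnt/fuse
mount -t nfs -o vers=3 server1:/$VOL /mnt/nfs

# Steps 3-5: create the test directories and run dbench in both of them.
mkdir /mnt/fuse/f /mnt/fuse/n
(cd /mnt/fuse/f && dbench -s -F -S --stat-check 10) &
(cd /mnt/nfs/n  && dbench -s -F -S --stat-check 10) &

# Step 6 (while dbench runs):
#   on server1: kill -9 the brick PIDs shown by 'gluster volume status'
#   on server4: killall glusterd glusterfsd glusterfs

# Steps 7-9: bring server4 back and trigger self-heal.
#   on server4: service glusterd restart
gluster volume start $VOL force
gluster volume heal $VOL

# Step 10: check the pending-heal index on a brick
# (standard index location under the brick root).
#   on a server: ls /bricks/b1/.glusterfs/indices/xattrop

# Step 11: kill the gluster processes on server2.
#   on server2: killall glusterd glusterfsd glusterfs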


Actual results:
===============
Immediately, the FUSE mount started complaining about failed reads (Input/output error), while dbench on the NFS mount continued successfully.


Expected results:
=================
dbench should succeed on the FUSE mount as well
Comment 3 Vivek Agarwal 2015-12-03 12:14:46 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
