Description of problem:
While creating a file from the FUSE mount in a loop and reading the same file from the NFS client, the read sometimes returns EIO. This behaviour is not seen when io-cache is disabled: io-cache resets the op_errno value before unwinding. In the backtrace below, afr_readv_cbk (frame #7) passes op_errno=2, but by the time nfs3svc_read_cbk (frame #0) runs, op_errno has been reset to 0.

How reproducible: Often

Steps to Reproduce:
1. while true; do echo 'sdsds' > /mnt/fuse/dot; done
2. while true; do cat /mnt/nfs/dot; done

Actual results: EIO is returned.

Additional info:
(gdb) bt
#0  nfs3svc_read_cbk (frame=0x7fc252647008, cookie=0x19f8860, this=0x19f9a80, op_ret=0, op_errno=0, vector=0x1a07ac0, count=0, stbuf=0x7fff5e819720, iobref=0x1a08ab0) at nfs3.c:1773
#1  0x00007fc2501ca4ca in nfs_fop_readv_cbk (frame=0x7fc252647008, cookie=0x19f8860, this=0x19f9a80, op_ret=0, op_errno=0, vector=0x1a07ac0, count=0, stbuf=0x7fff5e819720, iobref=0x1a08ab0) at nfs-fops.c:1329
#2  0x00007fc250419a39 in io_stats_readv_cbk (frame=0x7fc25368e4e0, cookie=0x7fc25368e58c, this=0x19f8860, op_ret=0, op_errno=0, vector=0x1a07ac0, count=0, buf=0x7fff5e819720, iobref=0x1a08ab0) at io-stats.c:1332
#3  0x00007fc25063b66a in ioc_frame_unwind (frame=0x7fc25368e58c) at page.c:870
#4  0x00007fc25063b8f7 in ioc_frame_return (frame=0x7fc25368e58c) at page.c:912
#5  0x00007fc250639b31 in ioc_waitq_return (waitq=0x1a07960) at page.c:410
#6  0x00007fc25063a1dd in ioc_fault_cbk (frame=0x7fc2526467e4, cookie=0x7fc25368e638, this=0x19f76f0, op_ret=0, op_errno=2, vector=0x7fff5e819a00, count=1, stbuf=0x7fff5e819bd0, iobref=0x1a08790) at page.c:535
#7  0x00007fc25086495d in afr_readv_cbk (frame=0x7fc25368e638, cookie=0x0, this=0x19f65a0, op_ret=0, op_errno=2, vector=0x7fff5e819a00, count=1, buf=0x7fff5e819bd0, iobref=0x1a08790) at afr-inode-read.c:1228
#8  0x00007fc250aeb00b in client3_1_readv_cbk (req=0x7fc2497128e8, iov=0x7fc249712928, count=2, myframe=0x7fc25368e388) at client3_1-fops.c:2252
#9  0x00007fc254e2c1bf in rpc_clnt_handle_reply (clnt=0x1a047f0, pollin=0x19efda0) at rpc-clnt.c:796
#10 0x00007fc254e2c536 in rpc_clnt_notify (trans=0x1a04a50, mydata=0x1a04820, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x19efda0) at rpc-clnt.c:915
#11 0x00007fc254e28230 in rpc_transport_notify (this=0x1a04a50, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x19efda0) at rpc-transport.c:498
#12 0x00007fc24e5bf323 in socket_event_poll_in (this=0x1a04a50) at socket.c:1686
#13 0x00007fc24e5bf88c in socket_event_handler (fd=6, idx=4, data=0x1a04a50, poll_in=1, poll_out=0, poll_err=0) at socket.c:1801
#14 0x00007fc25508242c in event_dispatch_epoll_handler (event_pool=0x19eb500, events=0x1a062c0, i=0) at event.c:794
#15 0x00007fc25508263f in event_dispatch_epoll (event_pool=0x19eb500) at event.c:856
#16 0x00007fc2550829b2 in event_dispatch (event_pool=0x19eb500) at event.c:956
#17 0x0000000000407f1e in main (argc=7, argv=0x7fff5e81a128) at glusterfsd.c:1601
CHANGE: http://review.gluster.com/2894 (performance/io-cache: pass appropriate op_errno even during successful reads.) merged in master by Vijay Bellur (vijay)
The issue is fixed in 3.3.0qa26.
*** Bug 768299 has been marked as a duplicate of this bug. ***
CHANGE: http://review.gluster.com/2939 (performance/io-cache: store op_errno in page.) merged in master by Vijay Bellur (vijay)