Description of problem:
With two md-cache-enabled volumes mounted on each of two Windows clients, if one client starts creating a large number of 0KB files on one volume and the other client then tries to read any of those 0KB files, the read hangs and eventually leads to the mount disconnecting on that Windows client.

Version-Release number of selected component (if applicable):
samba-client-4.4.6-2.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-3.el7rhgs.x86_64
Windows 8
Windows 10

How reproducible:
1/1

Steps to Reproduce:
1. Start with an existing 4-node samba-ctdb gluster cluster.
2. Create two volumes (volume1 and volume2) with md-cache enabled.
3. On Windows client 1 (WC1), in a loop, mount volume1, run some I/O and disconnect; simultaneously mount volume2 and start creating 10,000 0KB files (see the sketch after this description).
4. On Windows client 2 (WC2), mount volume1 and volume2.
5. On WC2, go to volume2, open one of the 0KB files, write data to it and save it.
6. volume2 hangs on WC2, while the volume2 share on WC1 remains fully accessible.

Actual results:
volume2 hangs on WC2 and eventually both the volume1 and volume2 mounts disconnect.

Expected results:
Should not hang or disconnect.

Additional info:
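Purely for illustration, a minimal C sketch of what the 0KB file creation in step 3 might look like. The original report gives no script, so the UNC path, server/share names and file-name pattern below are hypothetical placeholders:

/* Hypothetical sketch of step 3: create many empty (0KB) files on the
 * mounted volume2 share. The UNC path and names are placeholders only. */
#include <stdio.h>

int main (void)
{
        char path[256];
        int  i;

        for (i = 0; i < 10000; i++) {
                /* e.g. the SMB share \\server\volume2 */
                snprintf (path, sizeof (path),
                          "\\\\server\\volume2\\file-%05d.txt", i);

                FILE *fp = fopen (path, "w");   /* creates a 0KB file */
                if (!fp) {
                        perror (path);
                        return 1;
                }
                fclose (fp);
        }
        return 0;
}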
Sosreports and samba logs are available at http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1392299
From comment #3: The bug is not reproducible when read-ahead is disabled for the volume, i.e. it works absolutely fine after < gluster volume set volname read-ahead off >. Looks like this is related to read-ahead, not readdir-ahead. Updating the component.
Fix posted upstream: http://review.gluster.org/15901
RCA:
====
In certain cases, ioc_readv() issues STACK_WIND_TAIL() instead of STACK_WIND(). One such case is when the inode_ctx for that file is not present (this can happen if readdirp was called and populated md-cache, which then serves all the lookups from cache).

Consider the following graph:

    ...
    io-cache (parent)
        |
    readdir-ahead
        |
    read-ahead
    ...

Below is the code snippet of ioc_readv() calling STACK_WIND_TAIL():

ioc_readv()
{
        ...
        if (!inode_ctx)
                STACK_WIND_TAIL (frame, FIRST_CHILD (frame->this),
                                 FIRST_CHILD (frame->this)->fops->readv,
                                 fd, size, offset, flags, xdata);
                /* Ideally, this stack_wind should wind to
                 * readdir-ahead:readv(), but it winds to read-ahead:readv().
                 * See below for the explanation. */
        ...
}

STACK_WIND_TAIL (frame, obj, fn, ...)
{
        frame->this = obj;
        /* For the above mentioned graph, frame->this will be readdir-ahead:
         * frame->this = FIRST_CHILD (frame->this), i.e. readdir-ahead, which
         * is as expected. */
        ...
        THIS = obj;
        /* THIS will be read-ahead instead of readdir-ahead!, as obj expands
         * to "FIRST_CHILD (frame->this)" and frame->this was already pointing
         * to readdir-ahead after the previous statement. */
        ...
        fn (frame, obj, params);
        /* fn will call read-ahead:readv() instead of readdir-ahead:readv()!,
         * as fn expands to "FIRST_CHILD (frame->this)->fops->readv" and
         * frame->this was already pointing to readdir-ahead after the first
         * statement. */
        ...
}

Thus, readdir-ahead's readv() implementation is skipped, and ra_readv() is called with frame->this = "readdir-ahead" and this = "read-ahead". This can lead to corruption, hangs or other problems. In this particular case, when the 'frame->this' and 'this' passed to ra_readv() do not match, ra_readv() ends up calling ra_readv() again. The read-ahead readv() logic therefore falls apart and leads to the hang.
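To make the macro-expansion issue concrete, here is a small self-contained C sketch. This is not the GlusterFS source: xlator_t, call_frame_t and STACK_WIND_TAIL_BUGGY are simplified stand-ins invented for illustration. Because frame->this is reassigned before the other occurrences of the macro argument are expanded, FIRST_CHILD (frame->this) ends up resolving to the grandchild (read-ahead) instead of the intended child (readdir-ahead):

/* Simplified stand-ins; not the real GlusterFS structures. */
#include <stdio.h>

typedef struct xlator {
        const char    *name;
        struct xlator *first_child;
} xlator_t;

typedef struct {
        xlator_t *this;
} call_frame_t;

#define FIRST_CHILD(xl) ((xl)->first_child)

/* Simplified version of the problematic pattern: frame->this is
 * reassigned first, so the later expansions of "obj" resolve against
 * the NEW frame->this. */
#define STACK_WIND_TAIL_BUGGY(frame, obj)                           \
        do {                                                        \
                (frame)->this = (obj);                              \
                xlator_t *this = (obj);  /* stands in for THIS */   \
                printf ("winds to: %s\n", this->name);              \
        } while (0)

int main (void)
{
        xlator_t read_ahead    = { "read-ahead",    NULL };
        xlator_t readdir_ahead = { "readdir-ahead", &read_ahead };
        xlator_t io_cache      = { "io-cache",      &readdir_ahead };

        call_frame_t frame = { &io_cache };

        /* The caller intends to wind to io-cache's first child
         * (readdir-ahead), but the macro reassigns frame->this before
         * re-expanding "obj", so FIRST_CHILD (frame->this) now resolves
         * to read-ahead. */
        STACK_WIND_TAIL_BUGGY (&frame, FIRST_CHILD (frame.this));

        return 0;
}

Running the sketch prints "winds to: read-ahead" even though the caller intended readdir-ahead, which mirrors the skipped readdir-ahead:readv() described above.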
Have posted another patch for review: http://review.gluster.org/#/c/15923/. Once it is merged, that patch (http://review.gluster.org/#/c/15923/) will be backported downstream. http://review.gluster.org/15901, which is already merged, also fixes the issue, but the right way to fix it is http://review.gluster.org/#/c/15923/.
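For illustration only, here is a sketch of one way such a macro pitfall can be avoided, reusing the simplified stand-in types from the sketch above and assuming the general approach of evaluating the target once before frame->this is reassigned; the actual change in http://review.gluster.org/#/c/15923/ may be implemented differently:

/* Hypothetical corrected pattern (not the actual upstream patch):
 * capture "obj" exactly once, then use only the captured value, so the
 * later uses are unaffected by the reassignment of frame->this. */
#define STACK_WIND_TAIL_FIXED(frame, obj)                            \
        do {                                                         \
                xlator_t *next_xl__ = (obj); /* evaluated once */    \
                (frame)->this = next_xl__;                           \
                xlator_t *this = next_xl__;  /* stands in for THIS */\
                printf ("winds to: %s\n", this->name);               \
        } while (0)

With this variant, the earlier example would print "winds to: readdir-ahead", i.e. the wind goes to the intended first child.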
Downstream patch: https://code.engineering.redhat.com/gerrit/#/c/91496/
Followed the steps to reproduce with glusterfs-3.8.4-6 and read-ahead on; it works fine, so moving this to Verified state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2017-0486.html