Bug 1392299 - [SAMBA-mdcache] Read hangs and leads to disconnect of samba share while creating IOs from one client & reading from another client [NEEDINFO]
Summary: [SAMBA-mdcache] Read hangs and leads to disconnect of samba share while creati...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: read-ahead
Version: rhgs-3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Poornima G
QA Contact: Vivek Das
URL:
Whiteboard:
Depends On: 1388292 1399015 1399018 1399023 1399024
Blocks: 1351528 1351530
 
Reported: 2016-11-07 07:02 UTC by Vivek Das
Modified: 2017-03-23 06:16 UTC (History)
8 users

Fixed In Version: glusterfs-3.8.4-6
Doc Type: Bug Fix
Doc Text:
In some situations, read operations were skipped by the io-cache translator, which led to a hung client mount. This has been corrected so that the client mount process works as expected for read operations.
Clone Of:
Environment:
Last Closed: 2017-03-23 06:16:28 UTC
lbailey: needinfo? (pgurusid)




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:0486 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 09:18:45 UTC

Description Vivek Das 2016-11-07 07:02:52 UTC
Description of problem:
While two volumes with mdcache enabled are mounted on each of two Windows clients: on one client, start creating a large number of 0 KB files on one of the volumes; from the other client, try reading any of those 0 KB files. The read hangs, which leads to disconnection of the mount from that Windows client.

Version-Release number of selected component (if applicable):
samba-client-4.4.6-2.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-3.el7rhgs.x86_64
Windows 8
Windows 10

How reproducible:
1/1

Steps to Reproduce:
1. On an existing 4-node samba-ctdb gluster cluster.
2. Have 2 volumes (volume1 & volume2) with mdcache enabled.
3. On one Windows client (WC1), in a loop, mount volume1, create some IOs, and disconnect; simultaneously mount volume2 and start creating 10,000 0 KB files.
4. On Windows client 2 (WC2), mount volume1 & volume2.
5. On WC2, go to volume2, try opening one of the 0 KB files, write data to it, and save it.
6. WC2's volume2 mount hangs, but WC1's volume2 share remains accessible.

Actual results:
WC2's volume2 mount hangs and leads to disconnection of both the volume1 & volume2 mounts.

Expected results:
The mount should not hang or disconnect.

Additional info:

Comment 2 Vivek Das 2016-11-07 09:29:52 UTC
Sosreports , samba logs available http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1392299

Comment 4 Nithya Balachandran 2016-11-21 09:20:09 UTC
From comment#3:

This bug is not reproducible when read-ahead is disabled for the volume, i.e. everything works fine after:
< gluster volume set volname read-ahead off >

Looks like this is related to read-ahead, not readdir-ahead. Updating the component.

Comment 5 Poornima G 2016-11-22 11:46:35 UTC
Fix posted upstream : http://review.gluster.org/15901

Comment 6 Poornima G 2016-11-24 06:18:29 UTC
  RCA:
    ====
    In certain cases, ioc_readv() issues STACK_WIND_TAIL() instead
    of STACK_WIND(). One such case is when the inode_ctx for that file
    is not present (this can happen if readdirp was called, populated
    md-cache, and served all the lookups from the cache).
    
    Consider the following graph:
    ...
    io-cache (parent)
       |
    readdir-ahead
       |
    read-ahead
    ...
    
    Below is the code snippet of ioc_readv calling STACK_WIND_TAIL:
    ioc_readv()
    {
    ...
     if (!inode_ctx)
       STACK_WIND_TAIL (frame, FIRST_CHILD (frame->this),
                        FIRST_CHILD (frame->this)->fops->readv, fd,
                        size, offset, flags, xdata);
       /* Ideally, this stack_wind should wind to readdir-ahead:readv(),
          but it winds to read-ahead:readv(). See below for an
          explanation.
        */
    ...
    }
    
    STACK_WIND_TAIL (frame, obj, fn, ...)
    {
      frame->this = obj;
      /* for the above mentioned graph, frame->this will be readdir-ahead
       * frame->this = FIRST_CHILD (frame->this) i.e. readdir-ahead, which
       * is as expected
       */
      ...
      THIS = obj;
      /* THIS will be read-ahead instead of readdir-ahead, as obj expands
       * to "FIRST_CHILD (frame->this)" and frame->this was pointing
       * to readdir-ahead in the previous statement.
       */
      ...
      fn (frame, obj, params);
      /* fn will call read-ahead:readv() instead of readdir-ahead:readv(),
       * as fn expands to "FIRST_CHILD (frame->this)->fops->readv" and
       * frame->this was pointing to readdir-ahead in the first statement
       */
      ...
    }
    
    Thus, readdir-ahead's readv() implementation is skipped, and
    ra_readv() is called with frame->this = "readdir-ahead" and
    this = "read-ahead". This can lead to corruption / hangs / other
    problems. In this particular case, when the 'frame->this' and 'this'
    passed to ra_readv() don't match, ra_readv() ends up calling itself
    again. Thus the logic of read-ahead's readv() falls apart and leads
    to a hang.

Comment 7 Poornima G 2016-11-28 05:54:05 UTC
Have posted another patch for review: http://review.gluster.org/#/c/15923/

Once this is merged, this patch (http://review.gluster.org/#/c/15923/) will be backported downstream.

http://review.gluster.org/15901, which is already merged, also fixes the issue, but the right way to fix it is http://review.gluster.org/#/c/15923/.

Comment 10 Atin Mukherjee 2016-11-29 08:23:45 UTC
downstream patch : https://code.engineering.redhat.com/gerrit/#/c/91496/

Comment 12 Vivek Das 2016-12-01 14:56:03 UTC
Followed the steps to reproduce with glusterfs-3.8.4-6 and read-ahead on; it works fine, so moving this to the Verified state.

Comment 15 errata-xmlrpc 2017-03-23 06:16:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

