Bug 1392761 - During sequential reads, backtraces are seen leading to IO hang
Summary: During sequential reads, backtraces are seen leading to IO hang
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: samba
Version: rhgs-3.2
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Raghavendra Talur
QA Contact: Karan Sandha
URL:
Whiteboard:
Depends On:
Blocks: 1351528
 
Reported: 2016-11-08 07:43 UTC by Karan Sandha
Modified: 2017-03-23 06:16 UTC
CC: 7 users

Fixed In Version: glusterfs-3.8.4-4
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-23 06:16:38 UTC




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:0486 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 09:18:45 UTC

Description Karan Sandha 2016-11-08 07:43:32 UTC
Description of problem:
While doing sequential reads using IOZONE, backtraces are seen in /var/log/messages, leading to an IO hang.

Version-Release number of selected component (if applicable):
[root@gqas005 ~]# gluster --version
glusterfs 3.8.4 built on Oct 24 2016 11:14:30
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
Sosreports are placed at rhsqe-repo.lab.eng.blr.redhat.com:/var/www/html/sosreports/<bug>

How reproducible:
100% (2/2 attempts)


Logs are placed at http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/KaranS/SMB/


Steps to Reproduce:
1. Create 4 servers and 4 clients.
2. Mount each client to one server using CIFS.
3. Run IOZONE from one of the clients:
   iozone -+m clients.ioz -+h gqac011.sbu.lab.eng.bos.redhat.com -C -w -c -e -i 1 -+n -r 64k -s 8g -t 16
Actual results:
a) Backtraces observed on 3 of the 4 servers.
b) Read operation halted.

 [2016/11/07 02:56:48.793393,  0] ../lib/util/fault.c:79(fault_report)
Nov  7 02:56:48 gqas005 smbd[19910]:   INTERNAL ERROR: Signal 11 in pid 19910 (4.4.6)
Nov  7 02:56:48 gqas005 smbd[19910]:   Please read the Trouble-Shooting section of the Samba HOWTO
Nov  7 02:56:48 gqas005 smbd[19910]: [2016/11/07 02:56:48.793479, 0] ../lib/util/fault.c:81(fault_report)
Nov  7 02:56:48 gqas005 smbd[19910]: ===============================================================
Nov  7 02:56:48 gqas005 smbd[19910]: [2016/11/07 02:56:48.793534, 0] ../source3/lib/util.c:791(smb_panic_s3)
Nov  7 02:56:48 gqas005 smbd[19910]:   PANIC (pid 19910): internal error
Nov  7 02:56:48 gqas005 smbd[19910]: [2016/11/07 02:56:48.795439, 0] ../source3/lib/util.c:902(log_stack_trace)
Nov  7 02:56:48 gqas005 smbd[19910]:   BACKTRACE: 17 stack frames:
Nov  7 02:56:48 gqas005 smbd[19910]:    #0 /usr/lib64/libsmbconf.so.0(log_stack_trace+0x1a) [0x7f1ed506686a]
Nov  7 02:56:48 gqas005 smbd[19910]:    #1 /usr/lib64/libsmbconf.so.0(smb_panic_s3+0x23) [0x7f1ed5066933]
Nov  7 02:56:48 gqas005 smbd[19910]:    #2 /usr/lib64/libsamba-util.so.0(smb_panic+0x3a) [0x7f1ed755b14a]
Nov  7 02:56:48 gqas005 smbd[19910]:    #3 /usr/lib64/libsamba-util.so.0(+0x25362) [0x7f1ed755b362]
Nov  7 02:56:48 gqas005 smbd[19910]:    #4 /lib64/libpthread.so.0(+0xf7e0) [0x7f1ed77ba7e0]
Nov  7 02:56:48 gqas005 smbd[19910]:    #5 /lib64/libpthread.so.0(pthread_spin_lock+0) [0x7f1ed77b7450]
Nov  7 02:56:48 gqas005 smbd[19910]:    #6 /usr/lib64/libglusterfs.so.0(fd_unref+0xdd) [0x7f1ebf7b2bbd]
Nov  7 02:56:48 gqas005 smbd[19910]:    #7 /usr/lib64/glusterfs/3.8.4/xlator/protocol/client.so(+0x14968) [0x7f1eb3152968]
Nov  7 02:56:48 gqas005 smbd[19910]:    #8 /usr/lib64/glusterfs/3.8.4/xlator/protocol/client.so(+0x2ff68) [0x7f1eb316df68]
Nov  7 02:56:48 gqas005 smbd[19910]:    #9 /usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5) [0x7f1ebfc83925]
Nov  7 02:56:48 gqas005 smbd[19910]:    #10 /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1ac) [0x7f1ebfc84a8c]
Nov  7 02:56:48 gqas005 smbd[19910]:    #11 /usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28) [0x7f1ebfc7fbc8]
Nov  7 02:56:48 gqas005 smbd[19910]:    #12 /usr/lib64/glusterfs/3.8.4/rpc-transport/socket.so(+0x956d) [0x7f1eb815356d]
Nov  7 02:56:48 gqas005 smbd[19910]:    #13 /usr/lib64/glusterfs/3.8.4/rpc-transport/socket.so(+0xa85e) [0x7f1eb815485e]
Nov  7 02:56:48 gqas005 smbd[19910]:    #14 /usr/lib64/libglusterfs.so.0(+0x85c96) [0x7f1ebf7e9c96]
Nov  7 02:56:48 gqas005 smbd[19910]:    #15 /lib64/libpthread.so.0(+0x7aa1) [0x7f1ed77b2aa1]
Nov  7 02:56:48 gqas005 smbd[19910]:    #16 /lib64/libc.so.6(clone+0x6d) [0x7f1ed37f5aad]
Nov  7 02:56:48 gqas005 smbd[19910]: [2016/11/07 02:56:48.807068, 0] ../source3/lib/dumpcore.c:303(dump_core)
Nov  7 02:56:48 gqas005 smbd[19910]:   dumping core in /var/log/core
Nov  7 02:56:48 gqas005 smbd[19910]: 

Expected results:
The read operation should complete successfully and no backtraces should be observed.

Additional info:

Comment 2 Raghavendra Talur 2016-11-14 12:54:46 UTC
This is caused by the same root cause as the one fixed in https://bugzilla.redhat.com/show_bug.cgi?id=1391093.

This is fixed in 3.8.4-4. Please re-test this with latest build.

Comment 8 errata-xmlrpc 2017-03-23 06:16:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

