Bug 1302901 - SMB: SMB crashes with AIO enabled on reads + vers=3.0
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: samba
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: rjoseph
QA Contact: Ben Turner
URL:
Whiteboard:
Depends On:
Blocks: 1303995 1311578
 
Reported: 2016-01-28 22:47 UTC by Ben Turner
Modified: 2016-03-21 16:44 UTC
CC List: 8 users

Fixed In Version: glusterfs-3.7.5-19
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned to: 1303995 1319646
Environment:
Last Closed: 2016-03-01 06:08:46 UTC




Links:
Red Hat Product Errata RHBA-2016:0193 (SHIPPED_LIVE): Red Hat Gluster Storage 3.1 update 2, last updated 2016-03-01 10:20:36 UTC

Description Ben Turner 2016-01-28 22:47:00 UTC
Description of problem:

With AIO enabled and the share mounted with vers=3.0, the SMB server crashes on reads.  I am not seeing this with vers=1.0, or with vers=3.0 when AIO is disabled.
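
For reference, "AIO enabled" here means the Samba-side asynchronous I/O options on the gluster share. A minimal sketch of the relevant smb.conf fragment (share and volume names are placeholders, and the exact option set on the affected setup may differ):

    [gluster-vol]
        vfs objects = glusterfs
        glusterfs:volume = testvol
        kernel share modes = no
        # aio read/write size > 0 makes smbd issue async I/O for
        # requests larger than that many bytes; 1 covers everything
        aio read size = 1
        aio write size = 1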

Version-Release number of selected component (if applicable):

samba-4.2.4-12.el6rhs.x86_64

How reproducible:

Every time with vers=3.0 + AIO enabled.

Steps to Reproduce:
1.  Enable AIO on the Samba share
2.  Mount the share from a Linux client with vers=3.0
3.  Run an iozone sequential read against the mount (example commands below)
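
A concrete version of these steps, assuming the share from the smb.conf sketch above (server name, share name, and sizes are placeholders):

    # on the Linux client (credentials options omitted)
    mount -t cifs -o vers=3.0 //server/gluster-vol /mnt/smb

    # write a 1 GB test file, then read it back sequentially
    iozone -i 0 -i 1 -s 1g -r 64k -f /mnt/smb/iozone.tmp

Mounting with vers=1.0, or removing the aio options from smb.conf, makes the same run complete normally.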

Actual results:

The smbd process panics and crashes (see the backtrace in comment 2).

Expected results:

Normal operation.

Additional info:

Comment 2 Ben Turner 2016-01-28 22:48:24 UTC
BT from the crash:

Program terminated with signal 6, Aborted.
#0  0x00007f1716703625 in raise (sig=<value optimized out>) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
64	  return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
(gdb) bt
#0  0x00007f1716703625 in raise (sig=<value optimized out>) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1  0x00007f1716704e05 in abort () at abort.c:92
#2  0x00007f1718035ef1 in dump_core () at ../source3/lib/dumpcore.c:337
#3  0x00007f1718028d20 in smb_panic_s3 (why=<value optimized out>) at ../source3/lib/util.c:811
#4  0x00007f1719e74bba in smb_panic (why=0x7f1719e81e4a "internal error") at ../lib/util/fault.c:166
#5  0x00007f1719e74dd2 in fault_report (sig=11) at ../lib/util/fault.c:83
#6  sig_fault (sig=11) at ../lib/util/fault.c:94
#7  <signal handler called>
#8  __pthread_mutex_lock (mutex=0x30) at pthread_mutex_lock.c:50
#9  0x00007f17026603e5 in glfs_lock (fs=0x0, subvol=0x7f16f00188d0) at glfs-internal.h:291
#10 priv_glfs_subvol_done (fs=0x0, subvol=0x7f16f00188d0) at glfs-resolve.c:884
#11 0x00007f170265fa9b in glfs_preadv_async_cbk (frame=0x7f16ffc404f4, cookie=0x7f16f00188d0, this=<value optimized out>, op_ret=<value optimized out>, op_errno=0, iovec=<value optimized out>, count=1, 
    stbuf=0x7f16fdc544f0, iobref=0x7f16f0113a20, xdata=0x0) at glfs-fops.c:777
#12 0x00007f16f79e317d in io_stats_readv_cbk (frame=0x7f16ffc3d948, cookie=<value optimized out>, this=<value optimized out>, op_ret=4096, op_errno=0, vector=0x7f16f0135470, count=1, buf=0x7f16fdc544f0, 
    iobref=0x7f16f0113a20, xdata=0x0) at io-stats.c:1376
#13 0x00007f1701f851d4 in default_readv_cbk (frame=0x7f16ffc3fe3c, cookie=<value optimized out>, this=<value optimized out>, op_ret=4096, op_errno=0, vector=<value optimized out>, count=1, 
    stbuf=0x7f16fdc544f0, iobref=0x7f16f0113a20, xdata=0x0) at defaults.c:1009
#14 0x00007f1701f851d4 in default_readv_cbk (frame=0x7f16ffc3e408, cookie=<value optimized out>, this=<value optimized out>, op_ret=4096, op_errno=0, vector=<value optimized out>, count=1, 
    stbuf=0x7f16fdc544f0, iobref=0x7f16f0113a20, xdata=0x0) at defaults.c:1009
#15 0x00007f16fc0a4aed in ioc_frame_unwind (frame=<value optimized out>) at page.c:891
#16 ioc_frame_return (frame=<value optimized out>) at page.c:934
#17 0x00007f16fc0a4d5c in ioc_waitq_return (waitq=<value optimized out>) at page.c:407
#18 0x00007f16fc0a6cbe in ioc_fault_cbk (frame=0x7f16ffc3d138, cookie=<value optimized out>, this=<value optimized out>, op_ret=<value optimized out>, op_errno=0, vector=<value optimized out>, count=1, 
    stbuf=0x7f16fdc54a30, iobref=0x7f16f0001bd0, xdata=0x0) at page.c:538
#19 0x00007f16fc4b5124 in ra_readv_disabled_cbk (frame=0x7f16ffc3e0ac, cookie=<value optimized out>, this=<value optimized out>, op_ret=131072, op_errno=0, vector=<value optimized out>, count=1, 
    stbuf=0x7f16fdc54a30, iobref=0x7f16f0001bd0, xdata=0x0) at read-ahead.c:461
#20 0x00007f1701f851d4 in default_readv_cbk (frame=0x7f16ffc3ecc4, cookie=<value optimized out>, this=<value optimized out>, op_ret=131072, op_errno=0, vector=<value optimized out>, count=1, 
    stbuf=0x7f16fdc54a30, iobref=0x7f16f0001bd0, xdata=0x0) at defaults.c:1009
#21 0x00007f16fc922429 in dht_readv_cbk (frame=0x7f16ffc3f0cc, cookie=<value optimized out>, this=<value optimized out>, op_ret=131072, op_errno=0, vector=0x7f16fdc54890, count=1, stbuf=0x7f16fdc54a30, 
    iobref=0x7f16f0001bd0, xdata=0x0) at dht-inode-read.c:478
#22 0x00007f16fcb56137 in afr_readv_cbk (frame=0x7f16ffc3db4c, cookie=<value optimized out>, this=<value optimized out>, op_ret=131072, op_errno=<value optimized out>, vector=<value optimized out>, count=1, 
    buf=0x7f16fdc54a30, iobref=0x7f16f0001bd0, xdata=0x0) at afr-inode-read.c:1724
#23 0x00007f16fcdd03d9 in client3_3_readv_cbk (req=<value optimized out>, iov=<value optimized out>, count=<value optimized out>, myframe=0x7f16ffc407a4) at client-rpc-fops.c:3055
#24 0x00007f17024404f5 in rpc_clnt_handle_reply (clnt=0x7f16f0101810, pollin=0x7f16f0134a30) at rpc-clnt.c:766
#25 0x00007f1702441a21 in rpc_clnt_notify (trans=<value optimized out>, mydata=0x7f16f0101840, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f16f0134a30) at rpc-clnt.c:907
#26 0x00007f170243cb68 in rpc_transport_notify (this=<value optimized out>, event=<value optimized out>, data=<value optimized out>) at rpc-transport.c:545
#27 0x00007f16fdc60bc5 in socket_event_poll_in (this=0x7f16f0111500) at socket.c:2236
#28 0x00007f16fdc627ad in socket_event_handler (fd=<value optimized out>, idx=<value optimized out>, data=0x7f16f0111500, poll_in=1, poll_out=0, poll_err=0) at socket.c:2349
#29 0x00007f1701fda1e0 in event_dispatch_epoll_handler (data=0x7f16f8000920) at event-epoll.c:575
#30 event_dispatch_epoll_worker (data=0x7f16f8000920) at event-epoll.c:678
#31 0x00007f171a094a51 in start_thread (arg=0x7f16fdc55700) at pthread_create.c:301
#32 0x00007f17167b99ad in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
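
The interesting frames are #8-#10: priv_glfs_subvol_done() is entered with fs=0x0, and glfs_lock() then takes &fs->mutex on the NULL handle, which is why __pthread_mutex_lock faults on address 0x30 (the offset of the mutex inside the fs object). In other words, the gfapi async read callback (frame #11) completes against an fs handle that is already gone. A minimal C sketch of that failure mode, assuming the struct layout implied by the fault address; the guard shown is illustrative only, not the actual glusterfs patch:

    /* Names follow the backtrace; layout and guard are assumptions. */
    #include <pthread.h>
    #include <stddef.h>

    struct glfs {
        char            pad[0x30];  /* mutex at offset 0x30 matches
                                       mutex=0x30 in frame #8 */
        pthread_mutex_t mutex;
        /* ... */
    };

    static void glfs_lock(struct glfs *fs)
    {
        /* Frame #8: with fs == NULL this dereferences 0x30, smbd
           takes SIGSEGV (frame #5, sig=11), then panics and aborts. */
        pthread_mutex_lock(&fs->mutex);
    }

    void priv_glfs_subvol_done(struct glfs *fs, void *subvol)
    {
        /* Hypothetical defensive check: bail out instead of locking
           through the NULL fs passed in by glfs_preadv_async_cbk. */
        if (fs == NULL)
            return;
        glfs_lock(fs);
        /* ... drop subvol ref, unlock ... */
    }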

Comment 12 surabhi 2016-02-09 06:36:16 UTC
I tried the following tests with the latest build:

1. iozone read test from a Linux CIFS client mounted with vers=3.0 (single client, single threaded): no crash, but the client-side CIFS VFS errors shown below are still present.

******************************************
Feb  8 23:23:51 dhcp46-56 kernel: CIFS VFS: No task to wake, unknown frame received! NumMids 1
Feb  8 23:23:53 dhcp46-56 kernel: CIFS VFS: SMB response too long (313656 bytes)
Feb  8 23:23:53 dhcp46-56 kernel: CIFS VFS: Send error in read = -11
Feb  8 23:23:57 dhcp46-56 kernel: CIFS VFS: SMB response too long (131152 bytes)
Feb  8 23:24:00 dhcp46-56 kernel: CIFS VFS: SMB response too long (262224 bytes)
Feb  8 23:24:01 dhcp46-56 systemd-logind: New session 825 of user root.
Feb  8 23:24:01 dhcp46-56 systemd: Started Session 825 of user root.
Feb  8 23:24:01 dhcp46-56 systemd: Starting Session 825 of user root.
Feb  8 23:24:02 dhcp46-56 kernel: CIFS VFS: SMB response too long (262224 bytes)
Feb  8 23:24:03 dhcp46-56 kernel: CIFS VFS: SMB response too long (262224 bytes)
Feb  8 23:24:03 dhcp46-56 kernel: CIFS VFS: Send error in read = -11
Feb  8 23:24:04 dhcp46-56 kernel: CIFS VFS: SMB response too long (524368 bytes)
Feb  8 23:24:04 dhcp46-56 kernel: CIFS VFS: Send error in read = -11
Feb  8 23:24:06 dhcp46-56 kernel: CIFS VFS: SMB response too long (524368 bytes)
Feb  8 23:24:06 dhcp46-56 kernel: CIFS VFS: Send error in read = -11
Feb  8 23:24:08 dhcp46-56 kernel: CIFS VFS: SMB response too long (1048656 bytes)
*********************************************

2. iozone read test from a Windows client, with the firewall both enabled and disabled: no crash seen (single client and two clients).


The CIFS VFS errors are tracked in another BZ.

Waiting for Ben's perf test results.

Comment 13 Ben Turner 2016-02-09 21:19:17 UTC
Verified on samba-4.2.4-12.el6rhs.x86_64 and glusterfs-3.7.5-19.el6rhs.x86_64.

Comment 15 errata-xmlrpc 2016-03-01 06:08:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html

