Bug 1234877 - Samba crashes with 3.7.4 and VFS module
Summary: Samba crashes with 3.7.4 and VFS module
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: libgfapi
Version: 3.7.5
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: rhs-smb@redhat.com
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On:
Blocks: 1314834
 
Reported: 2015-06-23 12:29 UTC by Denis Lambolez
Modified: 2018-01-26 15:03 UTC
CC List: 11 users

Fixed In Version: 3.7.6
Doc Type: Bug Fix
Doc Text:
Cause: See the following Samba bug: https://bugzilla.samba.org/show_bug.cgi?id=11115
Consequence: Crash of the smbd process (core dump)
Fix: In package 2:4.1.17+dfsg-4ubuntu3glusterfs3.7.6wily2
Result: No more crashes
Clone Of:
: 1314834
Environment:
Last Closed: 2015-11-28 23:06:45 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:


Attachments
Samba core dump (3.63 MB, application/x-bzip)
2015-09-29 13:44 UTC, dijuremo
Samba core dump for both servers (2.14 MB, application/zip)
2015-10-11 20:10 UTC, Denis Lambolez
Samba log for one of the client, on both servers (71.08 KB, application/zip)
2015-10-11 20:41 UTC, Denis Lambolez


Links
System ID Private Priority Status Summary Last Updated
Samba Project 11115 0 None None None 2019-02-13 10:12:37 UTC

Description Denis Lambolez 2015-06-23 12:29:45 UTC
Description of problem:
After upgrading to "3.7.2-ubuntu1~vivid1" for GlusterFS and "2:4.1.13+dfsg-4ubuntu3glusterfs3.7.2vivid1" for the Samba VFS module, I'm experiencing frequent crashes of the smbd daemon. The system was stable with version 3.6.

Actual results:
Here is the output of the Samba panic-action script. It seems there is a problem in glfs_chdir() from /usr/lib/x86_64-linux-gnu/libgfapi.so.0.
============================================================================
The Samba 'panic action' script, /usr/share/samba/panic-action,
was called for PID 3286 (/usr/sbin/smbd).

This means there was a problem with the program, such as a segfault.
Below is a backtrace for this process generated with gdb, which shows
the state of the program at the time the error occurred.  The Samba log
files may contain additional information about the problem.

If the problem persists, you are encouraged to first install the
samba-dbg package, which contains the debugging symbols for the Samba
binaries.  Then submit the provided information as a bug report to
Ubuntu by visiting this link:
https://launchpad.net/ubuntu/+source/samba/+filebug

[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007fc5fcf4389b in __GI___waitpid (pid=12767, stat_loc=stat_loc@entry=0x7ffcf6243f90, options=options@entry=0) at ../sysdeps/unix/sysv/linux/waitpid.c:40
#0  0x00007fc5fcf4389b in __GI___waitpid (pid=12767, stat_loc=stat_loc@entry=0x7ffcf6243f90, options=options@entry=0) at ../sysdeps/unix/sysv/linux/waitpid.c:40
#1  0x00007fc5fcebcffb in do_system (line=line@entry=0x7fc601f34d10 "/usr/share/samba/panic-action 3286") at ../sysdeps/posix/system.c:148
#2  0x00007fc5fcebd3da in __libc_system (line=line@entry=0x7fc601f34d10 "/usr/share/samba/panic-action 3286") at ../sysdeps/posix/system.c:184
#3  0x00007fc6001d7c05 in system (line=line@entry=0x7fc601f34d10 "/usr/share/samba/panic-action 3286") at pt-system.c:28
#4  0x00007fc5fe8612d1 in smb_panic_s3 (why=<optimized out>) at ../source3/lib/util.c:798
#5  0x00007fc5fffaedcf in smb_panic (why=why@entry=0x7fc5fffbb77c "internal error") at ../lib/util/fault.c:159
#6  0x00007fc5fffaefef in fault_report (sig=<optimized out>) at ../lib/util/fault.c:77
#7  sig_fault (sig=<optimized out>) at ../lib/util/fault.c:88
#8  <signal handler called>
#9  0x00007fc5eeb722dd in glfs_chdir () from /usr/lib/x86_64-linux-gnu/libgfapi.so.0
#10 0x00007fc5ffb862a3 in vfs_ChDir (conn=0x7fc601fb1830, path=0x7fc5ffc9c3f6 "/") at ../source3/smbd/vfs.c:840
#11 0x00007fc5ffb9b8fb in close_cnum (conn=0x7fc601fb1830, vuid=3164412539) at ../source3/smbd/service.c:1136
#12 0x00007fc5ffbc4b6c in smbXsrv_tcon_disconnect (tcon=0x7fc601fb06c0, vuid=3164412539) at ../source3/smbd/smbXsrv_tcon.c:977
#13 0x00007fc5ffbc4f52 in smbXsrv_tcon_disconnect_all_callback (local_rec=0x7ffcf6244a00, private_data=0x7ffcf6244ad0) at ../source3/smbd/smbXsrv_tcon.c:1058
#14 0x00007fc5fb539a03 in db_rbt_traverse_internal (db=db@entry=0x7fc601f32da0, n=0x0, f=f@entry=0x7fc5ffbc4ee0 <smbXsrv_tcon_disconnect_all_callback>, private_data=private_data@entry=0x7ffcf6244ad0, count=count@entry=0x7ffcf6244a8c, rw=rw@entry=true) at ../lib/dbwrap/dbwrap_rbt.c:401
#15 0x00007fc5fb539b3f in db_rbt_traverse (db=0x7fc601f32da0, f=0x7fc5ffbc4ee0 <smbXsrv_tcon_disconnect_all_callback>, private_data=0x7ffcf6244ad0) at ../lib/dbwrap/dbwrap_rbt.c:427
#16 0x00007fc5fb53860a in dbwrap_traverse (db=<optimized out>, f=f@entry=0x7fc5ffbc4ee0 <smbXsrv_tcon_disconnect_all_callback>, private_data=private_data@entry=0x7ffcf6244ad0, count=count@entry=0x7ffcf6244acc) at ../lib/dbwrap/dbwrap.c:353
#17 0x00007fc5ffbc3a39 in smbXsrv_tcon_disconnect_all (table=<optimized out>, vuid=<optimized out>) at ../source3/smbd/smbXsrv_tcon.c:1007
#18 0x00007fc5ffbc50ca in smb2srv_tcon_disconnect_all (session=session@entry=0x7fc601f33a20) at ../source3/smbd/smbXsrv_tcon.c:1165
#19 0x00007fc5ffbc26f7 in smbXsrv_session_logoff (session=0x7fc601f33a20) at ../source3/smbd/smbXsrv_session.c:1387
#20 0x00007fc5ffbc2aa2 in smbXsrv_session_logoff_all_callback (local_rec=0x7ffcf6244ba0, private_data=0x7ffcf6244c70) at ../source3/smbd/smbXsrv_session.c:1473
#21 0x00007fc5fb539a03 in db_rbt_traverse_internal (db=db@entry=0x7fc601f30f90, n=0x0, f=f@entry=0x7fc5ffbc2a50 <smbXsrv_session_logoff_all_callback>, private_data=private_data@entry=0x7ffcf6244c70, count=count@entry=0x7ffcf6244c2c, rw=rw@entry=true) at ../lib/dbwrap/dbwrap_rbt.c:401
#22 0x00007fc5fb539b3f in db_rbt_traverse (db=0x7fc601f30f90, f=0x7fc5ffbc2a50 <smbXsrv_session_logoff_all_callback>, private_data=0x7ffcf6244c70) at ../lib/dbwrap/dbwrap_rbt.c:427
#23 0x00007fc5fb53860a in dbwrap_traverse (db=<optimized out>, f=f@entry=0x7fc5ffbc2a50 <smbXsrv_session_logoff_all_callback>, private_data=private_data@entry=0x7ffcf6244c70, count=count@entry=0x7ffcf6244c6c) at ../lib/dbwrap/dbwrap.c:353
#24 0x00007fc5ffbc2aeb in smbXsrv_session_logoff_all (conn=conn@entry=0x7fc601f22050) at ../source3/smbd/smbXsrv_session.c:1428
#25 0x00007fc5ffbc7a26 in exit_server_common (how=how@entry=SERVER_EXIT_NORMAL, reason=0x7fc5ffc8c26f "termination signal") at ../source3/smbd/server_exit.c:138
#26 0x00007fc5ffbc7e9e in smbd_exit_server_cleanly (explanation=<optimized out>) at ../source3/smbd/server_exit.c:238
#27 0x00007fc5fe1b8b32 in exit_server_cleanly (reason=reason@entry=0x7fc5ffc8c26f "termination signal") at ../source3/lib/smbd_shim.c:113
#28 0x00007fc5ffb92ce0 in smbd_sig_term_handler (ev=<optimized out>, se=<optimized out>, signum=<optimized out>, count=<optimized out>, siginfo=<optimized out>, private_data=<optimized out>) at ../source3/smbd/process.c:903
#29 0x00007fc5fd249fcf in tevent_common_check_signal () from /usr/lib/x86_64-linux-gnu/libtevent.so.0
#30 0x00007fc5fe877674 in run_events_poll (ev=0x7fc601f1cfc0, pollrtn=-1, pfds=0x7fc601f23e60, num_pfds=3) at ../source3/lib/events.c:187
#31 0x00007fc5fe877a37 in s3_event_loop_once (ev=0x7fc601f1cfc0, location=<optimized out>) at ../source3/lib/events.c:326
#32 0x00007fc5fd2469ad in _tevent_loop_once () from /usr/lib/x86_64-linux-gnu/libtevent.so.0
#33 0x00007fc5ffb990cc in smbd_process (ev_ctx=0x7fc601f1cfc0, msg_ctx=0x7fc6001c67c0 <DEBUGLEVEL_CLASS>, sock_fd=32721024, interactive=64) at ../source3/smbd/process.c:3695
#34 0x00007fc600615100 in smbd_accept_connection (ev=0x7fc601f1cfc0, fde=<optimized out>, flags=<optimized out>, private_data=<optimized out>) at ../source3/smbd/server.c:610
#35 0x00007fc5fe8777c1 in run_events_poll (ev=0x7fc601f1cfc0, pollrtn=<optimized out>, pfds=0x7fc601f23e60, num_pfds=5) at ../source3/lib/events.c:257
#36 0x00007fc5fe877a37 in s3_event_loop_once (ev=0x7fc601f1cfc0, location=<optimized out>) at ../source3/lib/events.c:326
#37 0x00007fc5fd2469ad in _tevent_loop_once () from /usr/lib/x86_64-linux-gnu/libtevent.so.0
#38 0x00007fc600611e43 in smbd_parent_loop (parent=<optimized out>, ev_ctx=<optimized out>) at ../source3/smbd/server.c:934
#39 main (argc=32624576, argv=0x7fc601f1d7b0) at ../source3/smbd/server.c:1566
A debugging session is active.

        Inferior 1 [process 3286] will be detached.

Quit anyway? (y or n) [answered Y; input not from terminal]
============================================================================

Additional info:
The Samba server runs in cluster mode (CTDB). Shares are stored on a Gluster replicated volume and are exposed through Samba VFS module.

Comment 1 André Bauer 2015-06-23 12:36:00 UTC
The samba packages can be found here:

https://launchpad.net/~monotek/+archive/ubuntu/samba-vfs-glusterfs-3.7

Comment 2 André Bauer 2015-07-29 18:01:20 UTC
Just uploaded new packages which are built against Gluster 3.7.3.

https://launchpad.net/~monotek/+archive/ubuntu/samba-vfs-glusterfs-3.7

Comment 3 Denis Lambolez 2015-08-09 17:58:02 UTC
Just tested with GlusterFS 3.7.3 and the new package from André. Same behaviour, no change. Here is the output of the Samba panic-action script. The problem is still within libgfapi.so.0; this time it seems to be in glfs_resolve_at().

==========================================================================

The Samba 'panic action' script, /usr/share/samba/panic-action,
was called for PID 26859 (/usr/sbin/smbd).

This means there was a problem with the program, such as a segfault.
Below is a backtrace for this process generated with gdb, which shows
the state of the program at the time the error occurred.  The Samba log
files may contain additional information about the problem.

If the problem persists, you are encouraged to first install the
samba-dbg package, which contains the debugging symbols for the Samba
binaries.  Then submit the provided information as a bug report to
Ubuntu by visiting this link:
https://launchpad.net/ubuntu/+source/samba/+filebug

[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f26a19d489b in __GI___waitpid (pid=26868, stat_loc=stat_loc@entry=0x7ffdb5695550, options=options@entry=0) at ../sysdeps/unix/sysv/linux/waitpid.c:40
#0  0x00007f26a19d489b in __GI___waitpid (pid=26868, stat_loc=stat_loc@entry=0x7ffdb5695550, options=options@entry=0) at ../sysdeps/unix/sysv/linux/waitpid.c:40
#1  0x00007f26a194dffb in do_system (line=line@entry=0x7f26a54cf1b0 "/usr/share/samba/panic-action 26859") at ../sysdeps/posix/system.c:148
#2  0x00007f26a194e3da in __libc_system (line=line@entry=0x7f26a54cf1b0 "/usr/share/samba/panic-action 26859") at ../sysdeps/posix/system.c:184
#3  0x00007f26a4c68c05 in system (line=line@entry=0x7f26a54cf1b0 "/usr/share/samba/panic-action 26859") at pt-system.c:28
#4  0x00007f26a32f22d1 in smb_panic_s3 (why=<optimized out>) at ../source3/lib/util.c:798
#5  0x00007f26a4a3fdcf in smb_panic (why=why@entry=0x7f26a4a4c77c "internal error") at ../lib/util/fault.c:159
#6  0x00007f26a4a3ffef in fault_report (sig=<optimized out>) at ../lib/util/fault.c:77
#7  sig_fault (sig=<optimized out>) at ../lib/util/fault.c:88
#8  <signal handler called>
#9  0x00007f26931fe2b1 in glfs_resolve_at () from /usr/lib/x86_64-linux-gnu/libgfapi.so.0
#10 0x00007f26931ff71f in ?? () from /usr/lib/x86_64-linux-gnu/libgfapi.so.0
#11 0x00007f26931ff7a1 in glfs_resolve () from /usr/lib/x86_64-linux-gnu/libgfapi.so.0
#12 0x00007f26931fc36f in glfs_chdir () from /usr/lib/x86_64-linux-gnu/libgfapi.so.0
#13 0x00007f26a46172a3 in vfs_ChDir (conn=0x7f26a54d17a0, path=0x7f26a472d3f6 "/") at ../source3/smbd/vfs.c:840
#14 0x00007f26a462c8fb in close_cnum (conn=0x7f26a54d17a0, vuid=52003) at ../source3/smbd/service.c:1136
#15 0x00007f26a4655b6c in smbXsrv_tcon_disconnect (tcon=0x7f26a54cad40, vuid=52003) at ../source3/smbd/smbXsrv_tcon.c:977
#16 0x00007f26a4655f52 in smbXsrv_tcon_disconnect_all_callback (local_rec=0x7ffdb56960c0, private_data=0x7ffdb5696190) at ../source3/smbd/smbXsrv_tcon.c:1058
#17 0x00007f269ffcaa03 in db_rbt_traverse_internal (db=db@entry=0x7f26a54c1cf0, n=0x0, f=f@entry=0x7f26a4655ee0 <smbXsrv_tcon_disconnect_all_callback>, private_data=private_data@entry=0x7ffdb5696190, count=count@entry=0x7ffdb569614c, rw=rw@entry=true) at ../lib/dbwrap/dbwrap_rbt.c:401
#18 0x00007f269ffcab3f in db_rbt_traverse (db=0x7f26a54c1cf0, f=0x7f26a4655ee0 <smbXsrv_tcon_disconnect_all_callback>, private_data=0x7ffdb5696190) at ../lib/dbwrap/dbwrap_rbt.c:427
#19 0x00007f269ffc960a in dbwrap_traverse (db=<optimized out>, f=f@entry=0x7f26a4655ee0 <smbXsrv_tcon_disconnect_all_callback>, private_data=private_data@entry=0x7ffdb5696190, count=count@entry=0x7ffdb569618c) at ../lib/dbwrap/dbwrap.c:353
#20 0x00007f26a4654a39 in smbXsrv_tcon_disconnect_all (table=<optimized out>, vuid=vuid@entry=0) at ../source3/smbd/smbXsrv_tcon.c:1007
#21 0x00007f26a4656012 in smb1srv_tcon_disconnect_all (conn=conn@entry=0x7f26a54b1d60) at ../source3/smbd/smbXsrv_tcon.c:1119
#22 0x00007f26a4658a14 in exit_server_common (how=how@entry=SERVER_EXIT_NORMAL, reason=0x7f26a4753d9f "failed to receive smb request") at ../source3/smbd/server_exit.c:127
#23 0x00007f26a4658e9e in smbd_exit_server_cleanly (explanation=<optimized out>) at ../source3/smbd/server_exit.c:238
#24 0x00007f26a2c49b32 in exit_server_cleanly (reason=<optimized out>) at ../source3/lib/smbd_shim.c:113
#25 0x00007f26a4628ccc in smbd_server_connection_read_handler (sconn=0x7f26a54bef30, fd=33) at ../source3/smbd/process.c:2433
#26 0x00007f26a33087c1 in run_events_poll (ev=0x7f26a54acfc0, pollrtn=<optimized out>, pfds=0x7f26a54b2990, num_pfds=3) at ../source3/lib/events.c:257
#27 0x00007f26a3308a37 in s3_event_loop_once (ev=0x7f26a54acfc0, location=<optimized out>) at ../source3/lib/events.c:326
#28 0x00007f26a1cd79ad in _tevent_loop_once () from /usr/lib/x86_64-linux-gnu/libtevent.so.0
#29 0x00007f26a462a0cc in smbd_process (ev_ctx=0x7f26a54acfc0, msg_ctx=0x7f26a4c577c0 <DEBUGLEVEL_CLASS>, sock_fd=-1521750224, interactive=4) at ../source3/smbd/process.c:3695
#30 0x00007f26a50a6100 in smbd_accept_connection (ev=0x7f26a54acfc0, fde=<optimized out>, flags=<optimized out>, private_data=<optimized out>) at ../source3/smbd/server.c:610
#31 0x00007f26a33087c1 in run_events_poll (ev=0x7f26a54acfc0, pollrtn=<optimized out>, pfds=0x7f26a54b2990, num_pfds=5) at ../source3/lib/events.c:257
#32 0x00007f26a3308a37 in s3_event_loop_once (ev=0x7f26a54acfc0, location=<optimized out>) at ../source3/lib/events.c:326
#33 0x00007f26a1cd79ad in _tevent_loop_once () from /usr/lib/x86_64-linux-gnu/libtevent.so.0
#34 0x00007f26a50a2e43 in smbd_parent_loop (parent=<optimized out>, ev_ctx=<optimized out>) at ../source3/smbd/server.c:934
#35 main (argc=-1521823808, argv=0x7f26a54ad7b0) at ../source3/smbd/server.c:1566
A debugging session is active.

        Inferior 1 [process 26859] will be detached.

Quit anyway? (y or n) [answered Y; input not from terminal]

Comment 4 André Bauer 2015-09-02 12:03:42 UTC
Just uploaded new packages which are built against Gluster 3.7.4.

https://launchpad.net/~monotek/+archive/ubuntu/samba-vfs-glusterfs-3.7

Comment 5 Denis Lambolez 2015-09-09 21:23:23 UTC
(In reply to André Bauer from comment #4)
> Just uploaded new packages which are built against Gluster 3.7.4.
> 
> https://launchpad.net/~monotek/+archive/ubuntu/samba-vfs-glusterfs-3.7

Just tested with GlusterFS 3.7.4 and the new package from André. Same behaviour, no change. The crash is still in the same area; same dump.

Comment 6 dijuremo 2015-09-09 22:29:35 UTC
I am having the same problem: I upgraded glusterfs from 3.6.x to 3.7.3 and now samba is producing core dumps.

# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)

# rpm -qa | grep glusterfs
samba-vfs-glusterfs-4.1.12-23.el7_1.x86_64
glusterfs-libs-3.7.3-1.el7.x86_64
glusterfs-cli-3.7.3-1.el7.x86_64
glusterfs-client-xlators-3.7.3-1.el7.x86_64
glusterfs-server-3.7.3-1.el7.x86_64
glusterfs-api-3.7.3-1.el7.x86_64
glusterfs-rdma-3.7.3-1.el7.x86_64
glusterfs-fuse-3.7.3-1.el7.x86_64
glusterfs-3.7.3-1.el7.x86_64

Do you need me to upload any of the core dumps?

Comment 7 Anoop C S 2015-09-24 08:28:58 UTC
Can you explain the GlusterFS volume configuration, the Samba-CTDB setup, and the procedure followed which resulted in smbd dumping core?

Comment 8 dijuremo 2015-09-24 12:45:41 UTC
Gluster volume:

[root@ysmha01 core]# gluster volume info export 
Volume Name: export
Type: Replicate
Volume ID: b4353b3f-6ef6-4813-819a-8e85e5a95cff
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.0.1.7:/bricks/hdds/brick
Brick2: 10.0.1.6:/bricks/hdds/brick
Options Reconfigured:
performance.io-cache: on
performance.io-thread-count: 64
nfs.disable: on
cluster.server-quorum-type: server
performance.cache-size: 1024MB
server.allow-insecure: on
cluster.server-quorum-ratio: 51%

Samba configuration for each share has the entries:

   kernel share modes = No
   vfs objects = glusterfs
   glusterfs:loglevel = 7
   glusterfs:logfile = /var/log/samba/glusterfs-homes.log
   glusterfs:volume = export

I am currently not even using ctdb; I just manually start samba on one server.

[root@ysmha01 core]# rpm -qa | grep samba
samba-vfs-glusterfs-4.1.12-23.el7_1.x86_64
samba-common-4.1.12-23.el7_1.x86_64
samba-4.1.12-23.el7_1.x86_64
samba-libs-4.1.12-23.el7_1.x86_64

Everything was working just fine on glusterfs 3.6.5. After upgrading to 3.7.3, samba started core dumping.

What specific information do you want?

Comment 9 Anoop C S 2015-09-29 11:33:21 UTC
Need some more info:

[1] On which platform are you working (CentOS, RHEL)? For CentOS 7, I see that the latest samba version is 4.1.12-21.el7_1 (http://mirror.centos.org/centos/7/os/x86_64/Packages/).

[2] Have you installed the glusterfs packages from download.gluster.org?

[3] Can you please share the general workload/procedure that causes the core dump, so we can try to reproduce the situation and better debug the issue?

Comment 10 dijuremo 2015-09-29 13:07:50 UTC
[1]* Answer

[root@ysmha01 ~]# cat /etc/redhat-release 
CentOS Linux release 7.1.1503 (Core) 

[root@ysmha01 ~]# yum info samba-4.1.12-23.el7_1.x86_64
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.greenmountainaccess.net
 * extras: centos.den.host-engine.com
 * ovirt-3.5-epel: mirror.us.leaseweb.net
 * updates: centos.mirrors.tds.net
Installed Packages
Name        : samba
Arch        : x86_64
Version     : 4.1.12
Release     : 23.el7_1
Size        : 1.6 M
Repo        : installed
From repo   : updates
Summary     : Server and Client software to interoperate with Windows machines
URL         : http://www.samba.org/
License     : GPLv3+ and LGPLv3+
Description : Samba is the standard Windows interoperability suite of programs for Linux and Unix.

The updates repo section shows:

[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

You are looking at CentOS 7.0 instead of 7.1, which has samba-4.1.12-23.el7_1.x86_64:

http://mirror.centos.org/centos/7.1.1503/os/x86_64/Packages/samba-4.1.12-21.el7_1.x86_64.rpm


[2]* Answer
I have the gluster packages from:

[root@ysmha01 ~]# yum info glusterfs
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.greenmountainaccess.net
 * extras: centos.den.host-engine.com
 * ovirt-3.5-epel: mirror.us.leaseweb.net                                                                                                                                                   
 * updates: centos.mirrors.tds.net                                                                                                                                                          
Installed Packages                                                                                                                                                                          
Name        : glusterfs
Arch        : x86_64
Version     : 3.7.3
Release     : 1.el7
Size        : 1.6 M
Repo        : installed
From repo   : ovirt-3.5-glusterfs-epel
Summary     : Cluster File System
URL         : http://www.gluster.org/docs/index.php/GlusterFS
License     : GPLv2 or LGPLv3+
Description : GlusterFS is a distributed file-system capable of scaling to several
            : petabytes. It aggregates various storage bricks over Infiniband RDMA
            : or TCP/IP interconnect into one large parallel network file
            : system. GlusterFS is one of the most sophisticated file systems in
            : terms of features and extensibility.  It borrows a powerful concept
            : called Translators from GNU Hurd kernel. Much of the code in GlusterFS
            : is in user space and easily manageable.
            : 
            : This package includes the glusterfs binary, the glusterfsd daemon and the
            : libglusterfs and glusterfs translator modules common to both GlusterFS server
            : and client framework.


[3]* Answer
I have no specific procedure to produce the core dumps; they are just produced by users accessing the server via the Samba shares. All shares currently use the Samba VFS gluster feature. The core dumps started just after the upgrade from Gluster 3.6.x to 3.7.x. I have not had a chance to downgrade to 3.6 nor upgrade to 3.7.4 to see if that helps at all.

Here is a list of core dumps by time since 00:00 today. I will be happy to upload one for you if you want to run it through the debugger:

[root@ysmha01 core]# file core.6502.1443531854.dump
core.6502.1443531854.dump: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from '/usr/sbin/smbd'

[root@ysmha01 core]# ls -l --sort=time
total 1040848
-rw-------. 1 root root 67354624 Sep 29 09:04 core.6502.1443531854.dump
-rw-------. 1 root root 62304256 Sep 29 08:36 core.23166.1443530215.dump
-rw-------. 1 root root 59539456 Sep 29 08:08 core.29407.1443528538.dump
-rw-------. 1 root root 57675776 Sep 29 08:01 core.26438.1443528119.dump
-rw-------. 1 root root 63483904 Sep 29 07:16 core.4331.1443525364.dump
-rw-------. 1 root root 61505536 Sep 29 06:45 core.24178.1443523500.dump
-rw-------. 1 root root 67715072 Sep 29 06:28 core.25758.1443522504.dump
-rw-------. 1 root root 67682304 Sep 29 06:22 core.23897.1443522148.dump
-rw-------. 1 root root 63135744 Sep 29 05:35 core.22984.1443519340.dump
-rw-------. 1 root root 58912768 Sep 29 05:35 core.21996.1443519312.dump
-rw-------. 1 root root 58888192 Sep 29 05:34 core.21670.1443519263.dump
-rw-------. 1 root root 58945536 Sep 29 05:33 core.21142.1443519233.dump
-rw-------. 1 root root 58986496 Sep 29 05:33 core.20819.1443519210.dump
-rw-------. 1 root root 58892288 Sep 29 05:33 core.20286.1443519190.dump
-rw-------. 1 root root 57675776 Sep 29 05:32 core.19898.1443519151.dump
-rw-------. 1 root root 58904576 Sep 29 05:32 core.19343.1443519130.dump
-rw-------. 1 root root 58966016 Sep 29 05:31 core.17579.1443519060.dump
-rw-------. 1 root root 58826752 Sep 29 05:24 core.12325.1443518662.dump
-rw-------. 1 root root 57675776 Sep 29 05:23 core.12095.1443518628.dump
-rw-------. 1 root root 58945536 Sep 29 05:23 core.11661.1443518608.dump
-rw-------. 1 root root 57671680 Sep 29 05:17 core.6417.1443518252.dump
-rw-------. 1 root root 58871808 Sep 29 05:17 core.6052.1443518228.dump
-rw-------. 1 root root 60137472 Sep 29 05:07 core.30876.1443517655.dump
-rw-------. 1 root root 57532416 Sep 29 05:00 core.16957.1443517236.dump
-rw-------. 1 root root 59256832 Sep 29 04:47 core.11112.1443516445.dump
-rw-------. 1 root root 59236352 Sep 29 04:45 core.9295.1443516325.dump
-rw-------. 1 root root 60268544 Sep 29 04:45 core.5285.1443516301.dump
-rw-------. 1 root root 59240448 Sep 29 04:44 core.7476.1443516254.dump
-rw-------. 1 root root 57696256 Sep 29 04:37 core.1972.1443515822.dump
-rw-------. 1 root root 64811008 Sep 29 04:36 core.1531.1443515807.dump
-rw-------. 1 root root 59228160 Sep 29 04:30 core.28017.1443515431.dump
-rw-------. 1 root root 59674624 Sep 29 03:26 core.11734.1443511619.dump
-rw-------. 1 root root 57556992 Sep 29 03:22 core.31346.1443511341.dump
-rw-------. 1 root root 59392000 Sep 29 03:22 core.30577.1443511324.dump
-rw-------. 1 root root 57552896 Sep 29 03:21 core.30112.1443511267.dump
-rw-------. 1 root root 59346944 Sep 29 03:20 core.29235.1443511237.dump
-rw-------. 1 root root 57556992 Sep 29 03:19 core.28504.1443511161.dump
-rw-------. 1 root root 59322368 Sep 29 03:18 core.27591.1443511134.dump
-rw-------. 1 root root 57667584 Sep 29 03:18 core.27311.1443511080.dump
-rw-------. 1 root root 59318272 Sep 29 03:17 core.26885.1443511067.dump
-rw-------. 1 root root 64516096 Sep 29 03:03 core.13713.1443510199.dump
-rw-------. 1 root root 58908672 Sep 29 03:01 core.12424.1443510111.dump
-rw-------. 1 root root 57671680 Sep 29 03:00 core.11462.1443510040.dump
-rw-------. 1 root root 58859520 Sep 29 03:00 core.10719.1443510019.dump
-rw-------. 1 root root 57671680 Sep 29 02:55 core.6832.1443509729.dump
-rw-------. 1 root root 58888192 Sep 29 02:55 core.6362.1443509702.dump
-rw-------. 1 root root 57671680 Sep 29 02:45 core.30494.1443509159.dump
-rw-------. 1 root root 58925056 Sep 29 02:45 core.29922.1443509137.dump
-rw-------. 1 root root 58941440 Sep 29 02:45 core.28703.1443509107.dump
-rw-------. 1 root root 58904576 Sep 29 02:43 core.27644.1443509005.dump
-rw-------. 1 root root 64491520 Sep 29 02:42 core.26854.1443508949.dump
-rw-------. 1 root root 67346432 Sep 29 01:02 core.14361.1443502958.dump
-rw-------. 1 root root 65433600 Sep 29 00:45 core.10603.1443501907.dump

Let me know what other information I can provide.
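[Editorial aside, not part of the original report: the core filenames above embed the crashing PID and a Unix timestamp (core.<pid>.<epoch>.dump), so crash times and frequency can be recovered without opening the dumps. A minimal sketch, using two filenames taken from the listing above:]

```python
from datetime import datetime, timezone

def crash_time(name: str) -> datetime:
    # Filenames follow the pattern core.<pid>.<epoch>.dump, so the
    # crash time can be recovered from the name alone.
    _, pid, epoch, _ = name.split(".")
    return datetime.fromtimestamp(int(epoch), tz=timezone.utc)

# First and last dumps from the listing above.
names = ["core.6502.1443531854.dump", "core.10603.1443501907.dump"]
times = sorted(crash_time(n) for n in names)
span_hours = (times[-1] - times[0]).total_seconds() / 3600
print(times[0].isoformat(), f"span: {span_hours:.1f}h")
```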

Comment 11 dijuremo 2015-09-29 13:11:31 UTC
**Correction: the latest samba package comes from the updates repo:

http://mirror.centos.org/centos/7.1.1503/updates/x86_64/Packages/samba-test-4.1.12-23.el7_1.x86_64.rpm

Comment 12 Anoop C S 2015-09-29 13:36:36 UTC
Of course, it would be good if you could upload one of the core dumps listed above. Many thanks for the detailed reply.

Comment 13 dijuremo 2015-09-29 13:44:45 UTC
Created attachment 1078347 [details]
Samba core dump

Attached compressed tar file for core.6502.1443531854.dump

Comment 14 Denis Lambolez 2015-10-04 18:28:21 UTC
(In reply to Anoop C S from comment #7)
> Can you explain the GlusterFS volume configuration, the Samba-CTDB setup,
> and the procedure followed which resulted in smbd dumping core?

Sorry, I was out for a while on business trips.
Here is the requested information. I have one GlusterFS volume (replicate) that I use to store the Samba shares. I use Samba-CTDB to balance the access load between the two servers. The shares are published by Samba through André's VFS object. smb.conf is shared between the two servers (the GlusterFS volume is also mounted on both servers).
There is no specific action needed to crash Samba: as soon as you start Samba, it starts to crash and restart.
This configuration was running perfectly well with 3.6.

GlusterFS volume configuration:
-------------------------------
matou@catsserver-1:~$ sudo gluster vol info smbshare

Volume Name: smbshare
Type: Replicate
Volume ID: 40bfc10d-6f7a-45cf-81ba-0e4d531da890
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: catsserver-node1:/srv/gluster/bricks/smbshare
Brick2: catsserver-node2:/srv/gluster/bricks/smbshare
Options Reconfigured:
nfs.disable: on
server.allow-insecure: on

CTDB status:
------------
matou@catsserver-1:~$ sudo ctdb status
Number of nodes:2
pnn:0 192.168.0.21     OK (THIS NODE)
pnn:1 192.168.0.22     OK
Generation:1125798900
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:0

Shares configuration (same configuration for each share):
---------------------------------------------------------
[Share]
path = /share
wide links = no
writeable = yes
kernel share modes = no
vfs objects = glusterfs
glusterfs:volfile_server = localhost
glusterfs:volume = smbshare
glusterfs:logfile = /var/log/samba/glusterfs-share.log
glusterfs:loglevel = 7

CTDB, Samba & Gluster packages:
-------------------------------
ctdb/vivid,now 2.5.4+debian0-4 amd64  [installed]
ctdb-dbg/vivid 2.5.4+debian0-4 amd64
ctdb-pcp-pmda/vivid 2.5.4+debian0-4 amd64
samba/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64  [installed]
samba-common/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 all  [installed]
samba-common-bin/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64  [installed]
samba-dbg/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64  [installed]
samba-dev/vivid 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64
samba-doc/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 all  [installed]
samba-dsdb-modules/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64  [installed]
samba-libs/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64  [installed]
samba-testsuite/vivid 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64
samba-vfs-modules/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64  [installed]
smbclient/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64  [installed]
glusterfs-client/vivid,now 3.7.4-ubuntu1~vivid1 amd64  [installed,automatic]
glusterfs-common/vivid,now 3.7.4-ubuntu1~vivid1 amd64  [installed,automatic]
glusterfs-dbg/vivid 3.7.4-ubuntu1~vivid1 amd64
glusterfs-server/vivid,now 3.7.4-ubuntu1~vivid1 amd64  [installed]

Comment 15 Anoop C S 2015-10-06 12:56:13 UTC
(In reply to Denis Lambolez from comment #14)
> (In reply to Anoop C S from comment #7)
> > Can you explain the GlusterFS volume configuration, the Samba-CTDB setup,
> > and the procedure followed which resulted in smbd dumping core?
> 
> Sorry, I was out for a while on business trips.
> Here is the requested information. I have one GlusterFS volume (replicate)
> that I use to store the Samba shares. I use Samba-CTDB to balance the access
> load between the two servers. The shares are published by Samba through
> André's VFS object. smb.conf is shared between the two servers (the GlusterFS
> volume is also mounted on both servers).
> There is no specific action needed to crash Samba: as soon as you start
> Samba, it starts to crash and restart.

Thanks for the info. I tried to reproduce this by recreating your Samba-CTDB-GlusterFS setup on two Ubuntu 15.04 VMs with https://launchpad.net/~monotek/+archive/ubuntu/samba-vfs-glusterfs-3.7 enabled and the following packages, and I couldn't see any core dumps from smbd. If possible, can you please upload one of the core dump files?

samba/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64 [installed]
samba-common/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 all [installed]
samba-common-bin/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64 [installed]
samba-dbg/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64 [installed]
samba-dev/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64 [installed]
samba-doc/vivid,now 2:4.1.13+dfsg-4ubuntu3 all [installed,upgradable to: 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1]
samba-dsdb-modules/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64 [installed,automatic]
samba-libs/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64 [installed]
samba-testsuite/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64 [installed]
samba-vfs-modules/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64 [installed]
smbclient/vivid,now 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.4vivid1 amd64 [installed]
ctdb/vivid,now 2.5.4+debian0-4 amd64 [installed]
ctdb-dbg/vivid,now 2.5.4+debian0-4 amd64 [installed]
ctdb-pcp-pmda/vivid,now 2.5.4+debian0-4 amd64 [installed]
glusterfs-client/now 3.7.4-ubuntu1~vivid2 amd64 [installed,local]
glusterfs-common/now 3.7.4-ubuntu1~vivid2 amd64 [installed,local]
glusterfs-dbg/now 3.7.4-ubuntu1~vivid2 amd64 [installed,local]
glusterfs-server/now 3.7.4-ubuntu1~vivid2 amd64 [installed,local]

Comment 16 Denis Lambolez 2015-10-11 20:10:44 UTC
Created attachment 1081805 [details]
Samba core dump for both servers

Comment 17 Denis Lambolez 2015-10-11 20:41:40 UTC
Created attachment 1081807 [details]
Samba log for one of the client, on both servers

Comment 18 Denis Lambolez 2015-10-11 20:48:04 UTC
Hi,

I've uploaded the core dumps from both servers. 
I have also uploaded the Samba log files for one of the clients (catspc) from both servers. I see a lot of 'Conversion error: Incomplete multibyte sequence' messages in those log files. Could we have a problem with multibyte strings and accented characters in file names?

Comment 19 W. Andrew Denton 2015-10-19 19:23:41 UTC
I'm also seeing the same Samba crash on CentOS 7 when using the glusterfs VFS module:

glusterfs-client-xlators-3.7.5-1.el7.x86_64
samba-4.1.12-23.el7_1.x86_64
samba-winbind-4.1.12-23.el7_1.x86_64
glusterfs-3.7.5-1.el7.x86_64
glusterfs-api-3.7.5-1.el7.x86_64
samba-libs-4.1.12-23.el7_1.x86_64
samba-common-4.1.12-23.el7_1.x86_64
samba-winbind-modules-4.1.12-23.el7_1.x86_64
samba-winbind-krb5-locator-4.1.12-23.el7_1.x86_64
samba-vfs-glusterfs-4.1.12-23.el7_1.x86_64
glusterfs-libs-3.7.5-1.el7.x86_64
glusterfs-fuse-3.7.5-1.el7.x86_64

[data]
        path = /data
        comment = Data
        browsable = yes
        writable = yes
        kernel share modes = no
        vfs objects = glusterfs
        glusterfs:volfile_server = gluster
        glusterfs:volume = customer0
        glusterfs:logfile = /var/log/samba/gluster-customer0-data.log
        glusterfs:loglevel = 7


Volume Name: customer0
Type: Distribute
Volume ID: 041f57b4-2c88-4dc9-8e89-afe333747e5a
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.254.28:/mnt/brick0/customer0
Brick2: 192.168.254.17:/mnt/brick0/customer0
Options Reconfigured:
features.inode-quota: off
features.quota: off
performance.readdir-ahead: on

Comment 20 Denis Lambolez 2015-10-24 10:44:58 UTC
Hi,
Just tested with GlusterFS 3.7.5 and the new package from André:
- samba: 2:4.1.13+dfsg-4ubuntu3glusterfs3.7.5vivid1 
- glusterfs: 3.7.5-ubuntu1~vivid1 

Same behaviour, no change. Here is the dump of the samba panic action script. The problem is still within libgfapi.so.0.

===============================================================
[2015/10/24 12:36:10.166977,  0] ../source3/lib/util.c:785(smb_panic_s3)
  PANIC (pid 25385): internal error
[2015/10/24 12:36:10.167893,  0] ../source3/lib/util.c:896(log_stack_trace)
  BACKTRACE: 33 stack frames:
   #0 /usr/lib/x86_64-linux-gnu/libsmbconf.so.0(log_stack_trace+0x1a) [0x7f7cba9d319a]
   #1 /usr/lib/x86_64-linux-gnu/libsmbconf.so.0(smb_panic_s3+0x20) [0x7f7cba9d3280]
   #2 /usr/lib/x86_64-linux-gnu/libsamba-util.so.0(smb_panic+0x2f) [0x7f7cbc120dcf]
   #3 /usr/lib/x86_64-linux-gnu/libsamba-util.so.0(+0x1afef) [0x7f7cbc120fef]
   #4 /lib/x86_64-linux-gnu/libpthread.so.0(+0x10d10) [0x7f7cbc349d10]
   #5 /usr/lib/x86_64-linux-gnu/libgfapi.so.0(glfs_resolve_at+0x231) [0x7f7caa8df7e1]
   #6 /usr/lib/x86_64-linux-gnu/libgfapi.so.0(+0x14c4f) [0x7f7caa8e0c4f]
   #7 /usr/lib/x86_64-linux-gnu/libgfapi.so.0(glfs_resolve+0x11) [0x7f7caa8e0cd1]
   #8 /usr/lib/x86_64-linux-gnu/libgfapi.so.0(glfs_chdir+0xa4) [0x7f7caa8dd804]
   #9 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(vfs_ChDir+0x63) [0x7f7cbbcf82a3]
   #10 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(close_cnum+0x6b) [0x7f7cbbd0d8fb]
   #11 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(smbXsrv_tcon_disconnect+0x12c) [0x7f7cbbd36b6c]
   #12 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(+0x141f52) [0x7f7cbbd36f52]
   #13 /usr/lib/x86_64-linux-gnu/samba/libdbwrap.so.0(+0x4a03) [0x7f7cb76aba03]
   #14 /usr/lib/x86_64-linux-gnu/samba/libdbwrap.so.0(+0x4b3f) [0x7f7cb76abb3f]
   #15 /usr/lib/x86_64-linux-gnu/samba/libdbwrap.so.0(dbwrap_traverse+0xa) [0x7f7cb76aa60a]
   #16 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(+0x140a39) [0x7f7cbbd35a39]
   #17 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(smb1srv_tcon_disconnect_all+0x12) [0x7f7cbbd37012]
   #18 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(+0x144a14) [0x7f7cbbd39a14]
   #19 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(+0x144e9e) [0x7f7cbbd39e9e]
   #20 /usr/lib/x86_64-linux-gnu/samba/libsmbd_shim.so.0(exit_server_cleanly+0x12) [0x7f7cba32ab32]
   #21 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(+0x114ccc) [0x7f7cbbd09ccc]
   #22 /usr/lib/x86_64-linux-gnu/libsmbconf.so.0(run_events_poll+0x171) [0x7f7cba9e97c1]
   #23 /usr/lib/x86_64-linux-gnu/libsmbconf.so.0(+0x37a37) [0x7f7cba9e9a37]
   #24 /usr/lib/x86_64-linux-gnu/libtevent.so.0(_tevent_loop_once+0x8d) [0x7f7cb93b89ad]
   #25 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(smbd_process+0xc7c) [0x7f7cbbd0b0cc]
   #26 /usr/sbin/smbd(+0xa100) [0x7f7cbc787100]
   #27 /usr/lib/x86_64-linux-gnu/libsmbconf.so.0(run_events_poll+0x171) [0x7f7cba9e97c1]
   #28 /usr/lib/x86_64-linux-gnu/libsmbconf.so.0(+0x37a37) [0x7f7cba9e9a37]
   #29 /usr/lib/x86_64-linux-gnu/libtevent.so.0(_tevent_loop_once+0x8d) [0x7f7cb93b89ad]
   #30 /usr/sbin/smbd(main+0x1573) [0x7f7cbc783e43]
   #31 /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7f7cb900ba40]
   #32 /usr/sbin/smbd(_start+0x29) [0x7f7cbc784069]

Comment 21 Anoop C S 2015-10-28 09:09:59 UTC
(In reply to Denis Lambolez from comment #18)
> Hi,
> 
> I've uploaded the core dumps from both servers. 
> I have also uploaded the the samba log files for one of the client (catspc)
> on both servers. I see a lot of 'Conversion error: Incomplete multibyte
> sequence' in those log files. We may have a problem with multibyte strings
> and accentuated characters in file name?

'Conversion error: Incomplete multibyte sequence'

The issue of logs (at DEBUG level 0) being flooded with the above message was fixed some time back. Here is the commit: https://github.com/samba-team/samba/commit/1c60dc5c. AFAIK, this fix has not yet been backported to the Samba 4.1 stable branch, but it is present in the 4.2 stable branch.
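As background on what that message means (it is separate from the crash itself): an "incomplete multibyte sequence" is input that ends partway through a multi-byte character, e.g. a UTF-8 file name truncated at a byte boundary. A minimal reproduction of this error class with the iconv CLI (assumes GNU iconv is installed; smbd performs the equivalent conversion internally via its iconv-based charset layer):

```shell
# 'é' encodes in UTF-8 as the two bytes 0xC3 0xA9. Feeding only the first
# byte to iconv reproduces the class of error smbd logs: the converter
# hits end-of-input in the middle of a multibyte character.
printf '\303\251' | iconv -f UTF-8 -t UTF-16LE > /dev/null \
    && echo "complete sequence: converts cleanly"
printf '\303' | iconv -f UTF-8 -t UTF-16LE > /dev/null 2>&1 \
    || echo "truncated sequence: conversion error"
```

This is only an illustration of the log message; as noted above, the flooding was a logging-level issue fixed upstream, not the cause of the smbd panic.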

Comment 22 Anoop C S 2015-11-18 11:11:52 UTC
I came across this Samba upstream bug, which seems to be the cause of the crash reported here.

https://bugzilla.samba.org/show_bug.cgi?id=11115

This was fixed in Samba 4.1.18.

https://www.samba.org/samba/history/samba-4.1.18.html

Comment 23 André Bauer 2015-11-18 11:48:39 UTC
Thanks for the info. 

I will try to backport this fix for my ubuntu packages next week.

Comment 24 André Bauer 2015-11-23 19:32:54 UTC
Uploaded package for Ubuntu Wily with backported 11115 fix.

https://launchpad.net/~monotek/+archive/ubuntu/samba-vfs-glusterfs-3.7

Please test :-)

Comment 25 Denis Lambolez 2015-11-28 23:06:45 UTC
OK, tested for a few days and everything is fine: no more crashes and no core dumps from smbd.

Current config is:

Package: glusterfs-server
Version: 3.7.6-ubuntu1~wily1

Package: samba-vfs-modules
Version: 2:4.1.17+dfsg-4ubuntu3glusterfs3.7.6wily2 (from André)

Thanks for solving this bug. Good job!

Comment 26 Ryan Mills 2016-03-30 13:29:24 UTC
I'm also getting something similar to Comment #20:


[2016/03/30 19:21:45.785746,  1] ../source3/smbd/service.c:1130(close_cnum)
  win7im-pc (ipv4:192.168.10.72:49168) closed connection to service vault01mel
[2016/03/30 19:21:45.914724,  1] ../source3/param/loadparm.c:2936(lp_idmap_range)
  idmap range not specified for domain '*'
[2016/03/30 19:21:45.919980,  2] ../lib/util/modules.c:191(do_smb_load_module)
  Module 'aio_pthread' loaded
[2016/03/30 19:21:45.926874,  2] ../lib/util/modules.c:191(do_smb_load_module)
  Module 'glusterfs' loaded
[2016/03/30 19:21:45.928579,  2] ../lib/util/modules.c:191(do_smb_load_module)
  Module 'recycle' loaded
[2016/03/30 19:21:46.338112,  0] ../source3/modules/vfs_glusterfs.c:292(vfs_gluster_connect)
  gv0mel: Initialized volume from server localhost
[2016/03/30 19:21:46.346566,  2] ../source3/smbd/service.c:856(make_connection_snum)
  win7im-pc (ipv4:192.168.10.72:49182) connect to service vault01mel initially as user root (uid=0, gid=0) (pid 25439)
[2016/03/30 19:21:47.357320,  0] ../lib/util/fault.c:72(fault_report)
[2016/03/30 19:21:47.357409,  0] ../lib/util/fault.c:73(fault_report)
  INTERNAL ERROR: Signal 11 in pid 24864 (4.1.6-Ubuntu)
  Please read the Trouble-Shooting section of the Samba HOWTO
[2016/03/30 19:21:47.357454,  0] ../lib/util/fault.c:75(fault_report)
[2016/03/30 19:21:47.357485,  0] ../source3/lib/util.c:785(smb_panic_s3)
  PANIC (pid 24864): internal error
[2016/03/30 19:21:47.358717,  0] ../source3/lib/util.c:896(log_stack_trace)
  BACKTRACE: 36 stack frames:
   #0 /usr/lib/x86_64-linux-gnu/libsmbconf.so.0(log_stack_trace+0x1a) [0x7f4bb7fb8f3a]
   #1 /usr/lib/x86_64-linux-gnu/libsmbconf.so.0(smb_panic_s3+0x20) [0x7f4bb7fb9010]
   #2 /usr/lib/x86_64-linux-gnu/libsamba-util.so.0(smb_panic+0x2f) [0x7f4bb9512c6f]
   #3 /usr/lib/x86_64-linux-gnu/libsamba-util.so.0(+0x1ae86) [0x7f4bb9512e86]
   #4 /lib/x86_64-linux-gnu/libpthread.so.0(+0x10340) [0x7f4bb973a340]
   #5 /usr/lib/x86_64-linux-gnu/libgfapi.so.0(glfs_chdir+0x76) [0x7f4ba842f4a6]
   #6 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(vfs_ChDir+0x60) [0x7f4bb90efe00]
   #7 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(close_cnum+0x120) [0x7f4bb9105350]
   #8 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(smbXsrv_tcon_disconnect+0x11c) [0x7f4bb912dd3c]
   #9 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(+0x141102) [0x7f4bb912e102]
   #10 /usr/lib/x86_64-linux-gnu/samba/libdbwrap.so.0(+0x4983) [0x7f4bb4a80983]
   #11 /usr/lib/x86_64-linux-gnu/samba/libdbwrap.so.0(+0x4abf) [0x7f4bb4a80abf]
   #12 /usr/lib/x86_64-linux-gnu/samba/libdbwrap.so.0(dbwrap_traverse+0xa) [0x7f4bb4a7f5da]
   #13 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(+0x13fcc9) [0x7f4bb912ccc9]
   #14 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(smb2srv_tcon_disconnect_all+0x1a) [0x7f4bb912e27a]
   #15 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(smbXsrv_session_logoff+0x128) [0x7f4bb912ba18]
   #16 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(+0x13edb2) [0x7f4bb912bdb2]
   #17 /usr/lib/x86_64-linux-gnu/samba/libdbwrap.so.0(+0x4983) [0x7f4bb4a80983]
   #18 /usr/lib/x86_64-linux-gnu/samba/libdbwrap.so.0(+0x4abf) [0x7f4bb4a80abf]
   #19 /usr/lib/x86_64-linux-gnu/samba/libdbwrap.so.0(dbwrap_traverse+0xa) [0x7f4bb4a7f5da]
   #20 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(smbXsrv_session_logoff_all+0x3b) [0x7f4bb912be0b]
   #21 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(+0x143afd) [0x7f4bb9130afd]
   #22 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(+0x143efe) [0x7f4bb9130efe]
   #23 /usr/lib/x86_64-linux-gnu/samba/libsmbd_shim.so.0(exit_server_cleanly+0x12) [0x7f4bb7912b22]
   #24 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(smbd_server_connection_terminate_ex+0x20) [0x7f4bb9113e50]
   #25 /usr/lib/x86_64-linux-gnu/libsmbconf.so.0(run_events_poll+0x16c) [0x7f4bb7fcf09c]
   #26 /usr/lib/x86_64-linux-gnu/libsmbconf.so.0(+0x372f0) [0x7f4bb7fcf2f0]
   #27 /usr/lib/x86_64-linux-gnu/libtevent.so.0(_tevent_loop_once+0x8d) [0x7f4bb69a05ed]
   #28 /usr/lib/x86_64-linux-gnu/samba/libsmbd_base.so.0(smbd_process+0x9ca) [0x7f4bb910289a]
   #29 smbd(+0x9fa4) [0x7f4bb9b76fa4]
   #30 /usr/lib/x86_64-linux-gnu/libsmbconf.so.0(run_events_poll+0x16c) [0x7f4bb7fcf09c]
   #31 /usr/lib/x86_64-linux-gnu/libsmbconf.so.0(+0x372f0) [0x7f4bb7fcf2f0]
   #32 /usr/lib/x86_64-linux-gnu/libtevent.so.0(_tevent_loop_once+0x8d) [0x7f4bb69a05ed]
   #33 smbd(main+0x13eb) [0x7f4bb9b73b8b]
   #34 /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f4bb65f9ec5]
   #35 smbd(+0x6f1d) [0x7f4bb9b73f1d]
[2016/03/30 19:21:47.359193,  0] ../source3/lib/util.c:797(smb_panic_s3)
  smb_panic(): calling panic action [/usr/share/samba/panic-action 24864]
[2016/03/30 19:21:47.591790,  0] ../source3/lib/util.c:805(smb_panic_s3)
  smb_panic(): action returned status 0
[2016/03/30 19:21:47.591907,  0] ../source3/lib/dumpcore.c:317(dump_core)
  dumping core in /var/log/samba/cores/smbd


Is there any fix for this?

Comment 27 Anoop C S 2016-03-30 13:38:36 UTC
Hi Ryan,

Please see my Comment #22 and the comments that follow; they give a clear picture of why this BZ was closed.

Comment 28 Ryan Mills 2016-03-30 14:58:58 UTC
We are running GlusterFS version:
glusterfs 3.7.6 built on Nov  9 2015 15:17:09

and Samba version:
Version 4.1.6-Ubuntu

So we just need to update to version 4.1.18 or later and we shouldn't see this issue?

Ryan

Comment 29 Anoop C S 2016-03-31 10:40:59 UTC
Hi Ryan,

Yes. Update your Samba packages to a version >= 4.1.18.
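Whether an installed smbd already carries the fix can be checked mechanically by comparing its version string against 4.1.18. A minimal sketch (assumes a POSIX shell with GNU coreutils `sort -V`; the `needs_upgrade` helper name is made up for this example, and the sample version is the 4.1.6-Ubuntu from Ryan's log):

```shell
# needs_upgrade OLD NEW: true (exit 0) when OLD sorts strictly before NEW
# under version ordering (GNU coreutils `sort -V`).
needs_upgrade() {
    [ "$1" != "$2" ] &&
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Example version taken from Ryan's log; in practice you could derive it
# with something like: smbd --version | awk '{print $2}' | cut -d- -f1
have="4.1.6"
if needs_upgrade "$have" "4.1.18"; then
    echo "smbd $have predates the bug 11115 fix: upgrade to >= 4.1.18"
else
    echo "smbd $have already includes the bug 11115 fix"
fi
```

Note that distro packages may also backport the fix without bumping the upstream version (as André's Ubuntu packages did), so the version check is only a first approximation.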

