Bug 1541117 - sdfs: crashes if the feature is enabled
Summary: sdfs: crashes if the feature is enabled
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Amar Tumballi
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-02-01 18:18 UTC by Amar Tumballi
Modified: 2018-03-15 11:26 UTC
CC List: 1 user

Fixed In Version: glusterfs-4.0.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-03-15 11:26:04 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Amar Tumballi 2018-02-01 18:18:29 UTC
Description of problem:
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007f2090fa04c2 in sdfs_lookup (frame=0x7f208c0c6ee8, this=0x7f208c02d700, loc=0x7f208c0d36f8, xdata=0x7f208c0c5fe8) at ../../../../../xlators/features/sdfs/src/sdfs.c:1284

[Current thread is 1 (Thread 0x7f20946d3700 (LWP 15147))]
Missing separate debuginfos, use: dnf debuginfo-install glibc-2.23.1-7.fc24.x86_64 keyutils-libs-1.5.9-8.fc24.x86_64 krb5-libs-1.14.1-6.fc24.x86_64 libacl-2.2.52-11.fc24.x86_64 libaio-0.3.110-6.fc24.x86_64 libattr-2.4.47-16.fc24.x86_64 libcap-2.24-9.fc24.x86_64 libcom_err-1.42.13-4.fc24.x86_64 libgcc-6.3.1-1.fc24.x86_64 libselinux-2.5-3.fc24.x86_64 libuuid-2.28-2.fc24.x86_64 nss-mdns-0.10-17.fc24.x86_64 openssl-libs-1.0.2k-1.fc24.x86_64 pcre-8.38-11.fc24.x86_64 sqlite-libs-3.13.0-1.fc24.x86_64 sssd-client-1.13.4-3.fc24.x86_64 systemd-libs-229-8.fc24.x86_64 zlib-1.2.8-10.fc24.x86_64
(gdb) bt
#0  0x00007f2090fa04c2 in sdfs_lookup (frame=0x7f208c0c6ee8, this=0x7f208c02d700, loc=0x7f208c0d36f8, xdata=0x7f208c0c5fe8) at ../../../../../xlators/features/sdfs/src/sdfs.c:1284
#1  0x00007f2090d74fc4 in io_stats_lookup (frame=0x7f208c0c6a08, this=0x7f208c02f3c0, loc=0x7f208c0d36f8, xdata=0x7f208c0c5fe8) at ../../../../../xlators/debug/io-stats/src/io-stats.c:2758
#2  0x00007f20a24fff1d in default_lookup (frame=0x7f208c0c6a08, this=0x7f208c031470, loc=0x7f208c0d36f8, xdata=0x7f208c0c5fe8) at defaults.c:2714
#3  0x00007f209090db8e in server4_lookup_resume (frame=0x7f208c092038, bound_xl=0x7f208c031470) at ../../../../../xlators/protocol/server/src/server-rpc-fops_v2.c:3119
#4  0x00007f20908bd6cd in server_resolve_done (frame=0x7f208c092038) at ../../../../../xlators/protocol/server/src/server-resolve.c:587
#5  0x00007f20908bd7ce in server_resolve_all (frame=0x7f208c092038) at ../../../../../xlators/protocol/server/src/server-resolve.c:622
#6  0x00007f20908bd674 in server_resolve (frame=0x7f208c092038) at ../../../../../xlators/protocol/server/src/server-resolve.c:571
#7  0x00007f20908bd7a5 in server_resolve_all (frame=0x7f208c092038) at ../../../../../xlators/protocol/server/src/server-resolve.c:618
#8  0x00007f20908bcff1 in server_resolve_entry (frame=0x7f208c092038) at ../../../../../xlators/protocol/server/src/server-resolve.c:365
#9  0x00007f20908bd5a5 in server_resolve (frame=0x7f208c092038) at ../../../../../xlators/protocol/server/src/server-resolve.c:555
#10 0x00007f20908bd750 in server_resolve_all (frame=0x7f208c092038) at ../../../../../xlators/protocol/server/src/server-resolve.c:611
#11 0x00007f20908bd860 in resolve_and_resume (frame=0x7f208c092038, fn=0x7f209090d80d <server4_lookup_resume>) at ../../../../../xlators/protocol/server/src/server-resolve.c:642
#12 0x00007f20909148d0 in server4_0_lookup (req=0x7f208c0ac4f8) at ../../../../../xlators/protocol/server/src/server-rpc-fops_v2.c:5404
#13 0x00007f20a21f40df in rpcsvc_handle_rpc_call (svc=0x7f208c046bb0, trans=0x7f208c0043f0, msg=0x7f208c004950) at ../../../../rpc/rpc-lib/src/rpcsvc.c:721
#14 0x00007f20a21f4441 in rpcsvc_notify (trans=0x7f208c0043f0, mydata=0x7f208c046bb0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f208c004950) at ../../../../rpc/rpc-lib/src/rpcsvc.c:815
#15 0x00007f20a21f9d39 in rpc_transport_notify (this=0x7f208c0043f0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f208c004950) at ../../../../rpc/rpc-lib/src/rpc-transport.c:537
#16 0x00007f2096bc3d37 in socket_event_poll_in (this=0x7f208c0043f0, notify_handled=true) at ../../../../../rpc/rpc-transport/socket/src/socket.c:2462
#17 0x00007f2096bc4398 in socket_event_handler (fd=10, idx=5, gen=1, data=0x7f208c0043f0, poll_in=1, poll_out=0, poll_err=0) at ../../../../../rpc/rpc-transport/socket/src/socket.c:2618
#18 0x00007f20a24a6d0c in event_dispatch_epoll_handler (event_pool=0xe20bc0, event=0x7f20946d2ea0) at ../../../libglusterfs/src/event-epoll.c:579
#19 0x00007f20a24a6fe0 in event_dispatch_epoll_worker (data=0xe6fcc0) at ../../../libglusterfs/src/event-epoll.c:655
#20 0x00007f20a127f5ba in start_thread () from /lib64/libpthread.so.0
#21 0x00007f20a0b577cd in clone () from /lib64/libc.so.6

(gdb) p *local
Cannot access memory at address 0x0


Version-Release number of selected component (if applicable):
glusterfs-RC0

How reproducible:
100%

Steps to Reproduce:
1. Create a volume, set the 'features.sdfs enable' option, and start the volume.
2. Mount the volume and try creating any file (see the command sketch below).
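
For reference, the two steps above correspond roughly to the following commands (a sketch only; the volume name, brick path and mount point are illustrative assumptions, not taken from this report):

  # On the server: create a volume, enable sdfs and start it
  gluster volume create testvol server1:/bricks/brick1
  gluster volume set testvol features.sdfs enable
  gluster volume start testvol

  # On the client: mount the volume and create a file, which drives
  # the lookup path shown in the backtrace above
  mount -t glusterfs server1:/testvol /mnt/testvol
  touch /mnt/testvol/file1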


Actual results:
Brick process crashes

Expected results:
No crash should happen.

Additional info:
It was working fine with the experimental_v2 branch.

Comment 1 Worker Ant 2018-02-01 18:21:09 UTC
REVIEW: https://review.gluster.org/19445 (sdfs: crash fixes) posted (#1) for review on release-4.0 by Amar Tumballi

Comment 2 Worker Ant 2018-02-02 15:04:09 UTC
COMMIT: https://review.gluster.org/19445 committed in release-4.0 by "Amar Tumballi" <amarts> with a commit message- sdfs: crash fixes

* From the patch that was tested in the experimental branch, a code
  cleanup missed setting a local variable, which led to a crash
  immediately after enabling the feature.
* Added a sanity test case to validate all the fops of sdfs (a sketch
  of such a test follows after this comment).

Updates: #397

Change-Id: I7e0bebfc195c344620577cb16c1afc5f4e7d2d92
BUG: 1541117
Signed-off-by: Amar Tumballi <amarts>
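
For illustration, a minimal sanity test along the lines described in the commit message above could look roughly like the sketch below. This is only an outline in the style of the glusterfs .t test harness (tests/include.rc provides cleanup, TEST, $CLI, $V0, $H0, $B0 and $M0); the include path and the actual test added by the patch may differ.

  #!/bin/bash
  # Sketch of a basic sdfs sanity test in the glusterfs .t style.
  . $(dirname $0)/../../include.rc

  cleanup;

  TEST glusterd
  TEST pidof glusterd

  # Create a single-brick volume and enable the sdfs feature
  TEST $CLI volume create $V0 $H0:$B0/${V0}0
  TEST $CLI volume set $V0 features.sdfs enable
  TEST $CLI volume start $V0

  # Mount the volume and exercise basic entry fops; before the fix,
  # the first lookup after enabling sdfs crashed the brick process
  TEST glusterfs --volfile-id=$V0 --volfile-server=$H0 $M0
  TEST touch $M0/file1
  TEST mkdir $M0/dir1
  TEST mv $M0/file1 $M0/dir1/file1
  TEST rm -rf $M0/dir1

  cleanup;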

Comment 3 Shyamsundar 2018-03-15 11:26:04 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html
[2] https://www.gluster.org/pipermail/gluster-users/

