+++ This bug was initially created as a clone of Bug #1177927 +++

Description of problem:
Executing getfattr on a FUSE mount fails with "Software caused connection abort", and a subsequent ls reports "Transport endpoint is not connected".

Version-Release number of selected component (if applicable):
glusterfs 3.6.0.40

How reproducible:
100%

Steps to Reproduce:
1. Create a 2x2 distribute-replicate volume.
2. FUSE-mount the volume.
3. Set the volume options 'metadata-self-heal', 'entry-self-heal' and 'data-self-heal' to "off".
4. Set self-heal-daemon to off.
5. Create a file on the mount point.
6. Run getfattr on the file.

Actual results:
[root@client glusterfs]# getfattr -d -m . -e hex testfile
getfattr: testfile: Software caused connection abort
[root@client glusterfs]# ll
ls: cannot open directory .: Transport endpoint is not connected

================================================
Logs from /var/log/glusterfs/mnt-glusterfs-.log:

package-string: glusterfs 3.6.0.40
/usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xb6)[0x38b3620106]
/usr/lib64/libglusterfs.so.0(gf_print_trace+0x33f)[0x38b363ad5f]
/lib64/libc.so.6[0x352d2326a0]
/lib64/libc.so.6[0x352d28aab6]
/lib64/libc.so.6[0x352d275710]
/lib64/libc.so.6(vsscanf+0x65)[0x352d2696b5]
/lib64/libc.so.6(_IO_sscanf+0x88)[0x352d263728]
/usr/lib64/glusterfs/3.6.0.40/xlator/features/snapview-client.so(svc_getxattr+0xdf)[0x7fa7cddd1e8f]
/usr/lib64/glusterfs/3.6.0.40/xlator/debug/io-stats.so(io_stats_getxattr+0x167)[0x7fa7cdbb3707]
/usr/lib64/libglusterfs.so.0(default_getxattr+0x7b)[0x38b3625a8b]
/usr/lib64/glusterfs/3.6.0.40/xlator/mount/fuse.so(fuse_listxattr_resume+0x4c1)[0x7fa7d1527061]
/usr/lib64/glusterfs/3.6.0.40/xlator/mount/fuse.so(+0x88a6)[0x7fa7d15218a6]
/usr/lib64/glusterfs/3.6.0.40/xlator/mount/fuse.so(+0x85d6)[0x7fa7d15215d6]
/usr/lib64/glusterfs/3.6.0.40/xlator/mount/fuse.so(+0x88ee)[0x7fa7d15218ee]
/usr/lib64/glusterfs/3.6.0.40/xlator/mount/fuse.so(fuse_resolve_continue+0x41)[0x7fa7d1521971]
/usr/lib64/glusterfs/3.6.0.40/xlator/mount/fuse.so(fuse_resolve_gfid_cbk+0x1c1)[0x7fa7d1521c41]
/usr/lib64/glusterfs/3.6.0.40/xlator/debug/io-stats.so(io_stats_lookup_cbk+0x113)[0x7fa7cdbbde73]
/usr/lib64/glusterfs/3.6.0.40/xlator/features/snapview-client.so(svc_lookup_cbk+0x218)[0x7fa7cddd3d48]
/usr/lib64/glusterfs/3.6.0.40/xlator/performance/md-cache.so(mdc_lookup_cbk+0x14c)[0x7fa7cdfdf7cc]
/usr/lib64/glusterfs/3.6.0.40/xlator/cluster/distribute.so(dht_discover_complete+0x173)[0x7fa7ce41aa53]
/usr/lib64/glusterfs/3.6.0.40/xlator/cluster/distribute.so(dht_discover_cbk+0x273)[0x7fa7ce4226c3]
/usr/lib64/glusterfs/3.6.0.40/xlator/cluster/replicate.so(afr_lookup_cbk+0x558)[0x7fa7ce6a1e08]
/usr/lib64/glusterfs/3.6.0.40/xlator/protocol/client.so(client3_3_lookup_cbk+0x647)[0x7fa7ce8e1307]
/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x38b320e775]
/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x142)[0x38b320fc02]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28)[0x38b320b3e8]
/usr/lib64/glusterfs/3.6.0.40/rpc-transport/socket.so(+0x92ed)[0x7fa7c7df62ed]
/usr/lib64/glusterfs/3.6.0.40/rpc-transport/socket.so(+0xaced)[0x7fa7c7df7ced]
/usr/lib64/libglusterfs.so.0[0x38b3676be7]
/usr/sbin/glusterfs(main+0x603)[0x407e93]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x352d21ed5d]
/usr/sbin/glusterfs[0x4049a9]

Expected results:
getfattr on the client should be successful.

Additional info:
[root@node1 b1]# gluster v info

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: a6941fe7-ecc0-4b45-91c4-73fd1e37795f
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick1/b1
Brick2: 10.70.47.145:/rhs/brick1/b2
Brick3: 10.70.47.150:/rhs/brick1/b3
Brick4: 10.70.47.151:/rhs/brick1/b4
Options Reconfigured:
features.barrier: disable
cluster.self-heal-daemon: off
features.uss: on
performance.open-behind: off
performance.quick-read: off
performance.io-cache: off
performance.read-ahead: off
performance.write-behind: off
cluster.entry-self-heal: off
cluster.data-self-heal: off
cluster.metadata-self-heal: off
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
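The reproduction steps can be sketched as a shell session. This is a sketch only: the volume name "testvol" and brick addresses are taken from the gluster v info output above, the mount point /mnt/glusterfs is inferred from the log file name, and features.uss must be on (as in the reconfigured options) since the crash occurs in the snapview-client xlator that USS loads.

```shell
# Assumed names: volume "testvol", mount point /mnt/glusterfs.
# Disable client-side self-heal and the self-heal daemon:
gluster volume set testvol cluster.metadata-self-heal off
gluster volume set testvol cluster.entry-self-heal off
gluster volume set testvol cluster.data-self-heal off
gluster volume set testvol cluster.self-heal-daemon off
# USS is what puts snapview-client on the client graph:
gluster volume set testvol features.uss on

# FUSE-mount, create a file, and list its xattrs:
mount -t glusterfs 10.70.47.143:/testvol /mnt/glusterfs
cd /mnt/glusterfs
touch testfile
getfattr -d -m . -e hex testfile   # aborts the client on 3.6.0.40
```

These are cluster-configuration commands and only reproduce the crash against a live GlusterFS 3.6.0.40 deployment.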
REVIEW: http://review.gluster.org/9378 (features/uss: Perform NULL check on @name in svc_getxattr) posted (#1) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/9378 (features/uss: Perform NULL check on @name in svc_getxattr) posted (#2) for review on master by Krutika Dhananjay (kdhananj)
COMMIT: http://review.gluster.org/9378 committed in master by Vijay Bellur (vbellur)
------
commit 7e27cb2352b4f48935e85e3288a24ac03c3d1f83
Author: Krutika Dhananjay <kdhananj>
Date:   Fri Jan 2 12:28:12 2015 +0530

    features/uss: Perform NULL check on @name in svc_getxattr

    A LISTXATTR fop is internally converted into a GETXATTR with the
    "name" parameter set to NULL. In svc_getxattr(), a listxattr was
    causing a crash because of a NULL pointer dereference on @name.

    FIX: Add the necessary NULL check.

    Change-Id: I70024d40dc0695648c6d41b423c2665d030e1232
    BUG: 1178079
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: http://review.gluster.org/9378
    Reviewed-by: Raghavendra Bhat <raghavendra>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika>
    Reviewed-by: Sachin Pandit <spandit>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report. glusterfs-3.7.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939 [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user