Description of problem:
While running "gluster volume geo-replication vol0 status", the cli crashed and a core dump was observed.

Version-Release number of selected component (if applicable):
[root@localhost core]# rpm -qa | grep glusterfs
glusterfs-fuse-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-geo-replication-3.7dev-0.994.gitf522001.el6.x86_64
samba-glusterfs-3.6.509-169.4.el6rhs.x86_64
glusterfs-cli-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-server-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-rdma-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-libs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-debuginfo-3.7dev-0.994.gitf522001.el6.x86_64

How reproducible:
1/1

Steps to Reproduce:
1. gluster volume geo-replication vol0 status

Actual results:
The gluster cli crashed.

Expected results:
The crash should not be observed.

Additional info:
Core was generated by `gluster volume geo-replication vol0 status'.
Program terminated with signal 11, Segmentation fault.
#0  strtail (str=0x0, pattern=0x44314f "co") at common-utils.c:1913
1913            for (i = 0; str[i] == pattern[i] && str[i]; i++);
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.149.el6_6.4.x86_64 libuuid-2.17.2-12.18.el6.x86_64 libxml2-2.7.6-17.el6_6.1.x86_64 ncurses-libs-5.7-3.20090208.el6.x86_64 openssl-1.0.1e-30.el6_6.4.x86_64 readline-6.0-4.el6.x86_64 zlib-1.2.3-29.el6.x86_64
(gdb) bt
#0  strtail (str=0x0, pattern=0x44314f "co") at common-utils.c:1913
#1  0x000000000040a8bf in parse_cmdline (argc=<value optimized out>, argv=<value optimized out>, state=0x7fff22945ac0) at cli.c:415
#2  0x000000000040abb0 in main (argc=5, argv=0x7fff22945cb8) at cli.c:707
(gdb) p str
$1 = 0x0
(gdb) q

================================================================================
[root@localhost core]# gluster v status vol0
Status of volume: vol0
Gluster process                          TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.47.143:/rhs/brick1/b1        49152     0          Y       1764
Brick 10.70.47.145:/rhs/brick1/b2        49152     0          Y       1977
Brick 10.70.47.150:/rhs/brick1/b3        49152     0          Y       2663
Brick 10.70.47.151:/rhs/brick1/b4        49152     0          Y       2596
Brick 10.70.47.143:/rhs/brick2/b5        49153     0          Y       1765
Brick 10.70.47.145:/rhs/brick2/b6        49153     0          Y       1988
Brick 10.70.47.150:/rhs/brick2/b7        49153     0          Y       2680
Brick 10.70.47.151:/rhs/brick2/b8        49153     0          Y       2613
Brick 10.70.47.143:/rhs/brick3/b9        49154     0          Y       1781
Brick 10.70.47.145:/rhs/brick3/10        49154     0          Y       1994
Brick 10.70.47.150:/rhs/brick3/b11       49154     0          Y       2697
Brick 10.70.47.151:/rhs/brick3/b12       49154     0          Y       2630
Snapshot Daemon on localhost             49156     0          Y       1793
NFS Server on localhost                  2049      0          Y       1738
Self-heal Daemon on localhost            N/A       N/A        Y       1749
Quota Daemon on localhost                N/A       N/A        N       N/A
Snapshot Daemon on 10.70.47.150          49155     0          Y       8443
NFS Server on 10.70.47.150               2049      0          Y       8451
Self-heal Daemon on 10.70.47.150         N/A       N/A        Y       2969
Quota Daemon on 10.70.47.150             N/A       N/A        Y       8402
Snapshot Daemon on 10.70.47.151          49155     0          Y       8223
NFS Server on 10.70.47.151               2049      0          Y       8239
Self-heal Daemon on 10.70.47.151         N/A       N/A        Y       2717
Quota Daemon on 10.70.47.151             N/A       N/A        Y       8184
Snapshot Daemon on 10.70.47.145          49156     0          Y       2000
NFS Server on 10.70.47.145               2049      0          Y       1952
Self-heal Daemon on 10.70.47.145         N/A       N/A        Y       1965
Quota Daemon on 10.70.47.145             N/A       N/A        N       N/A

Task Status of Volume vol0
------------------------------------------------------------------------------
There are no active volume tasks

========================================================
[root@localhost core]# gluster v info vol0

Volume Name: vol0
Type: Distributed-Replicate
Volume ID: fc0f1280-821d-4990-a05a-00ccc9474b44
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick1/b1
Brick2: 10.70.47.145:/rhs/brick1/b2
Brick3: 10.70.47.150:/rhs/brick1/b3
Brick4: 10.70.47.151:/rhs/brick1/b4
Brick5: 10.70.47.143:/rhs/brick2/b5
Brick6: 10.70.47.145:/rhs/brick2/b6
Brick7: 10.70.47.150:/rhs/brick2/b7
Brick8: 10.70.47.151:/rhs/brick2/b8
Brick9: 10.70.47.143:/rhs/brick3/b9
Brick10: 10.70.47.145:/rhs/brick3/10
Brick11: 10.70.47.150:/rhs/brick3/b11
Brick12: 10.70.47.151:/rhs/brick3/b12
Options Reconfigured:
features.barrier: disable
features.quota: on
features.quota-deem-statfs: on
features.uss: enable
REVIEW: http://review.gluster.org/10264 (geo-rep/cli : Fix geo-rep cli crash) posted (#1) for review on master by Kotresh HR (khiremat)
COMMIT: http://review.gluster.org/10264 committed in master by Vijay Bellur (vbellur)

------
commit d18c68fbe1608a824bf50ffa3315d7acd5054a15
Author: Kotresh HR <khiremat>
Date:   Thu Apr 16 12:11:24 2015 +0530

    geo-rep/cli : Fix geo-rep cli crash

    Fixes crash when "gluster vol geo-rep <master-vol> status"
    is run because of incorrect argc and index comparison.

    Change-Id: Id14d63d020ad9c5951b54ef50e7c140e58d9d7a6
    BUG: 1212063
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/10264
    Reviewed-by: Aravinda VK <avishwan>
    Tested-by: NetBSD Build System
    Reviewed-by: Avra Sengupta <asengupt>
    Tested-by: Gluster Build System <jenkins.com>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.
*** Bug 1212062 has been marked as a duplicate of this bug. ***
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user