+++ This bug was initially created as a clone of Bug #1212063 +++

Description of problem:
While running the "gluster volume geo-replication vol0 status" command, the cli crashed and a core dump was observed.

Version-Release number of selected component (if applicable):
[root@localhost core]# rpm -qa | grep glusterfs
glusterfs-fuse-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-geo-replication-3.7dev-0.994.gitf522001.el6.x86_64
samba-glusterfs-3.6.509-169.4.el6rhs.x86_64
glusterfs-cli-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-server-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-rdma-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-libs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-debuginfo-3.7dev-0.994.gitf522001.el6.x86_64

How reproducible:
1/1

Steps to Reproduce:
1. Run "gluster volume geo-replication vol0 status"; the cli crashed.

Actual results:
The gluster cli crashed.

Expected results:
No crash should be observed.

Additional info:
Core was generated by `gluster volume geo-replication vol0 status'.
Program terminated with signal 11, Segmentation fault.
#0  strtail (str=0x0, pattern=0x44314f "co") at common-utils.c:1913
1913            for (i = 0; str[i] == pattern[i] && str[i]; i++);
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.149.el6_6.4.x86_64 libuuid-2.17.2-12.18.el6.x86_64 libxml2-2.7.6-17.el6_6.1.x86_64 ncurses-libs-5.7-3.20090208.el6.x86_64 openssl-1.0.1e-30.el6_6.4.x86_64 readline-6.0-4.el6.x86_64 zlib-1.2.3-29.el6.x86_64
(gdb) bt
#0  strtail (str=0x0, pattern=0x44314f "co") at common-utils.c:1913
#1  0x000000000040a8bf in parse_cmdline (argc=<value optimized out>, argv=<value optimized out>, state=0x7fff22945ac0) at cli.c:415
#2  0x000000000040abb0 in main (argc=5, argv=0x7fff22945cb8) at cli.c:707
(gdb) p str
$1 = 0x0
(gdb) q

================================================================================

[root@localhost core]# gluster v status vol0
Status of volume: vol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.47.143:/rhs/brick1/b1           49152     0          Y       1764
Brick 10.70.47.145:/rhs/brick1/b2           49152     0          Y       1977
Brick 10.70.47.150:/rhs/brick1/b3           49152     0          Y       2663
Brick 10.70.47.151:/rhs/brick1/b4           49152     0          Y       2596
Brick 10.70.47.143:/rhs/brick2/b5           49153     0          Y       1765
Brick 10.70.47.145:/rhs/brick2/b6           49153     0          Y       1988
Brick 10.70.47.150:/rhs/brick2/b7           49153     0          Y       2680
Brick 10.70.47.151:/rhs/brick2/b8           49153     0          Y       2613
Brick 10.70.47.143:/rhs/brick3/b9           49154     0          Y       1781
Brick 10.70.47.145:/rhs/brick3/10           49154     0          Y       1994
Brick 10.70.47.150:/rhs/brick3/b11          49154     0          Y       2697
Brick 10.70.47.151:/rhs/brick3/b12          49154     0          Y       2630
Snapshot Daemon on localhost                49156     0          Y       1793
NFS Server on localhost                     2049      0          Y       1738
Self-heal Daemon on localhost               N/A       N/A        Y       1749
Quota Daemon on localhost                   N/A       N/A        N       N/A
Snapshot Daemon on 10.70.47.150             49155     0          Y       8443
NFS Server on 10.70.47.150                  2049      0          Y       8451
Self-heal Daemon on 10.70.47.150            N/A       N/A        Y       2969
Quota Daemon on 10.70.47.150                N/A       N/A        Y       8402
Snapshot Daemon on 10.70.47.151             49155     0          Y       8223
NFS Server on 10.70.47.151                  2049      0          Y       8239
Self-heal Daemon on 10.70.47.151            N/A       N/A        Y       2717
Quota Daemon on 10.70.47.151                N/A       N/A        Y       8184
Snapshot Daemon on 10.70.47.145             49156     0          Y       2000
NFS Server on 10.70.47.145                  2049      0          Y       1952
Self-heal Daemon on 10.70.47.145            N/A       N/A        Y       1965
Quota Daemon on 10.70.47.145                N/A       N/A        N       N/A

Task Status of Volume vol0
------------------------------------------------------------------------------
There are no active volume tasks

========================================================

[root@localhost core]# gluster v info vol0

Volume Name: vol0
Type: Distributed-Replicate
Volume ID: fc0f1280-821d-4990-a05a-00ccc9474b44
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick1/b1
Brick2: 10.70.47.145:/rhs/brick1/b2
Brick3: 10.70.47.150:/rhs/brick1/b3
Brick4: 10.70.47.151:/rhs/brick1/b4
Brick5: 10.70.47.143:/rhs/brick2/b5
Brick6: 10.70.47.145:/rhs/brick2/b6
Brick7: 10.70.47.150:/rhs/brick2/b7
Brick8: 10.70.47.151:/rhs/brick2/b8
Brick9: 10.70.47.143:/rhs/brick3/b9
Brick10: 10.70.47.145:/rhs/brick3/10
Brick11: 10.70.47.150:/rhs/brick3/b11
Brick12: 10.70.47.151:/rhs/brick3/b12
Options Reconfigured:
features.barrier: disable
features.quota: on
features.quota-deem-statfs: on
features.uss: enable
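Note on the crash site: the quoted frame shows that strtail() walks both strings without ever checking str for NULL, so a NULL first argument faults on the very first comparison. Below is a minimal standalone sketch that reproduces the failure mode; only the for-loop line is quoted from common-utils.c:1913, while the rest of the function body and the driver are reconstructions for illustration, not the exact glusterfs sources:

/* repro.c -- standalone reproduction of the reported segfault. */
#include <stddef.h>

/* Returns the tail of str after the prefix `pattern`, or NULL if
 * str does not start with pattern. There is no NULL check on str:
 * the very first iteration reads str[0]. */
static char *
strtail (char *str, const char *pattern)
{
        int i = 0;

        /* line quoted in the backtrace (common-utils.c:1913) */
        for (i = 0; str[i] == pattern[i] && str[i]; i++);

        if (pattern[i] == '\0')
                return str + i;

        return NULL;
}

int
main (void)
{
        /* Mirrors the backtrace: str=0x0, pattern="co".
         * Dereferencing the NULL pointer raises SIGSEGV. */
        return strtail (NULL, "co") != NULL;
}

So the real defect is upstream in parse_cmdline() (cli.c:415), which reached strtail() with a NULL word; the commit referenced below corrects the argc/index comparison that produced it.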
REVIEW: http://review.gluster.org/10291 (geo-rep/cli : Fix geo-rep cli crash) posted (#1) for review on release-3.7 by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10291 (geo-rep/cli : Fix geo-rep cli crash) posted (#2) for review on release-3.7 by Kotresh HR (khiremat)
COMMIT: http://review.gluster.org/10291 committed in release-3.7 by Vijay Bellur (vbellur)
------
commit 11a179331ef428906a843b59c69c97f621446f9e
Author: Kotresh HR <khiremat>
Date:   Thu Apr 16 12:11:24 2015 +0530

    geo-rep/cli : Fix geo-rep cli crash

    Fixes crash dump when "gluster vol geo-rep <master-vol> status"
    is run because of incorrect argc and index comparison.

    Change-Id: Id14d63d020ad9c5951b54ef50e7c140e58d9d7a6
    BUG: 1213048
    Reviewed-on: http://review.gluster.org/10264
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/10291
    Tested-by: NetBSD Build System
    Reviewed-by: Vijay Bellur <vbellur>
    Tested-by: Vijay Bellur <vbellur>
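The authoritative diff is on the review links above. As a hedged illustration only, the sketch below shows the safe shape of the scan the commit message implies (a correct argc and index comparison); every name in it is hypothetical and not taken from cli.c. The key invariant: argv[argc] is NULL by definition, so an index must be validated against argc before argv[i] is read or passed to a prefix matcher such as strtail():

/* fix_shape.c -- hypothetical sketch of the class of fix described
 * ("incorrect argc and index comparison"); not the actual cli.c diff. */
#include <stdio.h>
#include <string.h>

/* Scan command-line words for one starting with `prefix`. A buggy
 * bound such as `i <= argc` (or a wrong start index) reads the NULL
 * sentinel argv[argc] and hands it to the matcher, which faults. */
static int
has_word_with_prefix (int argc, char *argv[], const char *prefix)
{
        int i = 0;

        for (i = 1; i < argc; i++) {  /* strictly i < argc */
                if (strncmp (argv[i], prefix, strlen (prefix)) == 0)
                        return 1;
        }

        return 0;
}

int
main (int argc, char *argv[])
{
        /* e.g. "./fix_shape geo-replication vol0 config" prints 1. */
        printf ("%d\n", has_word_with_prefix (argc, argv, "co"));
        return 0;
}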
Verified with build: glusterfs-3.7.0beta1-0.3.git7aeae00.el6.x86_64

No crash observed. Moving the bug to the verified state.

[root@georep1 ~]# gluster volume geo-replication master status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  STATUS     CHECKPOINT STATUS    CRAWL STATUS
-----------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.101::slave    Active     N/A                  Changelog Crawl
georep1        master        /rhs/brick2/b2    root          10.70.46.101::slave    Active     N/A                  Changelog Crawl
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    Passive    N/A                  N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    Passive    N/A                  N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.103::slave    Passive    N/A                  N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.103::slave    Passive    N/A                  N/A

[root@georep1 ~]# gluster volume geo-replication status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  STATUS     CHECKPOINT STATUS    CRAWL STATUS
-----------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.101::slave    Active     N/A                  Changelog Crawl
georep1        master        /rhs/brick2/b2    root          10.70.46.101::slave    Active     N/A                  Changelog Crawl
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    Passive    N/A                  N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    Passive    N/A                  N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.103::slave    Passive    N/A                  N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.103::slave    Passive    N/A                  N/A

[root@georep1 ~]# gluster volume geo-replication master 10.70.46.154::slave status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  STATUS     CHECKPOINT STATUS    CRAWL STATUS
-----------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.101::slave    Active     N/A                  Changelog Crawl
georep1        master        /rhs/brick2/b2    root          10.70.46.101::slave    Active     N/A                  Changelog Crawl
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    Passive    N/A                  N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    Passive    N/A                  N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.103::slave    Passive    N/A                  N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.103::slave    Passive    N/A                  N/A

[root@georep1 ~]# gluster volume geo-replication master 10.70.46.154::slave status detail

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  STATUS     CHECKPOINT STATUS    CRAWL STATUS       FILES SYNCD    FILES PENDING    BYTES PENDING    DELETES PENDING
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.101::slave    Active     N/A                  Changelog Crawl    0              0                0                0
georep1        master        /rhs/brick2/b2    root          10.70.46.101::slave    Active     N/A                  Changelog Crawl    0              0                0                0
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    Passive    N/A                  N/A                0              0                0                0
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    Passive    N/A                  N/A                0              0                0                0
georep2        master        /rhs/brick1/b1    root          10.70.46.103::slave    Passive    N/A                  N/A                0              0                0                0
georep2        master        /rhs/brick2/b2    root          10.70.46.103::slave    Passive    N/A                  N/A                0              0                0                0
[root@georep1 ~]#
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user