Description of problem:
When you try to stop a volume, glusterd logs the messages below even though bitrot and scrub are not enabled on the volume
================================
[2015-06-15 08:43:02.041821] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2015-06-15 08:43:02.041949] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped

Version-Release number of selected component (if applicable):
[root@darkknightrises ~]# rpm -qa | grep glusterfs
glusterfs-client-xlators-3.7.1-2.el6rhs.x86_64
glusterfs-server-3.7.1-2.el6rhs.x86_64
glusterfs-3.7.1-2.el6rhs.x86_64
glusterfs-api-3.7.1-2.el6rhs.x86_64
glusterfs-cli-3.7.1-2.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-2.el6rhs.x86_64
glusterfs-libs-3.7.1-2.el6rhs.x86_64
glusterfs-fuse-3.7.1-2.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-2.el6rhs.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume
2. Start the volume
3. Stop the volume
(a command-level sketch of these steps is given after the volume info below)

Actual results:
/var/log/glusterfs/etc-glusterfs-glusterd.vol.log
========================================================
[2015-06-15 08:43:02.041821] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2015-06-15 08:43:02.041949] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped

Expected results:
These "already stopped" messages should not appear in the log, since bitrot and scrub were never enabled on the volume.

Additional info:
[root@darkknightrises ~]# gluster v info vol

Volume Name: vol
Type: Distributed-Replicate
Volume ID: 9abdd9b9-53b6-492a-afc8-fb96975a1b8f
Status: Stopped
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.33.214:/rhs/brick1/b01
Brick2: 10.70.33.219:/rhs/brick1/b02
Brick3: 10.70.33.225:/rhs/brick1/b03
Brick4: 10.70.44.13:/rhs/brick1/b04
Options Reconfigured:
performance.readdir-ahead: on
cluster.enable-shared-storage: disable
snap-max-hard-limit: 200
auto-delete: enable
snap-max-soft-limit: 70
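For reference, the reproduction steps above correspond roughly to the following commands. This is a sketch rather than the reporter's exact transcript: the replica count and brick paths are taken from the volume info above, and --mode=script is used only to skip the interactive stop confirmation.
================================
[root@darkknightrises ~]# gluster volume create vol replica 2 10.70.33.214:/rhs/brick1/b01 10.70.33.219:/rhs/brick1/b02 10.70.33.225:/rhs/brick1/b03 10.70.44.13:/rhs/brick1/b04
[root@darkknightrises ~]# gluster volume start vol
[root@darkknightrises ~]# gluster --mode=script volume stop vol
================================
Note that "gluster volume bitrot vol enable" is never run at any point, yet the bitd/scrub "already stopped" messages still appear in the glusterd log on stop.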
Upstream patch http://review.gluster.org/#/c/11226/ is already available for this bug.
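A quick way to verify a build carrying the fix (a suggested check, not part of the original report): stop the volume again on the patched build and confirm the messages are gone from the glusterd log, e.g.
================================
[root@darkknightrises ~]# gluster --mode=script volume stop vol
[root@darkknightrises ~]# grep -E "bitd already stopped|scrub already stopped" /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
================================
On a fixed build the grep should return no new matches for a volume on which bitrot was never enabled.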
The bug does not occur on the latest RHGS 3.3.x releases.