Description of problem:
=====================
When the self-heal daemon is disabled using "heal disable" and the admin tries to trigger a manual heal using the "gluster v heal <vname>" command, the admin gets the following informational message:

Launching heal operation to perform index self heal on volume distrep has been unsuccessful on bricks that are down. Please check if all brick processes are running.

This message is not useful and is genuinely confusing. Instead, the admin should get a message along the lines of "the self-heal daemon is disabled, kindly start the daemon to trigger heal".

Version-Release number of selected component (if applicable):
==============================
3.7.9-11

How reproducible:
===============
Easily

Steps to Reproduce:
1. Create an AFR volume
2. Disable the self-heal daemon
3. Trigger a manual heal
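The steps to reproduce can be sketched with the standard Gluster CLI. This is a sketch for a single-node test setup; the volume name "distrep", hostname "host1", and brick paths are placeholders, and the commands require a running glusterd:

```shell
# 1. Create a replicated (AFR) volume and start it
#    ("force" allows bricks on the same host in a test setup)
gluster volume create distrep replica 2 host1:/bricks/b1 host1:/bricks/b2 force
gluster volume start distrep

# 2. Disable the self-heal daemon for the volume
gluster volume heal distrep disable

# 3. Trigger a manual heal -- with the bug, this prints the
#    misleading "bricks that are down" message; with the fix,
#    it reports that the self-heal daemon is disabled
gluster volume heal distrep
```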
Moving to POST based on comment #2. Patch is https://review.gluster.org/#/c/15724/
Update:
==========
> Verified with build: glusterfs-server-3.12.2-4.el7rhgs.x86_64

> Created 2 AFR volumes and disabled self-heal for one volume (distrep)

# gluster vol get distrep cluster.self-heal-daemon
Option                          Value
------                          -----
cluster.self-heal-daemon        disable
#

> Triggered heal for volume "distrep"

# gluster vol heal distrep
Launching heal operation to perform index self heal on volume distrep has been unsuccessful:
Self-heal-daemon is disabled. Heal will not be triggered on volume distrep
#

> No impact on the volume where the self-heal daemon is enabled (able to trigger heal)

Changing status to Verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.