+++ This bug was initially created as a clone of Bug #1370410 +++

Description of problem:
Same as Summary.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Posted the patch on master for review: http://review.gluster.org/#/c/15747/1

It still needs a lot of testing and review. Moving this bug to POST state in any case.
https://code.engineering.redhat.com/gerrit/91373 and
https://code.engineering.redhat.com/gerrit/91374
Tested with RHGS 3.2.0 interim build (glusterfs-3.8.4-8.el7rhgs).

When the volume state is 'Created', the volume set command can be used to enable granular-entry-heal on the volume. Once the volume is started, attempting to enable granular-entry-heal via volume set throws a helpful message to use 'gluster volume heal' for enabling granular entry heal:

[root@~]# gluster volume set engine cluster.granular-entry-heal on
volume set: failed: 'gluster volume set <VOLNAME> cluster.granular-entry-heal {enable, disable}' is not supported. Use 'gluster volume heal <VOLNAME> granular-entry-heal {enable, disable}' instead.

I was able to enable granular-entry-heal using 'gluster volume heal' as follows:

[root@ ~]# gluster volume heal engine granular-entry-heal enable
Enable granular entry heal on volume engine has been successful

[root@ ~]# gluster volume get engine cluster.granular-entry-heal
Option                                  Value
------                                  -----
cluster.granular-entry-heal             on

When granular-entry-heal is enabled on the volume while self-heal is in progress, a proper warning is thrown:

[root@]# gluster volume heal rep3vol granular-entry-heal enable
One or more entries need heal. Please execute the command again after there are no entries to be healed
Volume heal failed.
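The enable/disable semantics verified above can be sketched as a small decision function. This is a simplified, hypothetical model of the observed behaviour (function and parameter names are illustrative, not glusterd's actual internals); the messages are the ones from the transcripts:

```python
def validate_granular_entry_heal(command, volume_started, pending_heal_entries):
    """Return (allowed, message) for a granular-entry-heal toggle request.

    Hypothetical model of the behaviour seen in the verification above:
    - 'volume set' works only while the volume is still in the Created state;
      once started, the user is redirected to 'gluster volume heal'.
    - 'volume heal ... granular-entry-heal enable' is refused while entries
      still need healing.
    """
    if command == "volume set":
        if volume_started:
            return (False,
                    "'gluster volume set <VOLNAME> cluster.granular-entry-heal "
                    "{enable, disable}' is not supported. Use 'gluster volume "
                    "heal <VOLNAME> granular-entry-heal {enable, disable}' "
                    "instead.")
        # Volume not yet started: toggling via volume set is allowed.
        return (True, "")
    # command == "volume heal"
    if pending_heal_entries > 0:
        return (False,
                "One or more entries need heal. Please execute the command "
                "again after there are no entries to be healed")
    return (True, "")
```

This mirrors the three transcripts: volume set rejected on a started volume, volume heal succeeding on a clean volume, and volume heal refused while heals are pending.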
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html