+++ This bug was initially created as a clone of Bug #1265479 +++
+++ This bug was initially created as a clone of Bug #1234995 +++

Description of problem:

On a distribute or disperse volume, setting the data-self-heal, metadata-self-heal and entry-self-heal options on/off succeeds. However, when you try to set self-heal-daemon on/off on a distribute or disperse volume, it fails with an error saying self-heal-daemon can only be set on a distribute-replicate volume. The same error should be displayed when a user tries to set data-self-heal, metadata-self-heal and entry-self-heal on/off.

Version-Release number of selected component (if applicable):

[root@darkknightrises ~]# rpm -qa | grep glusterfs
glusterfs-client-xlators-3.7.1-4.el6rhs.x86_64
glusterfs-cli-3.7.1-4.el6rhs.x86_64
samba-vfs-glusterfs-4.1.17-7.el6rhs.x86_64
glusterfs-libs-3.7.1-4.el6rhs.x86_64
glusterfs-3.7.1-4.el6rhs.x86_64
glusterfs-api-3.7.1-4.el6rhs.x86_64
glusterfs-server-3.7.1-4.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-4.el6rhs.x86_64
glusterfs-fuse-3.7.1-4.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-4.el6rhs.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a distribute volume.
2. Set the data-self-heal, metadata-self-heal and entry-self-heal options on/off, e.g.
   gluster v set <vol-name> data-self-heal off

Actual results:
The commands succeed.

Expected results:
There should be a check so that the data-self-heal, metadata-self-heal and entry-self-heal options can only be set on a volume with a replicate component (i.e. distribute-replicate).

--- Additional comment from Sakshi on 2015-11-02 01:07:02 EST ---

Patch upstream: http://review.gluster.org/12215
REVIEW: http://review.gluster.org/13444 (glusterd: validate function for replica volume options) posted (#1) for review on release-3.7 by Sakshi Bansal
COMMIT: http://review.gluster.org/13444 committed in release-3.7 by Atin Mukherjee (amukherj)

------

commit 976c852eeb8af0abfad8862e5b53e3d82c79ee98
Author: Sakshi <sabansal>
Date:   Wed Sep 23 15:16:34 2015 +0530

    glusterd: validate function for replica volume options

    Backport of http://review.gluster.org/#/c/12215/

    > Change-Id: I5b4a28db101e9f7e07f4b388c7a2594051c9e8dd
    > BUG: 1265479
    > Signed-off-by: Sakshi <sabansal>
    > Reviewed-on: http://review.gluster.org/12215
    > Tested-by: NetBSD Build System <jenkins.org>
    > Tested-by: Gluster Build System <jenkins.com>
    > Reviewed-by: Atin Mukherjee <amukherj>

    BUG: 1308414
    Change-Id: I1ce7c326da82749f8fd13dff11b803c607c853bb
    Signed-off-by: Sakshi <sabansal>
    Reviewed-on: http://review.gluster.org/13444
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Atin Mukherjee <amukherj>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.9, please open a new bug report.

glusterfs-3.7.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-March/025922.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user