After upgrading the selinux-policy packages, please enable the gluster_use_execmem boolean to make the scenario work.
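For example, to enable it manually (a minimal sketch; this assumes the upgraded selinux-policy actually ships the boolean and that the commands are run as root):

    # confirm the boolean exists after the selinux-policy upgrade
    getsebool gluster_use_execmem

    # enable it persistently across reboots
    setsebool -P gluster_use_execmem on

    # equivalent via semanage (policycoreutils-python on RHEL,
    # policycoreutils-python-utils on Fedora)
    semanage boolean -m --on gluster_use_execmem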
REVIEW: https://review.gluster.org/17806 (common-ha: enable and disable selinux gluster_use_execmem) posted (#1) for review on release-3.10 by Kaleb KEITHLEY (kkeithle)
REVIEW: https://review.gluster.org/17806 (common-ha: enable and disable selinux gluster_use_execmem) posted (#2) for review on release-3.10 by Kaleb KEITHLEY (kkeithle)
COMMIT: https://review.gluster.org/17806 committed in release-3.10 by Kaleb KEITHLEY (kkeithle)
------
commit da9f6e9a4123645a20b664a1c167599b64591f7c
Author: Kaleb S. KEITHLEY <kkeithle>
Date:   Mon Jul 17 11:07:40 2017 -0400

    common-ha: enable and disable selinux gluster_use_execmem

    Starting in Fedora 26 and RHEL 7.4 there are new targeted policies in
    selinux which include a tuneable to allow glusterd->ganesha-ha.sh->pcs
    to access the pcs config, i.e. gluster_use_execmem.

    Note: rpm doesn't have a way to distinguish between RHEL 7.3 and 7.4,
    or between selinux-policy 3.13.1-X and 3.13.1-Y, so the boolean can't
    be enabled for RHEL at this time.

    /usr/sbin/semanage is in policycoreutils-python in RHEL (versus
    policycoreutils-python-utils in Fedora).

    Requires selinux-policy >= 3.13.1-160 in RHEL 7. The corresponding
    version in Fedora 26 seems to be selinux-policy-3.13.1-259 or so.
    (Possibly earlier versions as well, but that is what was in F26 when
    checked.)

    Change-Id: Ic474b3f7739ff5be1e99d94d00b55caae4ceb5a0
    BUG: 1471917
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle>
    Reviewed-on: https://review.gluster.org/17806
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: soumya k <skoduri>
    Reviewed-by: Atin Mukherjee <amukherj>
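For reference, the gist of the ganesha-ha.sh change is roughly the following (a sketch inferred from the commit message above, not the literal patch; the exact placement in the setup and teardown paths is an assumption):

    # during HA setup: allow glusterd->ganesha-ha.sh->pcs to access the pcs config
    if selinuxenabled ; then
        semanage boolean -m --on gluster_use_execmem
    fi

    # during HA teardown: restore the default
    if selinuxenabled ; then
        semanage boolean -m --off gluster_use_execmem
    fi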
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-3.10.5, please open a new bug report.

glusterfs-3.10.5 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-August/000079.html
[2] https://www.gluster.org/pipermail/gluster-users/