Description of problem:
pacemaker (a possible rgmanager replacement) does not have any SELinux policy defined and starts in the initrc_t domain.

Version-Release number of selected component (if applicable):
selinux-policy-3.7.19-139.el6.noarch

How reproducible:
always

Steps to Reproduce:
1. service pacemaker start

Actual results:
pacemakerd and its child daemons run in the wrong context (initrc_t).

Expected results:
pacemaker's own context for all of its daemons, with proper permissions (TBD).

Additional info:
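A quick way to confirm the wrong context described above (the output line is only illustrative; the exact PID and fields will differ):

$ service pacemaker start
$ ps -eZ | grep pacemakerd
system_u:system_r:initrc_t:s0    1234 ?  00:00:00 pacemakerd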
Jaroslav, is it going to work the same way as rgmanager? If so, you could try treating it as rgmanager:

$ chcon -t rgmanager_exec_t /usr/sbin/pacemakerd

We added a pacemaker policy to Fedora, but if this policy ends up unconfined like rgmanager, we will treat it the same way as rgmanager.
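If the chcon route is taken, a sketch of how to verify it and (optionally) make the label survive a filesystem relabel might look like the following; the semanage/restorecon step is an assumption beyond the suggestion above, not something already agreed on:

$ ls -Z /usr/sbin/pacemakerd
$ semanage fcontext -a -t rgmanager_exec_t /usr/sbin/pacemakerd
$ restorecon -v /usr/sbin/pacemakerd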
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development. This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.
Stupid question, but have we tried it with some cluster services (like an IP address) configured?
Created attachment 672367
Working policy module

This one can be considered a starting point only, because I use it with corosync2 but not with cman. Please see the thread at http://comments.gmane.org/gmane.linux.highavailability.pacemaker/15817 for more details.
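The attachment itself is not reproduced here. As a rough sketch only, a minimal module of this kind is typically built roughly like the following (assuming selinux-policy-devel is installed); the type names and rules below are assumptions, not the contents of attachment 672367:

$ cat > pacemaker.te <<'EOF'
policy_module(pacemaker, 1.0.0)

# domain for pacemakerd and the type for its executable (assumed names)
type pacemaker_t;
type pacemaker_exec_t;
init_daemon_domain(pacemaker_t, pacemaker_exec_t)

# state under /var/lib and pid files under /var/run (assumed types)
type pacemaker_var_lib_t;
files_type(pacemaker_var_lib_t)
type pacemaker_var_run_t;
files_pid_file(pacemaker_var_run_t)

allow pacemaker_t self:process { fork signal_perms };
manage_dirs_pattern(pacemaker_t, pacemaker_var_lib_t, pacemaker_var_lib_t)
manage_files_pattern(pacemaker_t, pacemaker_var_lib_t, pacemaker_var_lib_t)
manage_files_pattern(pacemaker_t, pacemaker_var_run_t, pacemaker_var_run_t)

logging_send_syslog_msg(pacemaker_t)
EOF
$ cat > pacemaker.fc <<'EOF'
/usr/sbin/pacemakerd  --  gen_context(system_u:object_r:pacemaker_exec_t,s0)
EOF
$ make -f /usr/share/selinux/devel/Makefile pacemaker.pp
$ semodule -i pacemaker.pp
$ restorecon -v /usr/sbin/pacemakerd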
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-0314.html
How come this got closed? Can we get some feedback on the patch in comment #17 please? A cluster that can't control resources isn't very useful. Do we need to clone this into 6.4 or something?
This bug was about a new policy which we added. Please open a new one. Thank you.