Bug 1261711

Summary: Problem with fence_virsh in RHEL 6 - selinux denial
Product: Red Hat Enterprise Linux 6
Reporter: Madison Kelly <mkelly>
Component: fence-agents
Assignee: Marek Grac <mgrac>
Status: CLOSED WONTFIX
QA Contact: cluster-qe <cluster-qe>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 6.7
CC: cluster-maint, dlavu, jpokorny, rbalakri, tojeline
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1427986 (view as bug list)
Environment:
Last Closed: 2017-12-06 10:38:08 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1427986
Attachments:
Script validating some points in the connected comment (flags: none)

Description Madison Kelly 2015-09-10 03:14:14 UTC
Description of problem:

  I've been using KVM-based VMs as a testbed for clusters for ages,
always using fence_virsh.

  I noticed today, though, that fence_virsh is now being blocked by
SELinux (RHEL 6.7, fully updated as of yesterday, 2015-09-08):

====
type=AVC msg=audit(1441752343.878:3269): avc:  denied  { execute } for
pid=8848 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935
scontext=unconfined_u:system_r:fenced_t:s0
tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441752343.878:3269): arch=c000003e syscall=21
success=no exit=-13 a0=1a363a0 a1=1 a2=7f02aa7f89e8 a3=7ffdff0dc7c0
items=0 ppid=7759 pid=8848 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0
egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="fence_virsh"
exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
====

This is basically a copy of a thread I started on 'Cluster Labs - Users' list:

http://clusterlabs.org/pipermail/developers/2015-September/000076.html


Version-Release number of selected component (if applicable):

fence-agents-4.0.15-8.el6.x86_64
cman-3.0.12.1-73.el6.1.x86_64
corosync-1.4.7-2.el6.x86_64


How reproducible:

100%


Steps to Reproduce:
1. Assemble a 2-node cluster with cman (my cluster.conf is below)
2. Manually fence a node with 'fence_node foo'
3. Look at /var/log/messages and audit.log; the manual fence succeeds, but the fenced-initiated fence fails with an SELinux denial.


Actual results:

Fencing is blocked


Expected results:

Fencing to work


Additional info:

====
[root@node1 ~]# ls -Z `which fence_virsh` `which ssh`
-rwxr-xr-x. root root system_u:object_r:ssh_exec_t:s0  /usr/bin/ssh
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/sbin/fence_virsh

[root@node1 ~]# restorecon -v `which fence_virsh` `which ssh`

[root@node1 ~]# ls -Z `which fence_virsh` `which ssh`
-rwxr-xr-x. root root system_u:object_r:ssh_exec_t:s0  /usr/bin/ssh
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/sbin/fence_virsh
====

I wiped audit.log, restarted auditd and then tried to fence manually.
Here is what I saw:

====
[root@node1 ~]# fence_node node2
fence node2 success
====

In messages:

====
Sep  9 02:53:30 node1 fence_node[23468]: fence node2 success
====

A few moments later, corosync noticed the loss of the node and fenced
attempted to fence it, but the attempt failed. Here are the resulting
audit.log entries:

====

type=DAEMON_END msg=audit(1441767198.316:6153): auditd normal halt, sending auid=0 pid=23428 subj=unconfined_u:system_r:initrc_t:s0 res=success
type=DAEMON_START msg=audit(1441767198.441:4809): auditd start, ver=2.3.7 format=raw kernel=2.6.32-573.3.1.el6.x86_64 auid=0 pid=23452 subj=unconfined_u:system_r:auditd_t:s0 res=success
type=CONFIG_CHANGE msg=audit(1441767198.550:9350): audit_backlog_limit=320 old=320 auid=0 ses=2 subj=unconfined_u:system_r:auditctl_t:s0 res=1

type=AVC msg=audit(1441767220.374:9351): avc:  denied  { execute } for  pid=23523 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767220.374:9351): arch=c000003e syscall=21 success=no exit=-13 a0=10461a0 a1=1 a2=7f717ce339e8 a3=7fff0c670080 items=0 ppid=2879 pid=23523 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fence_virsh" exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)

type=AVC msg=audit(1441767220.374:9352): avc:  denied  { execute } for  pid=23523 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767220.374:9352): arch=c000003e syscall=21 success=no exit=-13 a0=10461a0 a1=1 a2=7f717ce339e8 a3=7fff0c6700c8 items=0 ppid=2879 pid=23523 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fence_virsh" exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767220.374:9353): avc:  denied  { execute } for  pid=23523 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767220.374:9353): arch=c000003e syscall=21 success=no exit=-13 a0=10461a0 a1=1 a2=7f717ce339e8 a3=7fff0c6700c8 items=0 ppid=2879 pid=23523 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fence_virsh" exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767220.374:9354): avc:  denied  { execute } for  pid=23523 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767220.374:9354): arch=c000003e syscall=21 success=no exit=-13 a0=10461a0 a1=1 a2=7f717ce339e8 a3=7fff0c6700c8 items=0 ppid=2879 pid=23523 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fence_virsh" exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767220.374:9355): avc:  denied  { execute } for  pid=23523 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767220.374:9355): arch=c000003e syscall=21 success=no exit=-13 a0=10461a0 a1=1 a2=7f717ce339e8 a3=7fff0c6700c8 items=0 ppid=2879 pid=23523 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fence_virsh" exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767223.481:9356): avc:  denied  { execute } for  pid=23550 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767223.481:9356): arch=c000003e syscall=21 success=no exit=-13 a0=f631a0 a1=1 a2=7f66005349e8 a3=7ffebc634ad0 items=0 ppid=2879 pid=23550 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fence_virsh" exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767223.481:9357): avc:  denied  { execute } for  pid=23550 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767223.481:9357): arch=c000003e syscall=21 success=no exit=-13 a0=f631a0 a1=1 a2=7f66005349e8 a3=7ffebc634b18 items=0 ppid=2879 pid=23550 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fence_virsh" exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767223.481:9358): avc:  denied  { execute } for  pid=23550 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767223.481:9358): arch=c000003e syscall=21 success=no exit=-13 a0=f631a0 a1=1 a2=7f66005349e8 a3=7ffebc634b18 items=0 ppid=2879 pid=23550 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fence_virsh" exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767223.481:9359): avc:  denied  { execute } for  pid=23550 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767223.481:9359): arch=c000003e syscall=21 success=no exit=-13 a0=f631a0 a1=1 a2=7f66005349e8 a3=7ffebc634b18 items=0 ppid=2879 pid=23550 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fence_virsh" exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767223.481:9360): avc:  denied  { execute } for  pid=23550 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767223.481:9360): arch=c000003e syscall=21 success=no exit=-13 a0=f631a0 a1=1 a2=7f66005349e8 a3=7ffebc634b18 items=0 ppid=2879 pid=23550 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fence_virsh" exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767226.595:9361): avc:  denied  { execute } for  pid=23575 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767226.595:9361): arch=c000003e syscall=21 success=no exit=-13 a0=df41a0 a1=1 a2=7f604b6d29e8 a3=7ffe8030d6c0 items=0 ppid=2879 pid=23575 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fence_virsh" exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767226.595:9362): avc:  denied  { execute } for  pid=23575 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767226.595:9362): arch=c000003e syscall=21 success=no exit=-13 a0=df41a0 a1=1 a2=7f604b6d29e8 a3=7ffe8030d708 items=0 ppid=2879 pid=23575 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fence_virsh" exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767226.595:9363): avc:  denied  { execute } for  pid=23575 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767226.595:9363): arch=c000003e syscall=21 success=no exit=-13 a0=df41a0 a1=1 a2=7f604b6d29e8 a3=7ffe8030d708 items=0 ppid=2879 pid=23575 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fence_virsh" exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767226.595:9364): avc:  denied  { execute } for  pid=23575 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767226.595:9364): arch=c000003e syscall=21 success=no exit=-13 a0=df41a0 a1=1 a2=7f604b6d29e8 a3=7ffe8030d708 items=0 ppid=2879 pid=23575 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fence_virsh" exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767226.595:9365): avc:  denied  { execute } for  pid=23575 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767226.595:9365): arch=c000003e syscall=21 success=no exit=-13 a0=df41a0 a1=1 a2=7f604b6d29e8 a3=7ffe8030d708 items=0 ppid=2879 pid=23575 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fence_virsh" exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
====

I set SELinux to permissive:

====
[root@node1 ~]# setenforce 0
====

And immediately the fence succeeded:

====
Sep  9 02:53:46 node1 dbus: avc:  received setenforce notice (enforcing=0)
Sep  9 02:53:52 node1 fenced[2879]: fence node2.ccrs.bcn success
====

Here is the audit.log after setting permissive mode:

====
type=MAC_STATUS msg=audit(1441767226.661:9366): enforcing=0 old_enforcing=1 auid=0 ses=2

type=SYSCALL msg=audit(1441767226.661:9366): arch=c000003e syscall=1 success=yes exit=1 a0=3 a1=7ffe514b9f30 a2=1 a3=7ffe514b8cb0 items=0 ppid=2625 pid=23581 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=2 comm="setenforce" exe="/usr/sbin/setenforce" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null)
type=AVC msg=audit(1441767229.702:9367): avc:  denied  { execute } for  pid=23606 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767229.702:9367): arch=c000003e syscall=21 success=yes exit=0 a0=16a11a0 a1=1 a2=7f81b57009e8 a3=7ffc2776dc10 items=0 ppid=2879 pid=23606 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fence_virsh" exe="/usr/bin/python" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767229.705:9368): avc:  denied  { read open } for  pid=23611 comm="fence_virsh" name="ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=AVC msg=audit(1441767229.705:9368): avc:  denied  { execute_no_trans } for  pid=23611 comm="fence_virsh" path="/usr/bin/ssh" dev=vda2 ino=2103935 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
type=SYSCALL msg=audit(1441767229.705:9368): arch=c000003e syscall=59 success=yes exit=0 a0=169f4a0 a1=164ac60 a2=168b620 a3=7ffc2776dd50 items=0 ppid=23606 pid=23611 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts2 ses=2 comm="ssh" exe="/usr/bin/ssh" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767229.707:9369): avc:  denied  { setuid } for  pid=23611 comm="ssh" capability=7  scontext=unconfined_u:system_r:fenced_t:s0 tcontext=unconfined_u:system_r:fenced_t:s0 tclass=capability
type=SYSCALL msg=audit(1441767229.707:9369): arch=c000003e syscall=117 success=yes exit=0 a0=ffffffffffffffff a1=0 a2=ffffffffffffffff a3=3 items=0 ppid=23606 pid=23611 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts2 ses=2 comm="ssh" exe="/usr/bin/ssh" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767229.708:9370): avc:  denied  { search } for  pid=23611 comm="ssh" name=".ssh" dev=vda2 ino=1966197 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_home_t:s0 tclass=dir
type=SYSCALL msg=audit(1441767229.708:9370): arch=c000003e syscall=2 success=no exit=-2 a0=7ffed853ecd0 a1=0 a2=1b6 a3=0 items=0 ppid=23606 pid=23611 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts2 ses=2 comm="ssh" exe="/usr/bin/ssh" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767229.709:9371): avc:  denied  { name_connect } for  pid=23611 comm="ssh" dest=22 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1441767229.709:9371): arch=c000003e syscall=42 success=yes exit=0 a0=3 a1=7fa79084eb30 a2=10 a3=fffffffffffffee0 items=0 ppid=23606 pid=23611 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts2 ses=2 comm="ssh" exe="/usr/bin/ssh" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767229.710:9372): avc:  denied  { setgid } for  pid=23611 comm="ssh" capability=6  scontext=unconfined_u:system_r:fenced_t:s0 tcontext=unconfined_u:system_r:fenced_t:s0 tclass=capability
type=SYSCALL msg=audit(1441767229.710:9372): arch=c000003e syscall=119 success=yes exit=0 a0=0 a1=0 a2=0 a3=e items=0 ppid=23606 pid=23611 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts2 ses=2 comm="ssh" exe="/usr/bin/ssh" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767229.710:9373): avc:  denied  { getattr } for  pid=23611 comm="ssh" path="/root/.ssh" dev=vda2 ino=1966197 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:ssh_home_t:s0 tclass=dir
type=SYSCALL msg=audit(1441767229.710:9373): arch=c000003e syscall=4 success=yes exit=0 a0=7ffed853ecd0 a1=7ffed853ec40 a2=7ffed853ec40 a3=0 items=0 ppid=23606 pid=23611 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts2 ses=2 comm="ssh" exe="/usr/bin/ssh" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767229.710:9374): avc:  denied  { read } for  pid=23611 comm="ssh" name="id_rsa" dev=vda2 ino=1966200 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=unconfined_u:object_r:ssh_home_t:s0 tclass=file
type=AVC msg=audit(1441767229.710:9374): avc:  denied  { open } for  pid=23611 comm="ssh" name="id_rsa" dev=vda2 ino=1966200 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=unconfined_u:object_r:ssh_home_t:s0 tclass=file
type=SYSCALL msg=audit(1441767229.710:9374): arch=c000003e syscall=2 success=yes exit=4 a0=7fa79084e920 a1=0 a2=0 a3=12 items=0 ppid=23606 pid=23611 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts2 ses=2 comm="ssh" exe="/usr/bin/ssh" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1441767229.711:9375): avc:  denied  { getattr } for  pid=23611 comm="ssh" path="/root/.ssh/id_rsa" dev=vda2 ino=1966200 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=unconfined_u:object_r:ssh_home_t:s0 tclass=file
type=SYSCALL msg=audit(1441767229.711:9375): arch=c000003e syscall=5 success=yes exit=0 a0=4 a1=7ffed853d3d0 a2=7ffed853d3d0 a3=12 items=0 ppid=23606 pid=23611 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts2 ses=2 comm="ssh" exe="/usr/bin/ssh" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
====

Here is my cluster.conf, in case it matters:

====
[root@node1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster name="ccrs" config_version="1">
	<cman expected_votes="1" two_node="1" />
	<clusternodes>
		<clusternode name="node1.ccrs.bcn" nodeid="1">
			<altname name="node1.sn" />
			<fence>
				<method name="kvm">
					<device name="kvm_host" port="an-a02n01" delay="15" action="reboot" />
				</method>
			</fence>
		</clusternode>
		<clusternode name="node2.ccrs.bcn" nodeid="2">
			<altname name="node2.sn" />
			<fence>
				<method name="kvm">
					<device name="kvm_host" port="an-a02n02" action="reboot" />
				</method>
			</fence>
		</clusternode>
	</clusternodes>
	<fencedevices>
		<fencedevice name="kvm_host" agent="fence_virsh" ipaddr="192.168.122.1" login="root" passwd="it's a secret" />
	</fencedevices>
	<fence_daemon post_join_delay="30" />
	<totem rrp_mode="active" secauth="off"/>
	<rm log_level="5">
		<resources>
			<script file="/etc/init.d/drbd" name="drbd"/>
			<script file="/etc/init.d/wait-for-drbd" name="wait-for-drbd"/>
			<script file="/etc/init.d/clvmd" name="clvmd"/>
			<clusterfs device="/dev/node1_vg0/shared" force_unmount="1" fstype="gfs2" mountpoint="/shared" name="sharedfs" />
			<script file="/etc/init.d/libvirtd" name="libvirtd"/>
		</resources>
		<failoverdomains>
			<failoverdomain name="only_n01" nofailback="1" ordered="0"
restricted="1">
				<failoverdomainnode name="node1.ccrs.bcn"/>
			</failoverdomain>
			<failoverdomain name="only_n02" nofailback="1" ordered="0"
restricted="1">
				<failoverdomainnode name="node2.ccrs.bcn"/>
			</failoverdomain>
			<failoverdomain name="primary_n01" nofailback="1" ordered="1"
restricted="1">
				<failoverdomainnode name="node1.ccrs.bcn" priority="1"/>
				<failoverdomainnode name="node2.ccrs.bcn" priority="2"/>
			</failoverdomain>
			<failoverdomain name="primary_n02" nofailback="1" ordered="1"
restricted="1">
				<failoverdomainnode name="node1.ccrs.bcn" priority="2"/>
				<failoverdomainnode name="node2.ccrs.bcn" priority="1"/>
			</failoverdomain>
		</failoverdomains>
		<service name="storage_n01" autostart="1" domain="only_n01"
exclusive="0" recovery="restart">
			<script ref="drbd">
				<script ref="wait-for-drbd">
					<script ref="clvmd">
						<clusterfs ref="sharedfs"/>
					</script>
				</script>
			</script>
		</service>
		<service name="storage_n02" autostart="1" domain="only_n02"
exclusive="0" recovery="restart">
			<script ref="drbd">
				<script ref="wait-for-drbd">
					<script ref="clvmd">
						<clusterfs ref="sharedfs"/>
					</script>
				</script>
			</script>
		</service>
		<service name="libvirtd_n01" autostart="1" domain="only_n01"
exclusive="0" recovery="restart">
			<script ref="libvirtd"/>
		</service>
		<service name="libvirtd_n02" autostart="1" domain="only_n02"
exclusive="0" recovery="restart">
			<script ref="libvirtd"/>
		</service>
	</rm>
</cluster>
====

Comment 2 Dan Lavu 2015-09-10 03:39:02 UTC
I noticed that there is an SELinux boolean that permits the fenced_t domain to use ssh; can you please try enabling it?

setsebool -P fenced_can_ssh on

And what are the results? Thanks.

Comment 3 Madison Kelly 2015-09-10 04:21:48 UTC
That appears to have fixed it.

====
[root@node1 ~]# sestatus 
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   enforcing
Mode from config file:          permissive
Policy version:                 24
Policy from config file:        targeted
====

====
[root@node1 ~]# setsebool -P fenced_can_ssh on
[root@node1 ~]# ls -Z `which fence_virsh` `which fence_ipmilan` `which ssh`
-rwxr-xr-x. root root system_u:object_r:ssh_exec_t:s0  /usr/bin/ssh
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/sbin/fence_ipmilan
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/sbin/fence_virsh
====

====
[root@node1 ~]# clustat
Cluster Status for ccrs @ Thu Sep 10 04:19:19 2015
Member Status: Quorate

 Member Name                                            ID   Status
 ------ ----                                            ---- ------
 node1.ccrs.bcn                                             1 Online, Local
 node2.ccrs.bcn                                             2 Online
====

====
[root@node1 ~]# fence_node node2
fence node2 success
====

syslog:

====
Sep 10 04:18:44 node1 dbus: avc:  received policyload notice (seqno=2)
Sep 10 04:18:44 node1 dbus: [system] Reloaded configuration
Sep 10 04:18:44 node1 setsebool: The fenced_can_ssh policy boolean was changed to on by root
====

Manual fence call:
====
Sep 10 04:19:28 node1 fence_node[27458]: fence node2 success
====

Corosync-initiated fence:

====
Sep 10 04:19:35 node1 corosync[2792]:   [TOTEM ] A processor failed, forming new configuration.
Sep 10 04:19:37 node1 corosync[2792]:   [QUORUM] Members[1]: 1
Sep 10 04:19:37 node1 corosync[2792]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Sep 10 04:19:37 node1 corosync[2792]:   [CPG   ] chosen downlist: sender r(0) ip(10.20.10.1) r(1) ip(10.10.10.1) ; members(old:2 left:1)
Sep 10 04:19:37 node1 corosync[2792]:   [MAIN  ] Completed service synchronization, ready to provide service.
Sep 10 04:19:37 node1 kernel: dlm: closing connection to node 2
Sep 10 04:19:37 node1 fenced[2879]: node_history_fence_external no nodeid -1
Sep 10 04:19:37 node1 fenced[2879]: fencing node node2.ccrs.bcn
Sep 10 04:19:40 node1 fenced[2879]: fence node2.ccrs.bcn success
====

Excellent!

Now, is this a bug to be fixed in the package, or something I need to do manually when I set up clusters in the future?

Comment 4 Dan Lavu 2015-09-10 04:43:56 UTC
This will be something you'd want to add to your configuration steps. It is a boolean to be enabled; not all clusters use this agent, so I don't think we can convince the SELinux folks to enable it by default.

Comment 5 Madison Kelly 2015-09-10 04:47:06 UTC
Could the fence-agents package set it in the %post section of the RPM? I could imagine many users hitting this and being stumped...

Comment 6 Dan Lavu 2015-09-10 08:00:13 UTC
Digimer,

I agree, it is something that should be documented better; the official documentation only makes a note about the fenced_can_network_connect SELinux boolean and fence_xvm.

Maybe file an RFE to have it added to the documentation and to the fence agent directly.

Comment 9 Marek Grac 2015-09-14 08:58:48 UTC
@Digimer:

I'm not sure about adding it to the post-install section, because fence agents are also integrated with other products (e.g. RHEV, OpenStack) where this setup would open up security more than required. The best place to do it is probably in pcs, which can enable the boolean only when the fence agent is really used in a cluster.

I will investigate that a bit more.

Comment 10 Marek Grac 2015-09-14 14:27:04 UTC
@Digimer:

I have tested 6.7, 6.6, 6.5 and 6.4 (I don't have earlier versions installed) and fenced_can_ssh is there in every version (and is off by default). So this does not look like a regression to me.

Comment 11 Madison Kelly 2015-09-14 15:27:40 UTC
@Marek:

I totally believe you, but somehow this only started to be a problem for me very recently. I've used this setup to test HA on VMs for quite some time and never hit this SELinux issue. I am curious/worried about what might have changed.

Re: "The best place where to do it is probably in pcs";

Please don't forget rgmanager. :)

For me personally, I now have our installer enabling fenced_can_ssh, so for me the issue is fixed. Can I make a suggestion, though? Can you have fence_virsh check the boolean and, if it isn't set, log a more useful error message telling the user what is wrong?

Comment 12 Jan Pokorný [poki] 2015-09-14 17:05:50 UTC
Marku, it's a fence agents vs. SELinux integration issue, please do not
try to push changes elsewhere.  It's simply not manageable/scalable,
in addition to digimer's point wrt. rgmanager:

- do you really want pcs to play SELinux magic across all nodes
  at arbitrary configuration, over and over, just to be sure?
  pcs is a management tool, not a tool for working around distribution
  components not playing well together out of the box

- this applies to RHEL distributions only; everything besides the SELinux
  policy tends to be suitable for general consumption by arbitrary
  distros (and distro-specific patches are something one wants to
  avoid as much as possible)

Comment 13 Marek Grac 2015-09-15 07:43:01 UTC
@Jan:

pcs & pacemaker are tools that induce usage of fenced_can_ssh (and others) so they should be one that allow them in SELinux. So, I don't think that putting it there is work-around, it is more standard integration stuff. 

From the fence agents' perspective, we can include all required info in the metadata, configurable at build time, so it won't be distro-specific at all.

The proposition made by Tomas Jelinek to do it in the post-install script of pcs is an acceptable solution for both pcs and users.

@Digimer:

IMHO it is not possible for applications to find out that SELinux blocked something. I can improve the error message to contain info about SELinux booleans; does that sound good?

Comment 14 Tomas Jelinek 2015-09-15 09:17:54 UTC
I was saying to do it in the post-install script of the particular fence agents, not pcs. It does not make any sense to do this in the post-install of pcs. For example, one can run pcs on a host which is not part of a cluster at all (and use the host to manage other clusters via the pcsd web UI).

Comment 15 Madison Kelly 2015-09-15 13:30:07 UTC
@Marek

I'm all for anything that helps a user realize the source of a failure more easily. Adding something like "Hint: check 'getsebool fenced_can_ssh', it needs to be 'on'." if SELinux is enforcing? Whatever makes sense to you.

@Tomas

> I was saying to do it in post-install script of particular fence agents

Makes logical sense to me.

Comment 16 Jan Pokorný [poki] 2015-09-15 19:43:03 UTC
Created attachment 1073829 [details]
Script validating some points in the connected comment

A couple of notes here:

1. note that setting a SELinux boolean is an expensive operation;
   "/usr/sbin/setsebool -P fenced_can_ssh on" takes ca. 35 seconds
   in my VM

2. setsebool doesn't take the current state into account; it will
   do this expensive operation even if it's not necessary
   (already enabled)

1. + 2. --> you always want to run:
    LANG=C /usr/sbin/getsebool fenced_can_ssh | grep -qE 'on$' \
      || /usr/sbin/setsebool -P fenced_can_ssh on
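
For completeness, a minimal sketch of the same idempotent guard as it
might look in an installer or helper script written for the Python 2
stack shipped with RHEL 6 (the ensure_fenced_can_ssh name is mine and
purely illustrative; only getsebool/setsebool themselves come from this
report):

    import subprocess

    def ensure_fenced_can_ssh():
        # getsebool prints e.g. "fenced_can_ssh --> on"
        p = subprocess.Popen(["/usr/sbin/getsebool", "fenced_can_ssh"],
                             stdout=subprocess.PIPE)
        out = p.communicate()[0]
        if p.returncode == 0 and out.strip().endswith("on"):
            return  # already enabled, skip the expensive persistent write
        # setsebool -P rewrites the policy store; per note 1 this can take
        # tens of seconds, so only do it when the boolean is actually off
        subprocess.call(["/usr/sbin/setsebool", "-P",
                         "fenced_can_ssh", "on"])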

---

3. the fencing library (core of the fence agents) does *NOT* do a proper
   job of detecting whether the commands to be used are actually
   regular existing files, and if so, whether they are executable

4. the python-pexpect package will then not save the situation (either
   the errors/exceptions are not propagated correctly or the fencing
   library doesn't handle them well)

5. from the SELinux vs. Python perspective, one can check without risk
   of an exception:
   - os.access(path, os.F_OK) <-- whether file exists at all, should
                                  report true value for /usr/bin/ssh
                                  even for fenced_t process
   - os.access(path, os.X_OK) <-- whether executable file exists, should
                                  report false value for /usr/bin/ssh
                                  for fenced_t process, even if it
                                  exists and is executable
   and while being prepared for OSError exception:
   - os.stat(path) <-- wrapper around stat(3); for /usr/bin/ssh in
                        a fenced_t process, it will raise that exception,
                        and if its errno==13 (errno.EACCES), the cause is
                        allegedly (not 100%) SELinux
   (similarly with subprocess.Popen, but this one has the undesired
   side-effect of actually running the executable if possible)

   FWIW, this partially refutes the claim from [comment 13]:

> IMHO it is not possible for applications to find out that SELinux
> blocked something. I can improve error message to contain info about
> SELinux booleans, does it sounds good?

   as os.stat + OSError + errno == 13 is quite a reliable combination
   to detect SELinux silently blocking access

3. + 4. + 5. --> there is an apparent technical solution for when to
   proceed with [comment 15]:

> anything that helps a user realize the source of a failure more easily. 
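
To make point 5 concrete, here is a minimal sketch (not actual
fence-agents code; the function name and message wording are only
illustrative) of how an agent could classify the failure and hint at the
boolean, using the os.access/os.stat behaviour described above:

    import errno
    import os

    def explain_exec_failure(path):
        # F_OK keeps reporting True for /usr/bin/ssh even in fenced_t
        if not os.access(path, os.F_OK):
            return "%s does not exist" % path
        try:
            os.stat(path)
        except OSError as e:
            if e.errno == errno.EACCES:
                # stat() denied with EACCES is a strong (though not 100%)
                # indication that SELinux is blocking access
                return ("%s exists but access is denied, possibly by "
                        "SELinux; check 'getsebool fenced_can_ssh'" % path)
            raise
        if not os.access(path, os.X_OK):
            # X_OK is reported False for /usr/bin/ssh in fenced_t even
            # though the file itself is executable
            return "%s is not executable by this process" % path
        return None  # nothing obviously wrong with the executable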

---

Attached is a script used to validate some points.  To use it,
the environment has to be prepared in a special way:
SELinux has to be enabled, but permissive only (setenforce 0), as the
artificial transition of the context (as with the runcon utility) is,
apparently, also a protected action.

Hence the sequence to play with the script, with the setsebool line
commented out or not for good measure, is:
# wget https://bugzilla.redhat.com/<ATTACHMENT> -O test.py
# setenforce 0
# runcon system_u:object_r:fenced_t:s0 /usr/bin/python test.py \
  /usr/bin/ssh -V

Results with the setsebool line commented out:

> Exist and Executable and Executable: /usr/bin/ssh
> Running: /usr/sbin/setenforce 1
> Process returned: 0
> SELinux switch to enforcing
> System refused access to `/usr/bin/ssh', perhaps due to SELinux
> Exist and Not executable and Not executable: /usr/bin/ssh
> Running: /usr/bin/ssh -V
> System refused to execute `/usr/bin/ssh -V', perhaps due to SELinux

and when it is not commented out:

> Exist and Executable and Executable: /usr/bin/ssh
> Running: /usr/sbin/setsebool -P fenced_can_ssh on
> Process returned: 0
> Running: /usr/sbin/setenforce 1
> Process returned: 0
> SELinux switch to enforcing
> Exist and Executable and Executable: /usr/bin/ssh
> Running: /usr/bin/ssh -V
> OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
> Process returned: 0

Comment 17 Jan Pokorný [poki] 2015-09-15 19:45:09 UTC
Re the debate as to which package should run the setsebool command
in its post-install scriptlet:

Let me suggest an alternative to sticking with pcs/pacemaker:

- fence-agents.srpm will spawn another, dedicated subpackage,
  say fence-agents-cluster, that will be empty and only
  exist for that post-install command, plus a postun scriptlet
  will contain a command that reverts that, i.e.,
  /usr/sbin/setsebool -P fenced_can_ssh on

- we can then decide whether pacemaker should require fence-agents-cluster,
  or whether "please install fence-agents-cluster" will be a documented
  solution for the title issue; either way, everybody could be happy again,
  without tainting the specfiles of distinct components

Comment 18 Jan Pokorný [poki] 2015-09-15 19:46:34 UTC
Re [comment 17]:

the reverting command is apparently:
/usr/sbin/setsebool -P fenced_can_ssh off

Comment 19 Madison Kelly 2015-09-15 19:54:06 UTC
This has slid above my coding skills, so let me take a step back and say that my only real concern is: if/when fence_virsh fails, something in /var/log/messages hints the user to look at fenced_can_ssh. As a user, the time and effort needed to diagnose this and find the right magical incantation was the hard part, not the ~30 seconds it took to actually run setsebool. Anything done in addition to avoid it failing in the first place is icing on the cake.

Comment 21 Jan Kurik 2017-12-06 10:38:08 UTC
Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available.

The official life cycle policy can be reviewed here:

http://redhat.com/rhel/lifecycle

This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL:

https://access.redhat.com/