Bug 1244272 - [SELinux] nfs-ganesha: AVC denied for nfs-ganesha.service, ganesha cluster setup fails in RHEL 7
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: selinux-policy
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Miroslav Grepl
QA Contact: Milos Malik
URL:
Whiteboard:
Depends On:
Blocks: 1202842 1212796 1242487 1248653
 
Reported: 2015-07-17 16:18 UTC by Prasanth
Modified: 2015-11-19 10:41 UTC
CC List: 18 users

Fixed In Version: selinux-policy-3.13.1-35.el7
Doc Type: Bug Fix
Doc Text:
Attempting to set up Gluster storage on an NFS-Ganesha cluster previously failed due to an Access Vector Cache (AVC) denial error. The responsible SELinux policy has been adjusted to allow handling of volumes mounted by NFS-Ganesha, and the described failure no longer occurs.
Clone Of: 1242487
Clones: 1248653
Environment:
Last Closed: 2015-11-19 10:41:24 UTC
Target Upstream Version:
Embargoed:


Attachments
audit.log (936.65 KB, text/plain), 2015-07-23 10:49 UTC, Apeksha


Links
Red Hat Product Errata RHBA-2015:2300 (normal, SHIPPED_LIVE): selinux-policy bug fix update, last updated 2015-11-19 09:55:26 UTC

Description Prasanth 2015-07-17 16:18:38 UTC
+++ This bug was initially created as a clone of Bug #1242487 +++

Description of problem:
SELinux: AVC denial for nfs-ganesha.service; ganesha cluster setup fails in RHEL 7

Version-Release number of selected component (if applicable):
selinux-policy-3.13.1-31.el7.noarch
glusterfs-3.7.1-9.el7rhgs.x86_64
nfs-ganesha-2.2.0-5.el7rhgs.x86_64

How reproducible: Always


Steps to Reproduce:
1. The "gluster nfs-ganesha enable" command fails:
[root@nfs1 ~]# gluster nfs-ganesha enable
Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the trusted pool. Do you still want to continue?
 (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha: failed: NFS-Ganesha failed to start.Please see log file for details

The following AVC denial message is found in audit.log:

type=USER_AVC msg=audit(1436750416.293:3599): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  denied  { start } for auid=-1 uid=0 gid=0 path="/usr/lib/systemd/system/nfs-ganesha.service" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:nfsd_unit_file_t:s0 tclass=service  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'


Actual results: Ganesha cluster setup fails due to AVC denial errors


Expected results: No AVC denial errors


Additional info:
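
A minimal sketch of how the denial can be confirmed and a candidate local policy generated straight from the audit log (the module name "ganeshalocal" is arbitrary; review the generated .te before loading anything):

# ausearch -m avc -m user_avc -i -ts today | grep glusterd_t
# ausearch -m avc -m user_avc -ts today | audit2allow -M ganeshalocal
# semodule -i ganeshalocal.pp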

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-07-13 08:19:19 EDT ---

This bug is automatically being proposed for Red Hat Gluster Storage 3.1.0 by setting the release flag 'rhgs-3.1.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Milos Malik on 2015-07-13 08:50:13 EDT ---

# cat bz1242487.te 
policy_module(bz1242487,1.0)

require {
  type glusterd_t;
  type nfsd_unit_file_t;
  class service { start stop status enable disable load reload };
}

allow glusterd_t nfsd_unit_file_t : service { start stop status enable disable load reload };

# make -f /usr/share/selinux/devel/Makefile 
Compiling targeted bz1242487 module
/usr/bin/checkmodule:  loading policy configuration from tmp/bz1242487.tmp
/usr/bin/checkmodule:  policy configuration loaded
/usr/bin/checkmodule:  writing binary representation (version 17) to tmp/bz1242487.mod
Creating targeted bz1242487.pp policy package
rm tmp/bz1242487.mod tmp/bz1242487.mod.fc
# semodule -i bz1242487.pp 
#

Does it work now?
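
A quick way to check that the module built above actually loaded and that the rule landed in the policy (a sketch; sesearch should then list the new allow rule for glusterd_t):

# semodule -l | grep bz1242487
# sesearch -s glusterd_t -t nfsd_unit_file_t -c service -A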

--- Additional comment from Apeksha on 2015-07-13 11:01:02 EDT ---

Yes, with the local fix I am able to set up the ganesha cluster and am no longer seeing the AVC denial for the nfs-ganesha service.

But I am seeing 3 new AVC denial errors:

1.  type=AVC msg=audit(1436760013.962:3807): avc:  denied  { read } for  pid=13677 comm="find" name="sepolgen" dev="dm-0" ino=135293101 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:selinux_config_t:s0 tclass=dir

2.  type=AVC msg=audit(1436760018.523:3817): avc:  denied  { connectto } for  pid=13746 comm="crm_mon" path=006369625F726F0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:cluster_t:s0 tclass=unix_stream_socket

3.  type=AVC msg=audit(1436760019.680:3818): avc:  denied  { connectto } for  pid=13750 comm="cibadmin" path=006369625F72770000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:cluster_t:s0 tclass=unix_stream_socket

--- Additional comment from Milos Malik on 2015-07-13 11:11:43 EDT ---

# cat bz1242487.te 
policy_module(bz1242487,1.0)

require {
  type glusterd_t;
  type cluster_t;
  type nfsd_unit_file_t;
  class service { start stop status enable disable load reload };
  class unix_stream_socket { connectto };
}

allow glusterd_t nfsd_unit_file_t : service { start stop status enable disable load reload };
allow glusterd_t cluster_t : unix_stream_socket { connectto };

# make -f /usr/share/selinux/devel/Makefile 
Compiling targeted bz1242487 module
/usr/bin/checkmodule:  loading policy configuration from tmp/bz1242487.tmp
/usr/bin/checkmodule:  policy configuration loaded
/usr/bin/checkmodule:  writing binary representation (version 17) to tmp/bz1242487.mod
Creating targeted bz1242487.pp policy package
rm tmp/bz1242487.mod tmp/bz1242487.mod.fc
# semodule -i bz1242487.pp 
#

--- Additional comment from Milos Malik on 2015-07-13 11:19:28 EDT ---

Is Ganesha able to start various cluster services? Does Ganesha use init scripts or systemd unit files when starting them?

--- Additional comment from Meghana on 2015-07-13 11:55:34 EDT ---

Hi Milos,

We execute "service nfs-ganesha start" to start the NFS-Ganesha service. And after that, as part of the set up, we run various pcs commands to set up the cluster. crm_mon errors are related to corosync/pacemaker as far as I can see.
The set up would have failed because NFS-GAnesha didn't start in the first place.

--- Additional comment from Apeksha on 2015-07-13 11:58:39 EDT ---

I was able to set up the cluster with the workaround in comment 2.
But yes, I am seeing the 3 additional AVC denial errors mentioned in comment 3.

--- Additional comment from Prasanth on 2015-07-14 01:32:58 EDT ---

(In reply to Apeksha from comment #7)
> I was able to set up the cluster with the workaround in comment 2.
> But yes, I am seeing the 3 additional AVC denial errors mentioned in comment 3.

Apeksha, Milos has provided an updated local policy module in comment 4, which should resolve the 3 other AVCs you had seen. Please apply it and let us know the test results.

--- Additional comment from Apeksha on 2015-07-14 02:04:49 EDT ---

With the workaround mentioned in comment 4, I am able to set up the ganesha cluster, but am seeing 2 AVC errors:

1. type=AVC msg=audit(1436813573.923:5217): avc:  denied  { read } for  pid=16855 comm="find" name="sepolgen" dev="dm-0" ino=135293101 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:selinux_config_t:s0 tclass=dir

2. type=USER_AVC msg=audit(1436813579.129:5220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  denied  { status } for auid=-1 uid=0 gid=0 path="system" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:init_t:s0 tclass=system  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-07-14 07:20:20 EDT ---

Since this bug has been approved for the Red Hat Gluster Storage 3.1.0 release, through release flag 'rhgs-3.1.0+', the Target Release is being automatically set to 'RHGS 3.1.0'

--- Additional comment from Apeksha on 2015-07-17 05:23:45 EDT ---

Seeing this AVC denial error on a fresh RHEL 7.1 setup with the latest selinux-policy RPM, selinux-policy-3.13.1-32.el7.noarch:

type=USER_AVC msg=audit(1437124950.248:2418): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  denied  { start } for auid=-1 uid=0 gid=0 path="/usr/lib/systemd/system/nfs-ganesha.service" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:nfsd_unit_file_t:s0 tclass=service  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'

Is this AVC message fixed in this RPM, selinux-policy-3.13.1-32.el7.noarch?

Or do we still have to use the workaround mentioned in comment 2/comment 4?

--- Additional comment from Milos Malik on 2015-07-17 05:33:14 EDT ---

# rpm -qa selinux-policy\*
selinux-policy-minimum-3.13.1-33.el7.noarch
selinux-policy-sandbox-3.13.1-33.el7.noarch
selinux-policy-doc-3.13.1-33.el7.noarch
selinux-policy-3.13.1-33.el7.noarch
selinux-policy-targeted-3.13.1-33.el7.noarch
selinux-policy-devel-3.13.1-33.el7.noarch
selinux-policy-mls-3.13.1-33.el7.noarch
# sesearch -s glusterd_t -t nfsd_unit_file_t -c service -A -C

# sesearch -s glusterd_t -t nfsd_unit_file_t -c service -D -C

# 

Unfortunately, the workaround is still needed.

--- Additional comment from Prasanth on 2015-07-17 07:31:18 EDT ---

(In reply to Apeksha from comment #11)
> Seeing this AVC denial error on a fresh RHEL 7.1 setup with the latest
> selinux-policy RPM, selinux-policy-3.13.1-32.el7.noarch:
> 
> type=USER_AVC msg=audit(1437124950.248:2418): pid=1 uid=0 auid=4294967295
> ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  denied  { start }
> for auid=-1 uid=0 gid=0 path="/usr/lib/systemd/system/nfs-ganesha.service"
> scontext=system_u:system_r:glusterd_t:s0
> tcontext=system_u:object_r:nfsd_unit_file_t:s0 tclass=service 
> exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'
> 
> Is this AVC message fixed in this RPM, selinux-policy-3.13.1-32.el7.noarch?
> 
> Or do we still have to use the workaround mentioned in comment 2/comment 4?

The latest available policy is selinux-policy-3.13.1-33.el7. Are you seeing this issue with that build as well? Please check and confirm.

--- Additional comment from Prasanth on 2015-07-17 07:56:38 EDT ---

Apeksha, what I understood from Milos is that the workaround from comment #4 will be needed until mgrepl creates a new build, -34.el7, which will have all the fixes.

--- Additional comment from Saurabh on 2015-07-17 09:31:08 EDT ---

(In reply to Prasanth from comment #13)
> (In reply to Apeksha from comment #11)
> > Seeing this AVC denial error on a fresh RHEL 7.1 setup with the latest
> > selinux-policy RPM, selinux-policy-3.13.1-32.el7.noarch:
> > 
> > type=USER_AVC msg=audit(1437124950.248:2418): pid=1 uid=0 auid=4294967295
> > ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  denied  { start }
> > for auid=-1 uid=0 gid=0 path="/usr/lib/systemd/system/nfs-ganesha.service"
> > scontext=system_u:system_r:glusterd_t:s0
> > tcontext=system_u:object_r:nfsd_unit_file_t:s0 tclass=service 
> > exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'
> > 
> > Is this AVC message fixed in this RPM, selinux-policy-3.13.1-32.el7.noarch?
> > 
> > Or do we still have to use the workaround mentioned in comment 2/comment 4?
> 
> The latest available policy is selinux-policy-3.13.1-33.el7. Are you seeing
> this issue with that build as well? Please check and confirm.

Yes, she has seen the issue with the latest RPMs as well; she had to apply the workaround mentioned above in this BZ.

So it would be preferable to have the fix in the RPMs.
Milos, can you confirm that we will get RPMs containing the fix for this issue, so that we can avoid using the workaround?

--- Additional comment from Milos Malik on 2015-07-17 11:49:11 EDT ---

My plan is to persuade mgrepl to put as many fixes as possible into selinux-policy builds so that you don't need to use workarounds.

Comment 1 Miroslav Grepl 2015-07-20 12:32:31 UTC
commit 285a53012c81e74c5b86480e90649165566b7f7f
Author: Miroslav Grepl <mgrepl>
Date:   Mon Jul 20 14:08:03 2015 +0200

    Allow glusterd to manage nfsd and rpcd services.
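
Once a build containing this commit is installed, the new rule should be visible in the binary policy (a sketch, reusing the sesearch query Milos ran earlier; non-empty output means the fix is in):

# sesearch -s glusterd_t -t nfsd_unit_file_t -c service -A -C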

Comment 2 Apeksha 2015-07-23 10:42:35 UTC
Seeing AVC errors with selinux-policy-3.13.1-34.el7:

1. type=AVC msg=audit(07/23/2015 05:07:56.705:3038) : avc:  denied  { getattr } for  pid=17162 comm=find path=/var/lib/libvirt/qemu/capabilities.monitor.sock dev="dm-1" ino=1179980 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=sock_file 
2. type=AVC msg=audit(07/23/2015 05:08:01.303:3054) : avc:  denied  { connectto } for  pid=17229 comm=crm_mon path=cib_ro scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:cluster_t:s0 tclass=unix_stream_socket
3. type=AVC msg=audit(07/23/2015 05:08:03.890:3065) : avc:  denied  { connectto } for  pid=17250 comm=cibadmin path=cib_rw scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:cluster_t:s0 tclass=unix_stream_socket

Seeing the same AVCs with selinux-policy-3.13.1-23.el7_1.12.noarch, which is the latest selinux-policy RPM on the RHEL 7.1 RHS ISO.


[root@nfs1 ~]# ausearch -m avc -m user_avc -m selinux_err -i -ts today|audit2allow


#============= fprintd_t ==============
allow fprintd_t cluster_t:dbus send_msg;

#============= glusterd_t ==============

#!!!! This avc can be allowed using the boolean 'daemons_enable_cluster_mode'
allow glusterd_t cluster_t:unix_stream_socket connectto;
allow glusterd_t qemu_var_run_t:sock_file getattr;
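
As the audit2allow output itself notes, the connectto denial does not need a custom module; the existing boolean can be enabled instead. A sketch (-P makes the setting persistent across reboots):

# setsebool -P daemons_enable_cluster_mode on
# getsebool daemons_enable_cluster_mode
daemons_enable_cluster_mode --> on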

Comment 3 Apeksha 2015-07-23 10:49:04 UTC
Created attachment 1055281 [details]
audit.log

Comment 4 Miroslav Grepl 2015-07-23 11:59:43 UTC
1. type=AVC msg=audit(07/23/2015 05:07:56.705:3038) : avc:  denied  { getattr } for  pid=17162 comm=find path=/var/lib/libvirt/qemu/capabilities.monitor.sock dev="dm-1" ino=1179980 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=sock_file 

Is this expected behaviour? I mean, looking for /var/lib/libvirt/qemu/capabilities.monitor.sock?

Comment 5 Kaleb KEITHLEY 2015-07-23 15:08:00 UTC
Do we know the parent process of the find command?

Offhand, I'd say it's not expected. I can't imagine why anything in pacemaker would be running a find like that.
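
One possible way to catch the parent of such a short-lived find, without patching any binaries, would be an audit watch on the executable (a sketch; the key name "find-exec" is arbitrary):

# auditctl -w /usr/bin/find -p x -k find-exec
# ausearch -k find-exec -i | grep -E 'ppid=|exe='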

Comment 6 Saurabh 2015-07-24 06:48:43 UTC
(In reply to Miroslav Grepl from comment #4)
> 1. type=AVC msg=audit(07/23/2015 05:07:56.705:3038) : avc:  denied  {
> getattr } for  pid=17162 comm=find
> path=/var/lib/libvirt/qemu/capabilities.monitor.sock dev="dm-1" ino=1179980
> scontext=system_u:system_r:glusterd_t:s0
> tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=sock_file 
> 
> is this expected behaviour? I mean looking for
> /var/lib/libvirt/qemu/capabilities.monitor.sock?


Kaleb has already responded in comment #5, and we are also not sure whether it is expected or not.


Going forward, we updated the selinux-policy* packages to version 3.13.1-34.el7; this includes updating:
1. selinux-policy
2. selinux-policy-targeted
3. selinux-policy-devel

Then we applied the workaround provided earlier, i.e.:
# cat bz1242487.te
policy_module(bz1242487,1.0)

require {
  type glusterd_t;
  type cluster_t;
  type nfsd_unit_file_t;
  class unix_stream_socket { connectto };
}

allow glusterd_t cluster_t : unix_stream_socket { connectto };


Next we tried to enable nfs-ganesha and export a GlusterFS volume. We then saw that the nfs-ganesha service was up and running and the volume was exported.
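
For reference, those enable/export steps look roughly like this (a sketch; VOLNAME is a placeholder for the actual volume name):

# gluster nfs-ganesha enable
# gluster volume set VOLNAME ganesha.enable on
# showmount -e localhost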

Please provide the selinux-policy RPM for RHGS 3.1 with the workaround included. Also, please let us know whether this will be a new RPM version or the same version with the workaround added to it.

PS: All these latest tests have been done on VMs after a fresh ISO installation.

Also, during this exercise we are seeing a few more AVCs with the denied flag, although as of now they are not hampering the nfs-ganesha service. They are as follows:

# ausearch -m avc -ts recent
----
time->Fri Jul 24 01:11:17 2015
type=SYSCALL msg=audit(1437680477.807:13451): arch=c000003e syscall=262 success=yes exit=0 a0=8 a1=14adb78 a2=14adae8 a3=100 items=0 ppid=7921 pid=8037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="find" exe="/usr/bin/find" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1437680477.807:13451): avc:  denied  { getattr } for  pid=8037 comm="find" path="/var/lib/libvirt/qemu/capabilities.monitor.sock" dev="dm-1" ino=1179980 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:qemu_var_run_t:s0 tclass=sock_file
----
time->Fri Jul 24 01:11:17 2015
type=SYSCALL msg=audit(1437680477.811:13452): arch=c000003e syscall=257 success=yes exit=6 a0=5 a1=14afea8 a2=30900 a3=0 items=0 ppid=7921 pid=8037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="find" exe="/usr/bin/find" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1437680477.811:13452): avc:  denied  { read } for  pid=8037 comm="find" name="sepolgen" dev="dm-1" ino=67573415 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:selinux_config_t:s0 tclass=dir
----
time->Fri Jul 24 01:11:45 2015
type=SYSCALL msg=audit(1437680505.495:13456): arch=c000003e syscall=2 success=yes exit=4 a0=236f0f0 a1=441 a2=1b6 a3=7ffeafa43640 items=0 ppid=9220 pid=9221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="mailx" exe="/usr/bin/mailx" subj=system_u:system_r:sendmail_t:s0-s0:c0.c1023 key=(null)
type=AVC msg=audit(1437680505.495:13456): avc:  denied  { create } for  pid=9221 comm="mailx" name="dead.letter" scontext=system_u:system_r:sendmail_t:s0-s0:c0.c1023 tcontext=system_u:object_r:abrt_var_cache_t:s0 tclass=file
type=AVC msg=audit(1437680505.495:13456): avc:  denied  { add_name } for  pid=9221 comm="mailx" name="dead.letter" scontext=system_u:system_r:sendmail_t:s0-s0:c0.c1023 tcontext=system_u:object_r:abrt_var_cache_t:s0 tclass=dir
type=AVC msg=audit(1437680505.495:13456): avc:  denied  { write } for  pid=9221 comm="mailx" name="Python-2015-07-24-01:11:12-7769" dev="dm-1" ino=202279365 scontext=system_u:system_r:sendmail_t:s0-s0:c0.c1023 tcontext=system_u:object_r:abrt_var_cache_t:s0 tclass=dir

Comment 7 Miroslav Grepl 2015-07-24 08:51:19 UTC
type=AVC msg=audit(1437680505.495:13456): avc:  denied  { create } for  pid=9221 comm="mailx" name="dead.letter" scontext=system_u:system_r:sendmail_t:s0-s0:c0.c1023 tcontext=system_u:object_r:abrt_var_cache_t:s0 tclass=file

is not related and this is another bug.

type=AVC msg=audit(1437680477.811:13452): avc:  denied  { read } for  pid=8037 comm="find" name="sepolgen" dev="dm-1" ino=67573415 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:selinux_config_t:s0 tclass=dir

It looks like there is a find searching for something? It wants to read random directories.

Comment 11 Niels de Vos 2015-07-24 11:53:16 UTC
Using a little helper script to replace the actual find command:

  #!/bin/sh
  #
  # Tracing the find binary, why would it crawl random dirs?
  #
  
  # Record the full process tree that invoked this wrapper
  pstree -a -A $$ > /var/tmp/find.trace.$(date +%s)
  
  # Hand off to the real find, preserving quoted arguments
  exec /usr/bin/find.orig "$@"

This script is /usr/bin/find, and the original binary has been renamed to /usr/bin/find.orig.
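
Putting the wrapper in place looks roughly like this (a sketch; assumes the script above was saved as find-wrapper.sh):

# mv /usr/bin/find /usr/bin/find.orig
# install -m 0755 find-wrapper.sh /usr/bin/find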

When running "gluster nfs-ganesha enable":

  # cat find.trace.1437699451
  systemd --switched-root --system --deserialize 24
    `-glusterd -p /var/run/glusterd.pid
        `-sh /usr/libexec/ganesha/ganesha-ha.sh setup /etc/ganesha
            `-pcs /usr/sbin/pcs cluster setup --name G1437608257.36 nfs1 nfs2
                `-find /usr/bin/find /var/lib -name cib.* -exec rm -f {} ;
                    `-pstree -a -A -s 20942

When running "gluster nfs-ganesha disable":

  # cat find.trace.1437699337
  systemd --switched-root --system --deserialize 24
    `-glusterd -p /var/run/glusterd.pid
        `-sh /usr/libexec/ganesha/ganesha-ha.sh teardown /etc/ganesha
            `-pcs /usr/sbin/pcs cluster destroy
                `-find /usr/bin/find /var/lib -name cib.* -exec rm -f {} ;
                    `-pstree -a -A -s 20705


This shows that the "pcs" command runs "find".

  # which pcs
  /usr/sbin/pcs
  # file /usr/sbin/pcs
  /usr/sbin/pcs: symbolic link to `/usr/lib/python2.7/site-packages/pcs/pcs.py'
  # ls -Z /usr/lib/python2.7/site-packages/pcs/pcs.py
  -rwxr-xr-x. root root system_u:object_r:lib_t:s0       /usr/lib/python2.7/...

Gluster does need to run the "pcs" command in order to configure (or destroy) the pacemaker cluster.

Does this help, or are more details required?

Comment 12 Miroslav Grepl 2015-07-24 12:57:37 UTC
Yes, it's enough.

Comment 15 Apeksha 2015-07-28 07:24:13 UTC
Using selinux-policy-3.13.1-35.el7 on the RHS 3.1 ISO for RHEL 7, I am not seeing these AVCs now.

Comment 18 Apeksha 2015-07-28 15:58:50 UTC
Following the exact steps mentioned in comment 17 with the selinux-policy-3.13.1-23.el7_1.13 build on the RHGS 3.1 RHEL 7 ISO, I don't see these AVCs anymore and am successfully able to set up the ganesha cluster.

Comment 19 Prasanth 2015-07-28 17:11:17 UTC
Apeksha, thanks for testing it and confirming the results.

Comment 25 errata-xmlrpc 2015-11-19 10:41:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2300.html

