Bug 1822500 - [Ganesha+Selinux] "gluster nfs-ganesha disable" command errors out with "nfs-ganesha: failed: NFS-Ganesha service could notbe stopped"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: nfs-ganesha
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 2
Assignee: Kaleb KEITHLEY
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-04-09 07:59 UTC by Manisha Saini
Modified: 2020-06-16 05:52 UTC
CC List: 12 users

Fixed In Version: nfs-ganesha-2.7.3-13
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-06-16 05:52:41 UTC
Embargoed:


Attachments
selinux policy (8.18 KB, text/plain), 2020-04-14 12:34 UTC, Kaleb KEITHLEY


Links
Red Hat Product Errata RHBA-2020:2576, last updated 2020-06-16 05:52:48 UTC

Comment 4 Manisha Saini 2020-04-14 11:06:56 UTC
"gluster nfs-ganesha disable" command is still failing when selinux is set to ENFORCING mode with build


# rpm -qa | grep ganesha
nfs-ganesha-gluster-2.7.3-12.el8rhgs.x86_64
nfs-ganesha-debuginfo-2.7.3-12.el8rhgs.x86_64
nfs-ganesha-2.7.3-12.el8rhgs.x86_64
nfs-ganesha-selinux-2.7.3-12.el8rhgs.noarch
nfs-ganesha-debugsource-2.7.3-12.el8rhgs.x86_64
glusterfs-ganesha-6.0-32.el8rhgs.x86_64
nfs-ganesha-gluster-debuginfo-2.7.3-12.el8rhgs.x86_64



# gluster nfs-ganesha disable
Disabling NFS-Ganesha will tear down the entire ganesha cluster across the trusted pool. Do you still want to continue?
 (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha: failed: NFS-Ganesha service could notbe stopped.


# cat /var/log/audit/audit.log | grep AVC | grep ganesha
type=USER_AVC msg=audit(1586862238.871:26515): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  denied  { stop } for auid=n/a uid=0 gid=0 path="/usr/lib/systemd/system/nfs-ganesha.service" cmdline="" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:ganesha_unit_file_t:s0 tclass=service permissive=0  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'UID="root" AUID="unset" AUID="root" UID="root" GID="root" SAUID="root"
type=USER_AVC msg=audit(1586862238.872:26516): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  denied  { status } for auid=n/a uid=0 gid=0 path="/usr/lib/systemd/system/nfs-ganesha.service" cmdline="" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:ganesha_unit_file_t:s0 tclass=service permissive=0  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'UID="root" AUID="unset" AUID="root" UID="root" GID="root" SAUID="root"
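
For reference, the denials above can be fed straight to audit2allow to see which rules the loaded policy is missing (a minimal sketch; it assumes audit2allow from the policycoreutils python utilities is installed and the audit log is in its default location):

# grep AVC /var/log/audit/audit.log | grep ganesha | audit2allow

#============= glusterd_t ==============
allow glusterd_t ganesha_unit_file_t:service { status stop };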

Comment 5 Kaleb KEITHLEY 2020-04-14 12:34:23 UTC
Created attachment 1678676 [details]
selinux policy

Comment 6 Kaleb KEITHLEY 2020-04-14 12:41:16 UTC
audit2allow from that fragment says:

#============= glusterd_t ==============
allow glusterd_t ganesha_unit_file_t:service { status stop };





ganesha.te (after applying 0019-selinux-additional-targeted-policy-for-gluster.patch) has:

...
allow glusterd_t ganesha_unit_file_t:service { start status stop };



Note: see attachment for the complete ganesha.te file.

Lukas, Zdenek, any guidance for how to address this? Why didn't this work?

(Previously, adding "allow glusterd_t ganesha_unit_file_t:service start" fixed the AVC for starting.)
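
Should a stop-gap be needed before a fixed nfs-ganesha-selinux build is available, the missing permissions can also be carried in a small local module built from the same rule (a sketch only; the module name glusterd_ganesha_local is arbitrary and the rule duplicates what the fixed ganesha.te is meant to ship):

# cat glusterd_ganesha_local.te
module glusterd_ganesha_local 1.0;

require {
    type glusterd_t;
    type ganesha_unit_file_t;
    class service { start status stop };
}

allow glusterd_t ganesha_unit_file_t:service { start status stop };

# checkmodule -M -m -o glusterd_ganesha_local.mod glusterd_ganesha_local.te
# semodule_package -o glusterd_ganesha_local.pp -m glusterd_ganesha_local.mod
# semodule -i glusterd_ganesha_local.pp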

Comment 7 Zdenek Pytela 2020-04-15 17:08:06 UTC
If there is the setools-console package installed, you can check existing rules with sesearch:

  # sesearch -A -s glusterd_t -t ganesha_unit_file_t -c service

Can you ensure the latest nfs-ganesha package build really contains the 0019* patch and the module is installed and active?

  # semodule -lfull | grep ganesha
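
For reference, on a node where the updated policy module really is loaded, the sesearch check should print the full rule quoted from ganesha.te in comment #6; a rule carrying only "start" (or no rule at all) reproduces the failure. Illustrative output, assuming setools 4 formatting:

# sesearch -A -s glusterd_t -t ganesha_unit_file_t -c service
allow glusterd_t ganesha_unit_file_t:service { start status stop };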

Comment 8 Kaleb KEITHLEY 2020-04-15 17:38:28 UTC
(In reply to Zdenek Pytela from comment #7)
> If there is the setools-console package installed, you can check existing
> rules with sesearch:
> 
>   # sesearch -A -s glusterd_t -t ganesha_unit_file_t -c service
> 
> Can you ensure the latest nfs-ganesha package build really contains the
> 0019* patch and the module is installed and active?
> 
>   # semodule -lfull | grep ganesha

The patch _was_ applied, but the policy apparently did not actually load when the rpm was installed, because the ceph*_t types had been (erroneously?) moved into a require { ... } block. I guess it would have worked if the ceph-selinux package had been installed.

My attempts to load or reload the bad policy manually, e.g. with `semodule -i /var/lib/selinux/packages/ganesha.pp.bz2`, all failed, but I could not find any indication in the system log files that the policy load failed when QE did the dnf/rpm install, nor did QE report seeing any errors.

After removing the ceph*_t types from the require { ... } block and rebuilding the policy I can see that a) it now loads successfully; and b) by extracting the policy to a cil file I can see the line "(allow glusterd_t ganesha_unit_file_t (service (start stop status)))", where previously it only had start.
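
The CIL check described above can be repeated on any node with semodule's extract option (a sketch, assuming the RHEL 8 semodule, which supports --cil and --extract; the module is written to the current directory):

# semodule --cil --extract=ganesha
# grep 'ganesha_unit_file_t (service' ganesha.cil
(allow glusterd_t ganesha_unit_file_t (service (start stop status)))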

Is there a better way to write the policy with the ceph*_t types without adding them to a require {} block and also not redundantly declaring/defining them outside the require {} block? Maybe it's okay for them to be defined outside the require {} block? AFAICT it doesn't hurt anything.
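
One pattern used in refpolicy-style .te files for types owned by another package is to keep those rules inside an optional_policy block: the block is simply skipped at link time if the required types are not defined, so the module still loads when ceph-selinux is absent. A sketch, assuming ganesha.te is built with the refpolicy macros (policy_module/gen_require); the rule body is purely hypothetical:

optional_policy(`
    gen_require(`
        type ceph_t;
    ')
    # hypothetical rule; only linked in when ceph_t exists in the loaded policy
    allow glusterd_t ceph_t:process signal;
')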

Comment 10 Manisha Saini 2020-04-20 19:56:15 UTC
Verified this BZ with the following build:

# rpm -qa | grep ganesha
nfs-ganesha-gluster-2.7.3-13.el8rhgs.x86_64
nfs-ganesha-debuginfo-2.7.3-13.el8rhgs.x86_64
nfs-ganesha-2.7.3-13.el8rhgs.x86_64
nfs-ganesha-selinux-2.7.3-13.el8rhgs.noarch
nfs-ganesha-debugsource-2.7.3-13.el8rhgs.x86_64
glusterfs-ganesha-6.0-32.el8rhgs.x86_64
nfs-ganesha-gluster-debuginfo-2.7.3-13.el8rhgs.x86_64


--------

# getenforce
Enforcing




# pcs status
Cluster name: ganesha-ha-360
Cluster Summary:
  * Stack: corosync
  * Current DC: dhcp35-63.lab.eng.blr.redhat.com (version 2.0.3-5.el8-4b1f869f0f) - partition with quorum
  * Last updated: Mon Apr 20 15:50:53 2020
  * Last change:  Mon Apr 20 15:40:58 2020 by root via cibadmin on dhcp35-76.lab.eng.blr.redhat.com
  * 4 nodes configured
  * 24 resource instances configured

Node List:
  * Online: [ dhcp35-21.lab.eng.blr.redhat.com dhcp35-63.lab.eng.blr.redhat.com dhcp35-76.lab.eng.blr.redhat.com dhcp35-134.lab.eng.blr.redhat.com ]

Full List of Resources:
  * Clone Set: nfs_setup-clone [nfs_setup]:
    * Started: [ dhcp35-21.lab.eng.blr.redhat.com dhcp35-63.lab.eng.blr.redhat.com dhcp35-76.lab.eng.blr.redhat.com dhcp35-134.lab.eng.blr.redhat.com ]
  * Clone Set: nfs-mon-clone [nfs-mon]:
    * Started: [ dhcp35-21.lab.eng.blr.redhat.com dhcp35-63.lab.eng.blr.redhat.com dhcp35-76.lab.eng.blr.redhat.com dhcp35-134.lab.eng.blr.redhat.com ]
  * Clone Set: nfs-grace-clone [nfs-grace]:
    * Started: [ dhcp35-21.lab.eng.blr.redhat.com dhcp35-63.lab.eng.blr.redhat.com dhcp35-76.lab.eng.blr.redhat.com dhcp35-134.lab.eng.blr.redhat.com ]
  * Resource Group: dhcp35-76.lab.eng.blr.redhat.com-group:
    * dhcp35-76.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started dhcp35-76.lab.eng.blr.redhat.com
    * dhcp35-76.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started dhcp35-76.lab.eng.blr.redhat.com
    * dhcp35-76.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Started dhcp35-76.lab.eng.blr.redhat.com
  * Resource Group: dhcp35-21.lab.eng.blr.redhat.com-group:
    * dhcp35-21.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started dhcp35-21.lab.eng.blr.redhat.com
    * dhcp35-21.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started dhcp35-21.lab.eng.blr.redhat.com
    * dhcp35-21.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Started dhcp35-21.lab.eng.blr.redhat.com
  * Resource Group: dhcp35-63.lab.eng.blr.redhat.com-group:
    * dhcp35-63.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started dhcp35-63.lab.eng.blr.redhat.com
    * dhcp35-63.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started dhcp35-63.lab.eng.blr.redhat.com
    * dhcp35-63.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Started dhcp35-63.lab.eng.blr.redhat.com
  * Resource Group: dhcp35-134.lab.eng.blr.redhat.com-group:
    * dhcp35-134.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started dhcp35-134.lab.eng.blr.redhat.com
    * dhcp35-134.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started dhcp35-134.lab.eng.blr.redhat.com
    * dhcp35-134.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Started dhcp35-134.lab.eng.blr.redhat.com

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled




# gluster nfs-ganesha disable
Disabling NFS-Ganesha will tear down the entire ganesha cluster across the trusted pool. Do you still want to continue?
 (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success 





# cat /var/log/audit/audit.log | grep AVC | grep ganesha




# pcs status
Error: error running crm_mon, is pacemaker running?
  Error: cluster is not available on this node


"Gluster nfs-ganesha disable" command works as expected. Moving this BZ to verified state

Comment 12 errata-xmlrpc 2020-06-16 05:52:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2576

