Bug 1822500
| Field | Value | Field | Value |
| --- | --- | --- | --- |
| Summary | [Ganesha+Selinux] "gluster nfs-ganesha disable" command errors out with "nfs-ganesha: failed: NFS-Ganesha service could not be stopped" | | |
| Product | [Red Hat Storage] Red Hat Gluster Storage | Reporter | Manisha Saini <msaini> |
| Component | nfs-ganesha | Assignee | Kaleb KEITHLEY <kkeithle> |
| Status | CLOSED ERRATA | QA Contact | Manisha Saini <msaini> |
| Severity | high | Docs Contact | |
| Priority | unspecified | | |
| Version | rhgs-3.5 | CC | dang, grajoria, jthottan, kkeithle, lvrabec, mbenjamin, pasik, pprakash, rhs-bugs, skoduri, storage-qa-internal, zpytela |
| Target Milestone | --- | Keywords | ZStream |
| Target Release | RHGS 3.5.z Batch Update 2 | | |
| Hardware | Unspecified | | |
| OS | Unspecified | | |
| Whiteboard | | | |
| Fixed In Version | nfs-ganesha-2.7.3-13 | Doc Type | No Doc Update |
| Doc Text | | Story Points | --- |
| Clone Of | | Environment | |
| Last Closed | 2020-06-16 05:52:41 UTC | Type | Bug |
| Regression | --- | Mount Type | --- |
| Documentation | --- | CRM | |
| Verified Versions | | Category | --- |
| oVirt Team | --- | RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- | Target Upstream Version | |
| Embargoed | | | |
| Attachments | selinux policy (attachment 1678676) | | |
Comment 4
Manisha Saini
2020-04-14 11:06:56 UTC
Created attachment 1678676 [details]
selinux policy
audit2allow from that fragment says:

```
#============= glusterd_t ==============
allow glusterd_t ganesha_unit_file_t:service { status stop };
```

ganesha.te (after applying 0019-selinux-additional-targeted-policy-for-gluster.patch) has:

```
...
allow glusterd_t ganesha_unit_file_t:service { start status stop };
```

Note: see the attachment for the complete ganesha.te file.

Lukas, Zdenek, any guidance on how to address this? Why didn't this work? (Previously, adding "allow glusterd_t ganesha_unit_file_t:service start" fixed the AVC for starting.)

If the setools-console package is installed, you can check the existing rules with sesearch:

```
# sesearch -A -s glusterd_t -t ganesha_unit_file_t -c service
```

Can you ensure that the latest nfs-ganesha package build really contains the 0019* patch and that the module is installed and active?

```
# semodule -lfull | grep ganesha
```

(In reply to Zdenek Pytela from comment #7)
> If there is the setools-console package installed, you can check existing
> rules with sesearch:
>
> # sesearch -A -s glusterd_t -t ganesha_unit_file_t -c service
>
> Can you ensure the latest nfs-ganesha package build really contains the
> 0019* patch and the module is installed and active?
>
> # semodule -lfull | grep ganesha

The patch _was_ applied, but the policy apparently did not actually load when the rpm was installed, evidently because the ceph*_t types had been (erroneously?) moved into a require { ... } block. I suspect it would have worked if the ceph-selinux package had been installed. My attempts to load or reload the bad policy manually, e.g. with `semodule -i /var/lib/selinux/packages/ganesha.pp.bz2`, all failed, but I could not find any indication in any of the system log files that the policy load failed when QE did the dnf/rpm install. Nor did QE report any errors.

After removing the ceph*_t types from the require { ... } block and rebuilding the policy, I can see that (a) it now loads successfully, and (b) extracting the policy to a CIL file shows the line "(allow glusterd_t ganesha_unit_file_t (service (start stop status)))", where previously it had only start.

Is there a better way to write the policy with the ceph*_t types, without adding them to a require {} block and also without redundantly declaring/defining them outside the require {} block? Or maybe it's okay for them to be defined outside the require {} block? AFAICT it doesn't hurt anything.
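For reference, a minimal sketch of the optional-block idiom that the question above is circling around, written in raw SELinux module language. This is illustrative only: the module name and the `allow glusterd_t ceph_t:process signal;` rule are hypothetical placeholders, not content from the shipped ganesha.te.

```
module ganesha_sketch 1.0;

require {
    type glusterd_t;
    type ganesha_unit_file_t;
    class service { start status stop };
}

# The rule this bug is about: glusterd must be able to start, stop,
# and query the status of the nfs-ganesha systemd service unit.
allow glusterd_t ganesha_unit_file_t:service { start status stop };

# Types owned by another policy package (the ceph*_t types come from
# ceph-selinux) should not sit in the top-level require block: if that
# package is absent, the whole module fails to load, which is the
# failure mode described above. An optional block scopes the
# requirement, so the enclosed rules are skipped when the types do not
# exist instead of aborting the load.
optional {
    require {
        type ceph_t;
    }
    # Hypothetical placeholder rule, not the actual ganesha.te content.
    allow glusterd_t ceph_t:process signal;
}
```

Assuming the stock checkmodule/semodule_package tooling, and a policycoreutils new enough to support CIL extraction, the sketch should compile and load even on a host without ceph-selinux installed, which is exactly the behaviour the bare require block breaks:

```
# checkmodule -M -m -o ganesha_sketch.mod ganesha_sketch.te
# semodule_package -o ganesha_sketch.pp -m ganesha_sketch.mod
# semodule -i ganesha_sketch.pp
# semodule -c -E ganesha_sketch && grep service ganesha_sketch.cil
```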
Verified this BZ with:

```
# rpm -qa | grep ganesha
nfs-ganesha-gluster-2.7.3-13.el8rhgs.x86_64
nfs-ganesha-debuginfo-2.7.3-13.el8rhgs.x86_64
nfs-ganesha-2.7.3-13.el8rhgs.x86_64
nfs-ganesha-selinux-2.7.3-13.el8rhgs.noarch
nfs-ganesha-debugsource-2.7.3-13.el8rhgs.x86_64
glusterfs-ganesha-6.0-32.el8rhgs.x86_64
nfs-ganesha-gluster-debuginfo-2.7.3-13.el8rhgs.x86_64

# getenforce
Enforcing

# pcs status
Cluster name: ganesha-ha-360
Cluster Summary:
  * Stack: corosync
  * Current DC: dhcp35-63.lab.eng.blr.redhat.com (version 2.0.3-5.el8-4b1f869f0f) - partition with quorum
  * Last updated: Mon Apr 20 15:50:53 2020
  * Last change: Mon Apr 20 15:40:58 2020 by root via cibadmin on dhcp35-76.lab.eng.blr.redhat.com
  * 4 nodes configured
  * 24 resource instances configured

Node List:
  * Online: [ dhcp35-21.lab.eng.blr.redhat.com dhcp35-63.lab.eng.blr.redhat.com dhcp35-76.lab.eng.blr.redhat.com dhcp35-134.lab.eng.blr.redhat.com ]

Full List of Resources:
  * Clone Set: nfs_setup-clone [nfs_setup]:
    * Started: [ dhcp35-21.lab.eng.blr.redhat.com dhcp35-63.lab.eng.blr.redhat.com dhcp35-76.lab.eng.blr.redhat.com dhcp35-134.lab.eng.blr.redhat.com ]
  * Clone Set: nfs-mon-clone [nfs-mon]:
    * Started: [ dhcp35-21.lab.eng.blr.redhat.com dhcp35-63.lab.eng.blr.redhat.com dhcp35-76.lab.eng.blr.redhat.com dhcp35-134.lab.eng.blr.redhat.com ]
  * Clone Set: nfs-grace-clone [nfs-grace]:
    * Started: [ dhcp35-21.lab.eng.blr.redhat.com dhcp35-63.lab.eng.blr.redhat.com dhcp35-76.lab.eng.blr.redhat.com dhcp35-134.lab.eng.blr.redhat.com ]
  * Resource Group: dhcp35-76.lab.eng.blr.redhat.com-group:
    * dhcp35-76.lab.eng.blr.redhat.com-nfs_block (ocf::heartbeat:portblock): Started dhcp35-76.lab.eng.blr.redhat.com
    * dhcp35-76.lab.eng.blr.redhat.com-cluster_ip-1 (ocf::heartbeat:IPaddr): Started dhcp35-76.lab.eng.blr.redhat.com
    * dhcp35-76.lab.eng.blr.redhat.com-nfs_unblock (ocf::heartbeat:portblock): Started dhcp35-76.lab.eng.blr.redhat.com
  * Resource Group: dhcp35-21.lab.eng.blr.redhat.com-group:
    * dhcp35-21.lab.eng.blr.redhat.com-nfs_block (ocf::heartbeat:portblock): Started dhcp35-21.lab.eng.blr.redhat.com
    * dhcp35-21.lab.eng.blr.redhat.com-cluster_ip-1 (ocf::heartbeat:IPaddr): Started dhcp35-21.lab.eng.blr.redhat.com
    * dhcp35-21.lab.eng.blr.redhat.com-nfs_unblock (ocf::heartbeat:portblock): Started dhcp35-21.lab.eng.blr.redhat.com
  * Resource Group: dhcp35-63.lab.eng.blr.redhat.com-group:
    * dhcp35-63.lab.eng.blr.redhat.com-nfs_block (ocf::heartbeat:portblock): Started dhcp35-63.lab.eng.blr.redhat.com
    * dhcp35-63.lab.eng.blr.redhat.com-cluster_ip-1 (ocf::heartbeat:IPaddr): Started dhcp35-63.lab.eng.blr.redhat.com
    * dhcp35-63.lab.eng.blr.redhat.com-nfs_unblock (ocf::heartbeat:portblock): Started dhcp35-63.lab.eng.blr.redhat.com
  * Resource Group: dhcp35-134.lab.eng.blr.redhat.com-group:
    * dhcp35-134.lab.eng.blr.redhat.com-nfs_block (ocf::heartbeat:portblock): Started dhcp35-134.lab.eng.blr.redhat.com
    * dhcp35-134.lab.eng.blr.redhat.com-cluster_ip-1 (ocf::heartbeat:IPaddr): Started dhcp35-134.lab.eng.blr.redhat.com
    * dhcp35-134.lab.eng.blr.redhat.com-nfs_unblock (ocf::heartbeat:portblock): Started dhcp35-134.lab.eng.blr.redhat.com

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

# gluster nfs-ganesha disable
Disabling NFS-Ganesha will tear down the entire ganesha cluster across the trusted pool. Do you still want to continue?
 (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success

# cat /var/log/audit/audit.log | grep AVC | grep ganesha

# pcs status
Error: error running crm_mon, is pacemaker running?
  Error: cluster is not available on this node
```

The "gluster nfs-ganesha disable" command works as expected. Moving this BZ to the verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2576