Bug 1466144 - [GANESHA] Ganesha setup creation fails due to selinux blocking some services required for setup creation
Summary: [GANESHA] Ganesha setup creation fails due to selinux blocking some services required for setup creation
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: nfs-ganesha
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: Kaleb KEITHLEY
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On: 1435697 1466790 1469027 1471917
Blocks: 1417151 1461098 1466343
 
Reported: 2017-06-29 07:43 UTC by Manisha Saini
Modified: 2017-09-21 05:02 UTC
CC List: 14 users

Fixed In Version: glusterfs-3.8.4-38
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1466343 1466790
Environment:
Last Closed: 2017-09-21 05:02:13 UTC
Embargoed:




Links:
System ID:    Red Hat Product Errata RHBA-2017:2774
Private:      0
Priority:     normal
Status:       SHIPPED_LIVE
Summary:      glusterfs bug fix and enhancement update
Last Updated: 2017-09-21 08:16:29 UTC

Description Manisha Saini 2017-06-29 07:43:01 UTC
Description of problem:
Ganesha setup creation fails because SELinux blocks some services required during setup.


Version-Release number of selected component (if applicable):

# rpm -qa | grep ganesha
nfs-ganesha-debuginfo-2.4.4-10.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-31.el7rhgs.x86_64
nfs-ganesha-2.4.4-10.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.4-10.el7rhgs.x86_64

selinux-policy-3.13.1-164.el7.noarch



How reproducible:
Consistently

Steps to Reproduce:
1. Create a 4-node ganesha setup.


Actual results:
Setup creation fails with the following AVCs:

type=PROCTITLE msg=audit(06/29/2017 13:03:33.671:3659) : proctitle=/usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb read_tokens 
type=SYSCALL msg=audit(06/29/2017 13:03:33.671:3659) : arch=x86_64 syscall=mprotect success=no exit=EACCES(Permission denied) a0=0x7f0990847000 a1=0x1000 a2=PROT_READ|PROT_EXEC a3=0x7ffdbee31c60 items=0 ppid=16841 pid=17109 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=ruby exe=/usr/bin/ruby subj=system_u:system_r:glusterd_t:s0 key=(null) 
type=AVC msg=audit(06/29/2017 13:03:33.671:3659) : avc:  denied  { execmem } for  pid=17109 comm=ruby scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:glusterd_t:s0 tclass=process 


Expected results:
Setup creation should succeed.

Additional info:
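The denial can be confirmed and the relevant boolean checked with the same tools used later in this bug (a sketch; assumes the audit and policycoreutils-python packages are installed):

# ausearch -m avc -ts recent | grep execmem
# semanage boolean -l | grep gluster_use_execmem
gluster_use_execmem            (off  ,  off)  Allow gluster to use execmem

The semanage output above is illustrative: it shows the boolean that, per the later comments, must be on for pcs/pcsd (ruby) to mprotect pages PROT_READ|PROT_EXEC while running in the glusterd_t domain.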

Comment 2 Manisha Saini 2017-06-29 07:46:46 UTC
Verification of BZ [1] is blocked by this issue.

[1]https://bugzilla.redhat.com/show_bug.cgi?id=1461098

Comment 4 Soumya Koduri 2017-06-29 11:32:50 UTC
> Actual results:
> Setup creation fails with following AVC's-
> 
> type=PROCTITLE msg=audit(06/29/2017 13:03:33.671:3659) :
> proctitle=/usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb
> read_tokens 
> type=SYSCALL msg=audit(06/29/2017 13:03:33.671:3659) : arch=x86_64
> syscall=mprotect success=no exit=EACCES(Permission denied) a0=0x7f0990847000
> a1=0x1000 a2=PROT_READ|PROT_EXEC a3=0x7ffdbee31c60 items=0 ppid=16841
> pid=17109 auid=unset uid=root gid=root euid=root suid=root fsuid=root
> egid=root sgid=root fsgid=root tty=(none) ses=unset comm=ruby
> exe=/usr/bin/ruby subj=system_u:system_r:glusterd_t:s0 key=(null) 
> type=AVC msg=audit(06/29/2017 13:03:33.671:3659) : avc:  denied  { execmem }
> for  pid=17109 comm=ruby scontext=system_u:system_r:glusterd_t:s0
> tcontext=system_u:system_r:glusterd_t:s0 tclass=process 
> 

+Lukas

@Lukas,

Could you please take a look and provide your comments? Thanks!

Comment 8 Soumya Koduri 2017-06-30 12:48:40 UTC
Manisha and I tried manually executing the HA script instead of using the glusterd CLI; cluster creation succeeded even though the above AVC was still reported in audit.log. That suggests the issue may not be the SELinux denial itself but something else.

In /var/log/messages, we see pcs crashing with a Python exception, which seems to happen only via the glusterd CLI and only with SELinux in enforcing mode. Not sure how they are related.

Manisha, could you please paste the latest logs and sosreport?

The same errors were logged in the earlier sosreport provided too:

Jun 29 13:03:33 localhost dbus-daemon: dbus[936]: [system] Successfully activated service 'org.freedesktop.problems'
Jun 29 13:03:34 localhost python: detected unhandled Python exception in '/usr/sbin/pcs'
Jun 29 13:03:34 localhost abrt-server: Not saving repeating crash in '/usr/sbin/pcs'
Jun 29 13:03:34 localhost logger: pcs cluster setup --name ganesha-ha-360 dhcp42-125.lab.eng.blr.redhat.com dhcp42-127.lab.eng.blr.redhat.com dhcp42-129.lab.eng.blr.redhat.com dhcp42-119.lab.eng.blr.redhat.com failed

Comment 9 Manisha Saini 2017-06-30 13:06:22 UTC
Crash traceback:


external.py:436:run:UnicodeDecodeError: 'ascii' codec can't decode byte 0xe0 in position 85: ordinal not in range(128)

Traceback (most recent call last):
  File "/usr/sbin/pcs", line 9, in <module>
    load_entry_point('pcs==0.9.157', 'console_scripts', 'pcs')()
  File "/usr/lib/python2.7/site-packages/pcs/app.py", line 191, in main
    cmd_map[command](argv)
  File "/usr/lib/python2.7/site-packages/pcs/cluster.py", line 93, in cluster_cmd
    cluster_auth(argv)
  File "/usr/lib/python2.7/site-packages/pcs/cluster.py", line 196, in cluster_auth
    auth_nodes(argv)
  File "/usr/lib/python2.7/site-packages/pcs/cluster.py", line 229, in auth_nodes
    status = utils.checkAuthorization(node)
  File "/usr/lib/python2.7/site-packages/pcs/utils.py", line 151, in checkAuthorization
    return sendHTTPRequest(node, 'remote/check_auth', None, False, False)
  File "/usr/lib/python2.7/site-packages/pcs/utils.py", line 435, in sendHTTPRequest
    cookies = __get_cookie_list(host, readTokens())
  File "/usr/lib/python2.7/site-packages/pcs/utils.py", line 234, in readTokens
    output, retval = run_pcsdcli("read_tokens")
  File "/usr/lib/python2.7/site-packages/pcs/utils.py", line 1049, in run_pcsdcli
    env_var
  File "/usr/lib/python2.7/site-packages/pcs/lib/external.py", line 436, in run
    out_err=out_err
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe0 in position 85: ordinal not in range(128)

Local variables in innermost frame:
binary_output: False
stdin_string: '{}'
out_std: ''
val: '1'
process: <subprocess.Popen object at 0x2861550>
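The decode failure itself is easy to reproduce in isolation: pcs here runs under Python 2 (see the /usr/lib/python2.7 paths above), whose default ASCII codec rejects any non-ASCII byte, such as the 0xe0 from the traceback, in the output it reads back from pcsd. A sketch:

# python2 -c "'\xe0'.decode('ascii')"
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe0 in position 0: ordinal not in range(128)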

Comment 18 Chris Feist 2017-07-14 17:16:36 UTC
I'm not quite sure what question you're looking for comment on.

But just to recap: running pcs from the command line without ganesha works without any issues. If you do run pcs from ganesha, however, you will need to enable the SELinux boolean; otherwise pcs/pcsd will not be able to access the files it needs. I don't see any fix necessary on the pcs side; I believe it's an issue with the ganesha/SELinux interaction.
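For concreteness, enabling it manually is a one-liner; the boolean is not named above, but per the rest of this bug it is gluster_use_execmem (setsebool ships with policycoreutils):

# setsebool gluster_use_execmem on

Without -P the change is runtime-only and does not persist across reboots, which may be preferable when the permission is only needed during cluster setup.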

Comment 19 Soumya Koduri 2017-07-17 13:16:12 UTC
Thanks Chris! Except that the pcs command ('pcs cluster setup --name ganesha-ha-360') you are referring to is not run from ganesha but from a shell script invoked by the glusterd process.

@Atin, 
any comments?

Comment 23 Kaleb KEITHLEY 2017-07-17 16:29:31 UTC
https://review.gluster.org/17806

Comment 24 Kaleb KEITHLEY 2017-07-19 11:54:32 UTC
we can:

1) set/unset gluster_use_execmem on subpackage (glusterfs-ganesha) install/uninstall.

2) set/unset gluster_use_execmem at the beginning/end of the cluster setup/teardown script (sketched below).

3) get a fix for pcs from pacemaker so that gluster_use_execmem isn't required. (Can we get a fix in time for RHEL7.4 and RHGS-3.3?)
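For option 2, a minimal sketch of what the HA script could do around the pcs invocations (a hypothetical fragment; the actual change is the review in comment 23, and the variable names here are placeholders):

setsebool gluster_use_execmem on
pcs cluster setup --name ${HA_NAME} ${HA_SERVERS}
...
setsebool gluster_use_execmem off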

Comment 25 Kaleb KEITHLEY 2017-07-19 11:57:00 UTC
4) use 1) or 2) above until 3) is available.

Comment 26 Kaleb KEITHLEY 2017-07-19 12:53:13 UTC
5) Document that admin must set/unset gluster_use_execmem before/after setup/teardown of ganesha

Comment 27 Chris Feist 2017-07-19 14:28:04 UTC
(In reply to Kaleb KEITHLEY from comment #24)
> we can:
> 
> 1) set/unset gluster_use_execmem on subpackage (glusterfs-ganesha)
> install/uninstall.
> 
> 2) set/unset gluster_use_execmem at the beginning/end of the cluster
> setup/teardown script.
> 
> 3) get a fix for pcs from pacemaker so that gluster_use_execmem isn't
> required. (Can we get a fix in time for RHEL7.4 and RHGS-3.3?)

#3 is not going to fix your problem. The issue is not pcs; the issue is that SELinux does not let pcs access a file it needs in order to work. If the SELinux boolean is set, everything is fine; if it isn't, pcs won't work no matter what fixes we put in.

Comment 28 Kaleb KEITHLEY 2017-07-19 18:25:23 UTC
So now it's:

1) set/unset gluster_use_execmem on subpackage (glusterfs-ganesha) install/uninstall.

2) set/unset gluster_use_execmem at the beginning/end of the cluster setup/teardown script.

3) Document that admin must set/unset gluster_use_execmem before/after setup/teardown of ganesha

Which should it be?

Comment 34 Manisha Saini 2017-08-03 11:35:04 UTC
Tested with:

# rpm -qa | grep ganesha
nfs-ganesha-2.4.4-16.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.4-16.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-37.el7rhgs.x86_64



Steps:
1. Check the ganesha_use_fusefs boolean before setting up the ganesha cluster.

# semanage boolean -l | grep ganesha
ganesha_use_fusefs             (on   ,   on) 


2. Create a 4-node ganesha cluster via gdeploy.

Result:

The ganesha cluster fails to come up when SELinux is in enforcing mode.

According to the fix, the gluster_use_execmem boolean should be set to ON automatically while the ganesha cluster is being set up and disabled again once setup is done; this is handled in the ganesha-ha.sh script.

But while setting up the ganesha cluster, the boolean fails to get enabled:

# semanage boolean -l | grep gluster
gluster_anon_write             (off  ,  off)  Allow gluster to anon write
gluster_export_all_ro          (off  ,  off)  Allow gluster to export all ro
gluster_use_execmem            (off  ,  off)  Allow gluster to use execmem
gluster_export_all_rw          (on   ,   on)  Allow gluster to export all rw
virt_use_glusterd              (off  ,  off)  Allow virt to use glusterd


audit.log:

#  ausearch -m avc -m user_avc -m selinux_err -i -ts recent 
----
type=USER_AVC msg=audit(08/03/2017 16:54:01.219:1291966) : pid=1 uid=root auid=unset ses=unset subj=system_u:system_r:init_t:s0 msg='avc:  received policyload notice (seqno=26)  exe=/usr/lib/systemd/systemd sauid=root hostname=? addr=? terminal=?' 
----
type=USER_AVC msg=audit(08/03/2017 16:54:01.219:1291967) : pid=1 uid=root auid=unset ses=unset subj=system_u:system_r:init_t:s0 msg='avc:  received policyload notice (seqno=27)  exe=/usr/lib/systemd/systemd sauid=root hostname=? addr=? terminal=?' 
----
type=PROCTITLE msg=audit(08/03/2017 16:56:20.521:1293921) : proctitle=/usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb read_tokens 
type=SYSCALL msg=audit(08/03/2017 16:56:20.521:1293921) : arch=x86_64 syscall=mprotect success=no exit=EACCES(Permission denied) a0=0x7f93fc33d000 a1=0x1000 a2=PROT_READ|PROT_EXEC a3=0x7fff625aeeb0 items=0 ppid=23597 pid=23598 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=ruby exe=/usr/bin/ruby subj=system_u:system_r:glusterd_t:s0 key=(null) 
type=AVC msg=audit(08/03/2017 16:56:20.521:1293921) : avc:  denied  { execmem } for  pid=23598 comm=ruby scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:glusterd_t:s0 tclass=process 
----
type=PROCTITLE msg=audit(08/03/2017 16:56:21.143:1293922) : proctitle=/usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb read_tokens 
type=SYSCALL msg=audit(08/03/2017 16:56:21.143:1293922) : arch=x86_64 syscall=mprotect success=no exit=EACCES(Permission denied) a0=0x7f64801e9000 a1=0x1000 a2=PROT_READ|PROT_EXEC a3=0x7ffc8cfa7660 items=0 ppid=23886 pid=23888 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=ruby exe=/usr/bin/ruby subj=system_u:system_r:glusterd_t:s0 key=(null) 
type=AVC msg=audit(08/03/2017 16:56:21.143:1293922) : avc:  denied  { execmem } for  pid=23888 comm=ruby scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:glusterd_t:s0 tclass=process

Comment 38 Manisha Saini 2017-08-06 10:19:25 UTC
Verified this bug on:

# rpm -qa | grep ganesha
glusterfs-ganesha-3.8.4-38.el7rhgs.x86_64
nfs-ganesha-2.4.4-16.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.4-16.el7rhgs.x86_64


With SELinux in enforcing mode, the gluster_use_execmem boolean turns ON at the time of setting up the ganesha cluster and is disabled once setup creation completes.

Moving this bug to verified state.

Comment 40 errata-xmlrpc 2017-09-21 05:02:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774

