Bug 1466790
Summary: [GANESHA] Ganesha setup creation fails due to selinux blocking some services required for setup creation
Product: Red Hat Enterprise Linux 7
Component: pcs
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Status: CLOSED DUPLICATE
Severity: unspecified
Priority: urgent
Reporter: Manisha Saini <msaini>
Assignee: Tomas Jelinek <tojeline>
QA Contact: cluster-qe <cluster-qe>
CC: cfeist, cluster-maint, idevat, jpokorny, jthottan, kkeithle, lvrabec, msaini, omular, ovasik, rcyriac, rhs-bugs, sisharma, skoduri, storage-qa-internal, tojeline
Target Milestone: rc
Target Release: ---
Keywords: Regression, SELinux
Doc Type: If docs needed, set a value
Story Points: ---
Clone Of: 1466144
: 1469027 (view as bug list)
Last Closed: 2017-07-21 11:23:24 UTC
Type: Bug
Regression: ---
Mount Type: ---
Bug Blocks: 1461098, 1466144, 1466343
Comment 3
Jan Pokorný [poki]
2017-06-30 14:16:21 UTC
Jan,

This issue is with RHEL 7 with the default nfs-ganesha configuration. I didn't modify anything while setting up the ganesha cluster.

# cat ganesha-ha.conf.sample
# Name of the HA cluster created.
# must be unique within the subnet and 15 characters or less in length
HA_NAME="ganesha-ha-360"

re [comment 4]:

Manisha,
colour me sorry for jumping to conclusions prematurely (in fact, I mistakenly [Friday syndrome perhaps] related this to another report I had observed just the previous day -- and that was exactly of the sort I described).

Now, if you still have that machine untouched around, can you please run the following (which is what made "pcs cluster setup" choke)?

echo '{}' | GEM_HOME=/usr/lib/pcsd/vendor/bundle/ruby \
  /usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb read_tokens

What's especially of interest is whether there will be any such character as I mentioned in [comment 3].

As a next step, I would incorporate that command into the /usr/libexec/ganesha/ganesha-ha.sh file, directly before the "pcs cluster setup" invocation, using similar error logging for good measure -- the intention here being to exercise the behaviour when running under the respective SELinux context (system_u:system_r:glusterd_t:s0 ?).

Note that there is no dedicated SELinux label for /usr/sbin/pcs (nor /usr/lib/pcsd/pcsd-cli.rb, for that matter), which may be the source of issues when the calling parent is _not_ unconfined (such as when you run pcs directly on the command line).

Please, report back with the findings.
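The non-ASCII check asked for above can be sketched as a small shell test. This is a hedged sketch, not part of the bug report: the `sample` variable is a hypothetical stand-in for the captured pcsd-cli output, and the idea is simply that deleting every ASCII byte leaves behind exactly the bytes that would make pcs choke.

```shell
# Stand-in for the output captured from pcsd-cli.rb (a real run would capture
# something like: sample=$(echo '{}' | ... pcsd-cli.rb read_tokens 2>&1)).
sample='{"status": "ok", "data": {}}'

# Delete every ASCII byte (octal 000-177); anything left over is non-ASCII.
leftover=$(printf '%s' "$sample" | tr -d '\000-\177')

if [ -z "$leftover" ]; then
    echo "output is pure ASCII"
else
    echo "non-ASCII bytes present"
fi
```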
(In reply to Jan Pokorný from comment #6)

Output on the nodes failing:

# echo '{}' | GEM_HOME=/usr/lib/pcsd/vendor/bundle/ruby \
  /usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb read_tokens
{
  "status": "ok",
  "data": {
    "dhcp42-107.lab.eng.blr.redhat.com": "f47e7bec-42da-4957-96c9-7a7d430b63b0",
    "dhcp42-114.lab.eng.blr.redhat.com": "d9f38d6d-381e-4dcb-a9c8-7eb27762303f",
    "dhcp42-117.lab.eng.blr.redhat.com": "5e52ea87-7bb8-40aa-9895-3e3fe505ecc2",
    "dhcp42-119.lab.eng.blr.redhat.com": "ba2605a6-8dc4-41a9-868a-99c17ca53c4a",
    "dhcp42-125.lab.eng.blr.redhat.com": "fbcc1e7c-4e8c-418f-bb83-6b3e34223bb3",
    "dhcp42-127.lab.eng.blr.redhat.com": "e0f5bde0-5bdc-4422-8779-ddcdcee76b10",
    "dhcp42-129.lab.eng.blr.redhat.com": "1c3e334c-3846-4a93-8d3a-d41f0c3a7b32",
    "dhcp42-88.lab.eng.blr.redhat.com": "b817fc56-a149-4abf-8d38-f0d9e90c72ff"
  },
  "log": [
    "I, [2017-07-04T14:58:02.919709 #19749] INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_name\n",
    "I, [2017-07-04T14:58:02.919789 #19749] INFO -- : CIB USER: hacluster, groups: \n",
    "I, [2017-07-04T14:58:02.922303 #19749] INFO -- : Return Value: 1\n",
    "W, [2017-07-04T14:58:02.922367 #19749] WARN -- : Cannot read config 'corosync.conf' from '/etc/corosync/corosync.conf': No such file\n",
    "W, [2017-07-04T14:58:02.922412 #19749] WARN -- : Cannot read config 'corosync.conf' from '/etc/corosync/corosync.conf': No such file or directory - /etc/corosync/corosync.conf\n"
  ]
}

Thanks, Manisha.

There doesn't seem to be any non-ASCII character in the output. But it was also run as an unconfined user (root) at that point, if I understand it correctly, and provided you haven't changed the environment in any way since then.

So it's now even more convincing that when the same command is run within some confined domain (directly or via pcs), ruby (or libselinux*?) will produce a non-ASCII character on either std{out,err}, which then panics pcs.

Manisha, could you also follow the "As a next step" part of [comment 6]? Perhaps it could even be reduced to:

echo '{}' | GEM_HOME=/usr/lib/pcsd/vendor/bundle/ruby \
  runcon system_u:system_r:glusterd_t:s0 \
  /usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb read_tokens \
  >pcsd.out 2>pcsd.err
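The `>pcsd.out 2>pcsd.err` capture in that command splits the two streams so that any stray (possibly non-ASCII) diagnostic can be attributed to stdout or stderr. A minimal sketch of the pattern, with a stand-in command in place of the real runcon/ruby invocation so the streams are predictable; the pcsd.out/pcsd.err filenames just mirror the ones above:

```shell
# Stand-in for 'runcon ... ruby ... pcsd-cli.rb read_tokens', emitting one
# line to each stream so the redirections can be observed.
{ echo "token data on stdout"; echo "diagnostic on stderr" >&2; } \
    >pcsd.out 2>pcsd.err
rc=$?

echo "exit status: $rc"
echo "pcsd.out contains: $(cat pcsd.out)"
echo "pcsd.err contains: $(cat pcsd.err)"
```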
(In reply to Jan Pokorný from comment #10)

After adding this to /usr/libexec/ganesha/ganesha-ha.sh:

===========================
pcs cluster auth ${servers}
# pcs cluster setup --name ${name} ${servers}
echo '{}' | GEM_HOME=/usr/lib/pcsd/vendor/bundle/ruby \
  runcon system_u:system_r:glusterd_t:s0 \
  /usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb read_tokens \
  >pcsd.out 2>pcsd.err
pcs cluster setup ${RHEL6_PCS_CNAME_OPTION} ${name} --transport udpu ${servers}
if [ $? -ne 0 ]; then
    logger "pcs cluster setup ${RHEL6_PCS_CNAME_OPTION} ${name} ${servers} failed"
    exit 1;
===================

# cat pcsd.err
runcon: invalid context: system_u:system_r:glusterd_t:s0: Permission denied

The issue here appears to be a problem with SELinux not allowing access to /var/lib/pcsd and/or /var/lib/pcsd/tokens. Even if the ASCII bug in pcsd is fixed, the SELinux issue will need to be resolved for pcsd to work properly.

Lukas, can you please provide your input here on comment #12?

Manisha,

Could you run the scenario like in comment #11 and then attach the output of:

# ausearch -m AVC,USER_AVC,SELINUX_ERR -ts today

Thanks,
Lukas.

(In reply to Lukas Vrabec from comment #14)

# ausearch -m AVC,USER_AVC,SELINUX_ERR -ts today
----
time->Mon Jul 10 04:32:26 2017
type=USER_AVC msg=audit(1499675546.179:13630): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc: received setenforce notice (enforcing=1) exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'
----
time->Mon Jul 10 04:32:28 2017
type=PROCTITLE msg=audit(1499675548.663:13634): proctitle=2F7573722F62696E2F72756279002D492F7573722F6C69622F706373642F002F7573722F6C69622F706373642F706373642D636C692E726200726561645F746F6B656E73
type=SYSCALL msg=audit(1499675548.663:13634): arch=c000003e syscall=10 success=no exit=-13 a0=7f99e0d1e000 a1=1000 a2=5 a3=7ffe69ba6ba0 items=0 ppid=8857 pid=8858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ruby" exe="/usr/bin/ruby" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1499675548.663:13634): avc: denied { execmem } for pid=8858 comm="ruby" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:glusterd_t:s0 tclass=process
----
time->Mon Jul 10 04:33:18 2017
type=PROCTITLE msg=audit(1499675598.189:13664): proctitle=2F7573722F62696E2F72756279002D492F7573722F6C69622F706373642F002F7573722F6C69622F706373642F706373642D636C692E726200726561645F746F6B656E73
type=SYSCALL msg=audit(1499675598.189:13664): arch=c000003e syscall=10 success=no exit=-13 a0=7fa90bd92000 a1=1000 a2=5 a3=7ffffaef82b0 items=0 ppid=9855 pid=10012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ruby" exe="/usr/bin/ruby" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1499675598.189:13664): avc: denied { execmem } for pid=10012 comm="ruby" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:glusterd_t:s0 tclass=process

Thanks.

And please correct me: is this access needed in the default glusterd configuration, or just in some specific configuration? The reason I'm asking is to determine whether it's enough to create a boolean, e.g. "gluster_execmem", that would be switched on only in some configurations, or whether this should be allowed for gluster by default.

Thanks,
Lukas.
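Incidentally, the proctitle= fields in the audit records above are hex-encoded, with NUL bytes separating the argv entries; decoding one confirms which command was denied. A sketch in bash (the hex string is copied verbatim from the audit record, and the decode loop here is just an illustration, not part of the audit tooling):

```shell
# proctitle value copied verbatim from the AVC record above.
hex=2F7573722F62696E2F72756279002D492F7573722F6C69622F706373642F002F7573722F6C69622F706373642F706373642D636C692E726200726561645F746F6B656E73

decoded=""
rest=$hex
while [ -n "$rest" ]; do
    byte=${rest:0:2}
    rest=${rest:2}
    if [ "$byte" = "00" ]; then
        decoded="$decoded "   # NUL separates argv entries; render as a space
    else
        # Convert the hex byte to octal, then print it via printf's \NNN escape.
        decoded="$decoded$(printf "\\$(printf '%03o' "0x$byte")")"
    fi
done

echo "$decoded"   # → /usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb read_tokens
```

This matches the read_tokens invocation discussed throughout the bug, i.e. the execmem denial is hitting the pcsd-cli.rb run itself.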