Bug 1749898
| Summary: | Unable to mount glusterfs at boot when specifying security context | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Deepu K S <dkochuka> |
| Component: | selinux-policy | Assignee: | Zdenek Pytela <zpytela> |
| Status: | CLOSED WONTFIX | QA Contact: | Milos Malik <mmalik> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 7.7 | CC: | cww, dkochuka, lvrabec, mmalik, ndevos, plautrba, ssekidde, vmojzis, zpytela |
| Target Milestone: | rc | Keywords: | Reopened |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1753626 (view as bug list) | Environment: | |
| Last Closed: | 2019-12-05 22:19:37 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1753626 | | |
Description (Deepu K S, 2019-09-06 17:36:41 UTC)
# cat my-glusterfs.te
module my-glusterfs 1.0;
require {
type fusefs_t;
type glusterd_t;
type httpd_sys_rw_content_t;
class filesystem { relabelfrom relabelto };
}
#============= glusterd_t ==============
#!!!! This avc is allowed in the current policy
allow glusterd_t fusefs_t:filesystem relabelfrom;
allow glusterd_t httpd_sys_rw_content_t:filesystem relabelto;
# semodule -i my-glusterfs.pp
# semodule -l | grep -i gluster
glusterd 1.1.2
my-glusterfs 1.0
I added this module and ran a relabel, but it did not help.
This happens only with glusterfs; the context mount option works for other local and NFS filesystems.
# ls -ldZ /mnt/nfs-mount
drwxr-xr-x. root root system_u:object_r:httpd_sys_rw_content_t:s0 /mnt/nfs-mount
# ls -ldZ /var/lib/pulp1/content
drwxr-xr-x. root root system_u:object_r:httpd_sys_rw_content_t:s0 /var/lib/pulp1/content
# df -HT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/rhel_vm251--241-root xfs 19G 8.6G 9.7G 47% /
example.redhat.com:/opt/nfs-export nfs4 45G 41G 4.7G 90% /mnt/nfs-mount
/dev/sdb xfs 11G 34M 11G 1% /var/lib/pulp1/content
/dev/sda1 xfs 1.1G 280M 785M 27% /boot
# mount | egrep "nfs|gluster|sdb"
/dev/sdb on /var/lib/pulp1/content type xfs (rw,relatime,context=system_u:object_r:httpd_sys_rw_content_t:s0,attr2,inode64,noquota,_netdev)
example.redhat.com:/opt/nfs-export on /mnt/nfs-mount type nfs4 (rw,relatime,context=system_u:object_r:httpd_sys_rw_content_t:s0,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.74.251.241,local_lock=none,addr=10.74.253.234,_netdev)
After running mount -a, it gets mounted:
gluster-1:dk-dr-v3/loco on /var/lib/pulp/content type fuse.glusterfs (rw,relatime,context=system_u:object_r:httpd_sys_rw_content_t:s0,user_id=0,group_id=0,allow_other,max_read=131072)
May I know why this fails at boot but works when mounted manually?
Do we need to add this to selinux-policy?
Thanks.
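For reference, this is a hedged sketch of what an extended local module could look like if further denials (for example, mount on the target context) show up alongside the relabel pair. The type names are taken from the module above; the mount permission is an assumption here, and this is not the shipped selinux-policy fix:

```
module my-glusterfs 1.1;

require {
	type fusefs_t;
	type glusterd_t;
	type httpd_sys_rw_content_t;
	class filesystem { mount relabelfrom relabelto };
}

# Allow glusterd to relabel the fuse filesystem away from the
# default fusefs_t, and to mount and relabel it with the
# requested context= label. The { mount } permission is an
# assumption; regenerate from your actual AVCs with audit2allow.
allow glusterd_t fusefs_t:filesystem relabelfrom;
allow glusterd_t httpd_sys_rw_content_t:filesystem { mount relabelfrom relabelto };
```

One common way to build and load such a module is: checkmodule -M -m -o my-glusterfs.mod my-glusterfs.te, then semodule_package -o my-glusterfs.pp -m my-glusterfs.mod, then semodule -i my-glusterfs.pp.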
---

Csaba Henk (comment #3):

Niels, have you seen anything like this?

---

Niels de Vos (comment #4):

(In reply to Csaba Henk from comment #3)
> Niels, have you seen anything like this?

No, I have not. I can only guess that this is caused by the mount process going through systemd, in case it has fewer privileges than running 'mount -a' as root after booting. The full SELinux output might be helpful, as mentioned in the log:

For complete SELinux messages run: sealert -l 57d1e82c-8d02-40c5-b0ce-2875219d7a88

---

Deepu K S:

(In reply to Niels de Vos from comment #4)
> The full SELinux output might be helpful, as mentioned in the log:
> For complete SELinux messages run: sealert -l 57d1e82c-8d02-40c5-b0ce-2875219d7a88

This is from the client node after a reboot.

Sep 13 19:10:22 example setroubleshoot: SELinux is preventing /usr/sbin/glusterfsd from relabelfrom access on the filesystem. For complete SELinux messages run: sealert -l 4d72b595-53be-47ed-88ff-188a3fc09635
Sep 13 19:10:22 example python: SELinux is preventing /usr/sbin/glusterfsd from relabelfrom access on the filesystem.

# sealert -l 4d72b595-53be-47ed-88ff-188a3fc09635
SELinux is preventing /usr/sbin/glusterfsd from relabelfrom access on the filesystem.

***** Plugin catchall (100. confidence) suggests **************************

If you believe that glusterfsd should be allowed relabelfrom access on the filesystem by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do allow this access for now by executing:
# ausearch -c 'glusterfs' --raw | audit2allow -M my-glusterfs
# semodule -i my-glusterfs.pp

Additional Information:
Source Context                system_u:system_r:glusterd_t:s0
Target Context                system_u:object_r:httpd_sys_rw_content_t:s0
Target Objects                [ filesystem ]
Source                        glusterfs
Source Path                   /usr/sbin/glusterfsd
Port                          <Unknown>
Host                          localhost.localdomain
Source RPM Packages           glusterfs-fuse-3.12.2-47.4.el7.x86_64
Target RPM Packages
Policy RPM                    selinux-policy-3.13.1-252.el7.1.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     example.redhat.com
Platform                      Linux example.redhat.com 3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51 UTC 2018 x86_64 x86_64
Alert Count                   9
First Seen                    2019-09-09 19:08:31 IST
Last Seen                     2019-09-14 00:40:16 IST
Local ID                      4d72b595-53be-47ed-88ff-188a3fc09635

Raw Audit Messages
type=AVC msg=audit(1568401816.490:110): avc: denied { relabelfrom } for pid=4040 comm="glusterfs" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:httpd_sys_rw_content_t:s0 tclass=filesystem permissive=0
type=SYSCALL msg=audit(1568401816.490:110): arch=x86_64 syscall=mount success=no exit=EACCES a0=564fd81ae0b0 a1=564fd81ade80 a2=7f9a2344b340 a3=0 items=0 ppid=3886 pid=4040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=glusterfs exe=/usr/sbin/glusterfsd subj=system_u:system_r:glusterd_t:s0 key=(null)

Hash: glusterfs,glusterd_t,httpd_sys_rw_content_t,filesystem,relabelfrom

Thanks.

---

Niels de Vos (comment #6):

I think it is worth reporting this bug against selinux-policy so that the change can be made there.

Comment #2 is not completely clear to me: did creating, loading the policy and rebooting not work? Were the error messages the same?

From my understanding, the relabeling is done implicitly when mounting. No manual relabeling should be needed.
---

Deepu K S:

(In reply to Niels de Vos from comment #6)
> I think it is worth reporting this bug against selinux-policy so that the change can be made there.

I'll move this bug to the selinux-policy component.

> Comment #2 is not completely clear to me: did creating, loading the policy and rebooting not work? Were the error messages the same?

Yes, it didn't help.

# ausearch -c 'glusterfs' --raw | audit2allow -M my-glusterfs
# semodule -i my-glusterfs.pp
# semodule -l | grep -i gluster
glusterd 1.1.2
my-glusterfs 1.0

The errors were the same.

> From my understanding, the relabeling is done implicitly when mounting. No manual relabeling should be needed.

---

This issue was not selected to be included in Red Hat Enterprise Linux 7 because it is seen as either low or moderate impact to a small number of use cases. The next minor release will be in Maintenance Support 1 Phase, which means that qualified Critical and Important Security errata advisories (RHSAs) and Urgent Priority Bug Fix errata advisories (RHBAs) may be released as they become available. We will now close this issue, but if you believe that it qualifies for the Maintenance Support 1 Phase, please re-open; otherwise, we recommend moving the request to Red Hat Enterprise Linux 8 if applicable.

---

Deepu K S:

I have tested this on RHEL 8 and the issue exists there too. These were the package versions:

glusterfs-3.12.2-40.2.el8.x86_64
glusterfs-fuse-3.12.2-40.2.el8.x86_64
selinux-policy-3.14.1-61.el8.noarch
selinux-policy-targeted-3.14.1-61.el8.noarch

On my RHEL 7 client, selinux-policy was at:

selinux-policy-3.13.1-252.el7.1.noarch
selinux-policy-targeted-3.13.1-252.el7.1.noarch

I'm re-opening this bug as the client machine in use is RHEL 7.7. A clone for RHEL 8 is also being opened.

Lukas, could you please have a look at whether a change needs to be made in selinux-policy?

---

Lukas Vrabec (comment #11):

Hi,

Could you please put SELinux into permissive mode:

# setenforce 0

Then reproduce your issue and attach the output of:

# ausearch -m AVC -ts boot

Thanks,
Lukas
---

Deepu K S:

(In reply to Lukas Vrabec from comment #11)
> Could you please put SELinux into permissive mode:
> # setenforce 0
> Then reproduce your issue and attach the output of:
> # ausearch -m AVC -ts boot

Hi Lukas,

Below is the output. I had to set permissive mode in the config file, as the issue occurs at boot.

# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          permissive
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      31

# ausearch -m AVC -ts boot
----
time->Mon Sep 23 16:29:15 2019
type=PROCTITLE msg=audit(1569248955.843:116): proctitle=2F7573722F7362696E2F676C75737465726673002D2D61636C002D2D667573652D6D6F756E746F7074733D636F6E746578743D222273797374656D5F753A6F626A6563745F723A68747470645F7379735F72775F636F6E74656E745F743A73302222002D2D766F6C66696C652D7365727665723D676C75737465722D31002D2D
type=SYSCALL msg=audit(1569248955.843:116): arch=c000003e syscall=165 success=yes exit=0 a0=55c485102470 a1=55c485102250 a2=7ff8fe98b300 a3=0 items=0 ppid=1513 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterfs" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1569248955.843:116): avc: denied { mount } for pid=1564 comm="glusterfs" name="/" dev="fuse" ino=1 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:httpd_sys_rw_content_t:s0 tclass=filesystem permissive=1
type=AVC msg=audit(1569248955.843:116): avc: denied { relabelfrom } for pid=1564 comm="glusterfs" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:httpd_sys_rw_content_t:s0 tclass=filesystem permissive=1
type=AVC msg=audit(1569248955.843:116): avc: denied { relabelto } for pid=1564 comm="glusterfs" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:httpd_sys_rw_content_t:s0 tclass=filesystem permissive=1
type=AVC msg=audit(1569248955.843:116): avc: denied { relabelfrom } for pid=1564 comm="glusterfs" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem permissive=1

Let me know if you require any additional info. Thanks.
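As an aside, the PROCTITLE record above is hex-encoded because the kernel stores the command line as NUL-separated bytes. Decoding it confirms exactly how glusterfs was invoked at boot, including the context= fuse mount option. A minimal Python sketch, with the hex string copied verbatim from the audit record above:

```python
# Decode an audit PROCTITLE field: hex -> bytes, split on NUL
# separators to recover argv. The record is truncated by the
# kernel, which is why the last argument is an incomplete "--".
proctitle_hex = (
    "2F7573722F7362696E2F676C7573746572667300"            # /usr/sbin/glusterfs
    "2D2D61636C00"                                        # --acl
    "2D2D667573652D6D6F756E746F7074733D636F6E746578743D"  # --fuse-mountopts=context=
    "222273797374656D5F753A6F626A6563745F723A"
    "68747470645F7379735F72775F636F6E74656E745F743A7330222200"
    "2D2D766F6C66696C652D7365727665723D676C75737465722D3100"
    "2D2D"                                                # truncated argument
)

argv = bytes.fromhex(proctitle_hex).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> /usr/sbin/glusterfs --acl --fuse-mountopts=context=""system_u:object_r:httpd_sys_rw_content_t:s0"" --volfile-server=gluster-1 --
```

This shows the mount helper passing context= through --fuse-mountopts, which is the operation the glusterd_t domain is denied at boot.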