
Bug 1749898

Summary: Unable to mount glusterfs at boot when specifying security context
Product: Red Hat Enterprise Linux 7
Reporter: Deepu K S <dkochuka>
Component: selinux-policy
Assignee: Zdenek Pytela <zpytela>
Status: CLOSED WONTFIX
QA Contact: Milos Malik <mmalik>
Severity: medium
Docs Contact:
Priority: medium
Version: 7.7
CC: cww, dkochuka, lvrabec, mmalik, ndevos, plautrba, ssekidde, vmojzis, zpytela
Target Milestone: rc
Keywords: Reopened
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1753626 (view as bug list)
Environment:
Last Closed: 2019-12-05 22:19:37 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1753626

Description Deepu K S 2019-09-06 17:36:41 UTC
Description of problem:
The Gluster client mount fails at boot if an SELinux context is specified in the mount options.

# cat /etc/fstab | grep -i gluster
gluster-1:/dist-rep-vol3/loco        /var/lib/pulp/content/   glusterfs       defaults,acl,_netdev,x-systemd.automount,x-systemd.device-timeout=10,context="system_u:object_r:httpd_sys_rw_content_t:s0" 0 0

The mount works after login, or if context is not specified.
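For readers unfamiliar with the option: the `context=` value in the fstab line above is a single quoted mount option whose value is a full SELinux label. An illustrative Python helper (not part of the report) that pulls the value out of such an options string:

```python
# Illustrative only: extract the SELinux context= value from a
# comma-separated mount options string like the fstab entry above.
# The quotes around the value protect characters such as the commas
# that can appear in MLS category sets.

def selinux_context(options):
    """Return the context= value (unquoted), or None if absent."""
    for opt in options.split(","):
        if opt.startswith("context="):
            return opt[len("context="):].strip('"')
    return None

opts = ('defaults,acl,_netdev,x-systemd.automount,'
        'x-systemd.device-timeout=10,'
        'context="system_u:object_r:httpd_sys_rw_content_t:s0"')
print(selinux_context(opts))  # system_u:object_r:httpd_sys_rw_content_t:s0
```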

Version-Release number of selected component (if applicable):
glusterfs-3.12.2-47.4.el7.x86_64
glusterfs-client-xlators-3.12.2-47.4.el7.x86_64
glusterfs-fuse-3.12.2-47.4.el7.x86_64
glusterfs-libs-3.12.2-47.4.el7.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Add a boot persistent gluster volume mount entry in /etc/fstab with an SELinux context.
gluster-1:/dist-rep-vol3/loco        /var/lib/pulp/content/   glusterfs       defaults,acl,_netdev,x-systemd.automount,x-systemd.device-timeout=10,context="system_u:object_r:httpd_sys_rw_content_t:s0" 0 0
2. Reboot the client machine.

Actual results:
Client mount logs :
[2019-09-06 20:22:58.074578] I [MSGID: 100030] [glusterfsd.c:2646:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.12.2 (args: /usr/sbin/glusterfs --acl --fuse-mountopts=context=""system_u:object_r:httpd_sys_rw_content_t:s0"" --volfile-server=gluster-1 --volfile-id=dist-rep-vol3 --fuse-mountopts=context=""system_u:object_r:httpd_sys_rw_content_t:s0"" --subdir-mount=/loco /var/lib/pulp/content)
[2019-09-06 20:22:58.220974] E [mount.c:444:fuse_mount_sys] 0-glusterfs-fuse: ret = -1

[2019-09-06 20:22:58.221108] I [mount.c:489:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Permission denied) errno 13
[2019-09-06 20:22:58.221120] E [mount.c:502:gf_fuse_mount] 0-glusterfs-fuse: mount of gluster-1:dist-rep-vol3/loco to /var/lib/pulp/content (context=""system_u:object_r:httpd_sys_rw_content_t:s0"",allow_other,max_read=131072) failed
[2019-09-06 20:22:58.245740] W [MSGID: 101002] [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction
[2019-09-06 20:22:58.315131] I [MSGID: 101190] [event-epoll.c:676:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2019-09-06 20:22:58.316241] I [MSGID: 101190] [event-epoll.c:676:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1


/var/log/messages :
Sep  7 01:53:01 vm251-241 setroubleshoot: SELinux is preventing /usr/sbin/glusterfsd from relabelfrom access on the filesystem . For complete SELinux messages run: sealert -l 57d1e82c-8d02-40c5-b0ce-2875219d7a88
Sep  7 01:53:01 vm251-241 python: SELinux is preventing /usr/sbin/glusterfsd from relabelfrom access on the filesystem .

*****  Plugin catchall (100. confidence) suggests   **************************

If you believe that glusterfsd should be allowed relabelfrom access on the  filesystem by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'glusterfs' --raw | audit2allow -M my-glusterfs
# semodule -i my-glusterfs.pp


Expected results:
# mount | grep -i gluster
gluster-1:dist-rep-vol3/loco on /var/lib/pulp/content type fuse.glusterfs (rw,relatime,context=system_u:object_r:httpd_sys_rw_content_t:s0,user_id=0,group_id=0,allow_other,max_read=131072)

Additional info:
An older bug was found - https://bugzilla.redhat.com/show_bug.cgi?id=1257234

Comment 2 Deepu K S 2019-09-09 13:16:55 UTC
# cat my-glusterfs.te 

module my-glusterfs 1.0;

require {
	type fusefs_t;
	type glusterd_t;
	type httpd_sys_rw_content_t;
	class filesystem { relabelfrom relabelto };
}

#============= glusterd_t ==============

#!!!! This avc is allowed in the current policy
allow glusterd_t fusefs_t:filesystem relabelfrom;
allow glusterd_t httpd_sys_rw_content_t:filesystem relabelto;

# semodule -i my-glusterfs.pp

# semodule -l | grep -i gluster
glusterd	1.1.2
my-glusterfs	1.0

I loaded this module and ran a relabel, but it did not help.

I see this happening only with glusterfs; local and NFS mounts work fine with the context option.

# ls -ldZ /mnt/nfs-mount
drwxr-xr-x. root root system_u:object_r:httpd_sys_rw_content_t:s0 /mnt/nfs-mount
# ls -ldZ /var/lib/pulp1/content
drwxr-xr-x. root root system_u:object_r:httpd_sys_rw_content_t:s0 /var/lib/pulp1/content

# df -HT
Filesystem                                       Type      Size  Used Avail Use% Mounted on
/dev/mapper/rhel_vm251--241-root                 xfs        19G  8.6G  9.7G  47% /
example.redhat.com:/opt/nfs-export               nfs4       45G   41G  4.7G  90% /mnt/nfs-mount
/dev/sdb                                         xfs        11G   34M   11G   1% /var/lib/pulp1/content
/dev/sda1                                        xfs       1.1G  280M  785M  27% /boot

# mount | egrep "nfs|gluster|sdb"
/dev/sdb on /var/lib/pulp1/content type xfs (rw,relatime,context=system_u:object_r:httpd_sys_rw_content_t:s0,attr2,inode64,noquota,_netdev)
example.redhat.com:/opt/nfs-export on /mnt/nfs-mount type nfs4 (rw,relatime,context=system_u:object_r:httpd_sys_rw_content_t:s0,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.74.251.241,local_lock=none,addr=10.74.253.234,_netdev)

After running a mount -a; it gets mounted.
gluster-1:dk-dr-v3/loco on /var/lib/pulp/content type fuse.glusterfs (rw,relatime,context=system_u:object_r:httpd_sys_rw_content_t:s0,user_id=0,group_id=0,allow_other,max_read=131072)

May I know why this fails at boot but works when mounted manually?
Does a rule need to be added to selinux-policy?

Thanks.

Comment 3 Csaba Henk 2019-09-10 18:05:54 UTC
Niels, have you seen anything like this?

Comment 4 Niels de Vos 2019-09-13 12:48:05 UTC
(In reply to Csaba Henk from comment #3)
> Niels, have you seen anything like this?

No, I have not. I can only guess that the mount process launched through systemd has fewer privileges than 'mount -a' run as root after booting.

The full SELinux output might be helpful, as mentioned in the log:

    For complete SELinux messages run: sealert -l 57d1e82c-8d02-40c5-b0ce-2875219d7a88

Comment 5 Deepu K S 2019-09-13 14:03:47 UTC
(In reply to Niels de Vos from comment #4)

> The full SElinux output might be helpful, as mentioned in the log:
> 
>     For complete SELinux messages run: sealert -l
> 57d1e82c-8d02-40c5-b0ce-2875219d7a88

This is from the client node after a reboot.

Sep 13 19:10:22 example setroubleshoot: SELinux is preventing /usr/sbin/glusterfsd from relabelfrom access on the filesystem . For complete SELinux messages run: sealert -l 4d72b595-53be-47ed-88ff-188a3fc09635
Sep 13 19:10:22 example python: SELinux is preventing /usr/sbin/glusterfsd from relabelfrom access on the filesystem .

*****  Plugin catchall (100. confidence) suggests   **************************

If you believe that glusterfsd should be allowed relabelfrom access on the  filesystem by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'glusterfs' --raw | audit2allow -M my-glusterfs
# semodule -i my-glusterfs.pp

# sealert -l 4d72b595-53be-47ed-88ff-188a3fc09635
SELinux is preventing /usr/sbin/glusterfsd from relabelfrom access on the filesystem .

*****  Plugin catchall (100. confidence) suggests   **************************

If you believe that glusterfsd should be allowed relabelfrom access on the  filesystem by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'glusterfs' --raw | audit2allow -M my-glusterfs
# semodule -i my-glusterfs.pp


Additional Information:
Source Context                system_u:system_r:glusterd_t:s0
Target Context                system_u:object_r:httpd_sys_rw_content_t:s0
Target Objects                 [ filesystem ]
Source                        glusterfs
Source Path                   /usr/sbin/glusterfsd
Port                          <Unknown>
Host                          localhost.localdomain
Source RPM Packages           glusterfs-fuse-3.12.2-47.4.el7.x86_64
Target RPM Packages           
Policy RPM                    selinux-policy-3.13.1-252.el7.1.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     example.redhat.com
Platform                      Linux example.redhat.com
                              3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51
                              UTC 2018 x86_64 x86_64
Alert Count                   9
First Seen                    2019-09-09 19:08:31 IST
Last Seen                     2019-09-14 00:40:16 IST
Local ID                      4d72b595-53be-47ed-88ff-188a3fc09635

Raw Audit Messages
type=AVC msg=audit(1568401816.490:110): avc:  denied  { relabelfrom } for  pid=4040 comm="glusterfs" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:httpd_sys_rw_content_t:s0 tclass=filesystem permissive=0


type=SYSCALL msg=audit(1568401816.490:110): arch=x86_64 syscall=mount success=no exit=EACCES a0=564fd81ae0b0 a1=564fd81ade80 a2=7f9a2344b340 a3=0 items=0 ppid=3886 pid=4040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=glusterfs exe=/usr/sbin/glusterfsd subj=system_u:system_r:glusterd_t:s0 key=(null)

Hash: glusterfs,glusterd_t,httpd_sys_rw_content_t,filesystem,relabelfrom


Thanks.
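The raw audit record above carries all the relevant facts (source type glusterd_t, target type httpd_sys_rw_content_t, class filesystem, permission relabelfrom). An illustrative Python parser (not part of the report) that extracts those fields from such a line:

```python
import re

# Illustrative only: pull the denied permissions and the source/target
# contexts out of a raw AVC record like the one shown above.
AVC_RE = re.compile(
    r"avc:\s+denied\s+\{\s*(?P<perms>[^}]+?)\s*\}.*?"
    r"scontext=(?P<scontext>\S+)\s+tcontext=(?P<tcontext>\S+)\s+"
    r"tclass=(?P<tclass>\S+)"
)

def parse_avc(line):
    """Return a dict of AVC fields, or None if the line is not an AVC denial."""
    m = AVC_RE.search(line)
    if m is None:
        return None
    fields = m.groupdict()
    fields["perms"] = fields["perms"].split()
    return fields

raw = ('type=AVC msg=audit(1568401816.490:110): avc:  denied  { relabelfrom } '
       'for  pid=4040 comm="glusterfs" scontext=system_u:system_r:glusterd_t:s0 '
       'tcontext=system_u:object_r:httpd_sys_rw_content_t:s0 '
       'tclass=filesystem permissive=0')
print(parse_avc(raw)["perms"])  # ['relabelfrom']
```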

Comment 6 Niels de Vos 2019-09-13 16:02:46 UTC
I think it is worth reporting this bug against selinux-policy so that the change can be made there. Comment #2 is not completely clear to me: did creating and loading the policy, then rebooting, not work? Were the error messages the same?

From my understanding, the relabeling is done implicitly when mounting. No manual relabeling should be needed.

Comment 7 Deepu K S 2019-09-17 12:24:51 UTC
(In reply to Niels de Vos from comment #6)
> I think it is worth reporting this bug against selinux-policy so that the
> change can be made there. 
I'll move this bug to selinux-policy component.

> Comment #2 is not completely clear to me, did
> creating, loading the policy and rebooting not work? Were the error messages
> the same?
Yes. It didn't help.

# ausearch -c 'glusterfs' --raw | audit2allow -M my-glusterfs
# semodule -i my-glusterfs.pp

# semodule -l | grep -i gluster
glusterd	1.1.2
my-glusterfs	1.0

The errors were the same.

> 
> From my understanding, the relabeling is done implicitly when mounting. No
> manual relabeling should be needed.

Comment 8 Zdenek Pytela 2019-09-18 13:13:04 UTC
This issue was not selected to be included in Red Hat Enterprise Linux 7 because it is seen either as low or moderate impact to a small number of use-cases. The next minor release will be in Maintenance Support 1 Phase, which means that qualified Critical and Important Security errata advisories (RHSAs) and Urgent Priority Bug Fix errata advisories (RHBAs) may be released as they become available.

We will now close this issue, but if you believe that it qualifies for the Maintenance Support 1 Phase, please re-open; otherwise, we recommend moving the request to Red Hat Enterprise Linux 8 if applicable.

Comment 9 Deepu K S 2019-09-19 13:16:35 UTC
I have tested this on RHEL 8 and the issue exists there too. Below are the package versions.

glusterfs-3.12.2-40.2.el8.x86_64
glusterfs-fuse-3.12.2-40.2.el8.x86_64
selinux-policy-3.14.1-61.el8.noarch
selinux-policy-targeted-3.14.1-61.el8.noarch

On my RHEL 7, selinux-policy was at
selinux-policy-3.13.1-252.el7.1.noarch
selinux-policy-targeted-3.13.1-252.el7.1.noarch

I'm re-opening this bug since the client machine in use runs RHEL 7.7. A clone for RHEL 8 is also being opened.

Lukas, could you please take a look at whether a change is needed in selinux-policy?

Comment 11 Lukas Vrabec 2019-09-20 13:47:32 UTC
Hi, 

Could you please put SELinux into permissive mode:

# setenforce 0

Then reproduce your issue and attach the output of:

# ausearch -m AVC -ts boot

Thanks,
Lukas.

Comment 12 Deepu K S 2019-09-23 14:34:36 UTC
(In reply to Lukas Vrabec from comment #11)
> Hi, 
> 
> Could you please put SELinux into permissive mode:
> 
> # setenforce 0
> 
> Then reproduce your issue and attach the output of:
> 
> # ausearch -m AVC -ts boot
> 
> Thanks,
> Lukas.

Hi Lukas,

Below is the output. I had to set permissive mode in the config file, since the issue occurs at boot.
# sestatus 
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          permissive
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      31


# ausearch -m AVC -ts boot
----
time->Mon Sep 23 16:29:15 2019
type=PROCTITLE msg=audit(1569248955.843:116): proctitle=2F7573722F7362696E2F676C75737465726673002D2D61636C002D2D667573652D6D6F756E746F7074733D636F6E746578743D222273797374656D5F753A6F626A6563745F723A68747470645F7379735F72775F636F6E74656E745F743A73302222002D2D766F6C66696C652D7365727665723D676C75737465722D31002D2D
type=SYSCALL msg=audit(1569248955.843:116): arch=c000003e syscall=165 success=yes exit=0 a0=55c485102470 a1=55c485102250 a2=7ff8fe98b300 a3=0 items=0 ppid=1513 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterfs" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1569248955.843:116): avc:  denied  { mount } for  pid=1564 comm="glusterfs" name="/" dev="fuse" ino=1 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:httpd_sys_rw_content_t:s0 tclass=filesystem permissive=1
type=AVC msg=audit(1569248955.843:116): avc:  denied  { relabelfrom } for  pid=1564 comm="glusterfs" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:httpd_sys_rw_content_t:s0 tclass=filesystem permissive=1
type=AVC msg=audit(1569248955.843:116): avc:  denied  { relabelto } for  pid=1564 comm="glusterfs" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:httpd_sys_rw_content_t:s0 tclass=filesystem permissive=1
type=AVC msg=audit(1569248955.843:116): avc:  denied  { relabelfrom } for  pid=1564 comm="glusterfs" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem permissive=1
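The denials above translate mechanically into allow rules; this is essentially what the audit2allow step suggested earlier automates. A minimal, illustrative Python sketch of that translation (types are taken from the contexts in the records above; this is not the audit2allow implementation):

```python
import re
from collections import defaultdict

# Illustrative only: collapse AVC denials into SELinux allow rules, grouping
# permissions by (source type, target type, class) the way audit2allow does.
AVC_RE = re.compile(
    r"denied\s+\{\s*(?P<perm>[^}]+?)\s*\}.*?"
    r"scontext=\w+:\w+:(?P<stype>\w+):\S+\s+"
    r"tcontext=\w+:\w+:(?P<ttype>\w+):\S+\s+"
    r"tclass=(?P<tclass>\S+)"
)

def denials_to_allow_rules(lines):
    rules = defaultdict(set)
    for line in lines:
        m = AVC_RE.search(line)
        if m:
            rules[(m["stype"], m["ttype"], m["tclass"])].update(m["perm"].split())
    out = []
    for (s, t, c), perms in sorted(rules.items()):
        p = sorted(perms)
        body = p[0] if len(p) == 1 else "{ %s }" % " ".join(p)
        out.append(f"allow {s} {t}:{c} {body};")
    return out

denials = [
    'avc:  denied  { mount } for  pid=1564 comm="glusterfs" '
    'scontext=system_u:system_r:glusterd_t:s0 '
    'tcontext=system_u:object_r:httpd_sys_rw_content_t:s0 tclass=filesystem',
    'avc:  denied  { relabelto } for  pid=1564 comm="glusterfs" '
    'scontext=system_u:system_r:glusterd_t:s0 '
    'tcontext=system_u:object_r:httpd_sys_rw_content_t:s0 tclass=filesystem',
    'avc:  denied  { relabelfrom } for  pid=1564 comm="glusterfs" '
    'scontext=system_u:system_r:glusterd_t:s0 '
    'tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem',
]
for rule in denials_to_allow_rules(denials):
    print(rule)
# allow glusterd_t fusefs_t:filesystem relabelfrom;
# allow glusterd_t httpd_sys_rw_content_t:filesystem { mount relabelto };
```

Whether such rules belong in the distribution selinux-policy, rather than in a local module, is the question this bug raises.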

Let me know if you require any additional info.

Thanks.