Bug 1753626 - Unable to mount glusterfs at boot when specifying security context
Summary: Unable to mount glusterfs at boot when specifying security context
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: selinux-policy
Version: 8.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Zdenek Pytela
QA Contact: Milos Malik
URL:
Whiteboard:
Depends On: 1749898
Blocks:
 
Reported: 2019-09-19 13:19 UTC by Deepu K S
Modified: 2023-03-24 15:28 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1749898
Environment:
Last Closed: 2020-11-04 01:55:53 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links:
Red Hat Knowledge Base (Solution) 4436861 (last updated 2019-12-05 10:47:02 UTC)
Red Hat Product Errata RHBA-2020:4528 (last updated 2020-11-04 01:56:19 UTC)

Description Deepu K S 2019-09-19 13:19:19 UTC
+++ This bug was initially created as a clone of Bug #1749898 +++

Description of problem:
The gluster client mount fails at boot if an SELinux context is specified in the mount options.

# cat /etc/fstab | grep -i gluster
gluster-1:/dist-rep-vol3/loco        /var/lib/pulp/content/   glusterfs       defaults,acl,_netdev,x-systemd.automount,x-systemd.device-timeout=10,context="system_u:object_r:httpd_sys_rw_content_t:s0" 0 0

The mount works after login, or if no context is specified.
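Since the failure only happens during boot, one way to inspect what systemd did with the fstab entry is to query the generated mount/automount units (a diagnostic sketch; the unit names below are derived from the mount point and are an assumption):

# systemctl status var-lib-pulp-content.automount var-lib-pulp-content.mount
# journalctl -b -u var-lib-pulp-content.mount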

Version-Release number of selected component (if applicable):
glusterfs-3.12.2-47.4.el7.x86_64
glusterfs-client-xlators-3.12.2-47.4.el7.x86_64
glusterfs-fuse-3.12.2-47.4.el7.x86_64
glusterfs-libs-3.12.2-47.4.el7.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Add a boot-persistent gluster volume mount entry in /etc/fstab with an SELinux context.
gluster-1:/dist-rep-vol3/loco        /var/lib/pulp/content/   glusterfs       defaults,acl,_netdev,x-systemd.automount,x-systemd.device-timeout=10,context="system_u:object_r:httpd_sys_rw_content_t:s0" 0 0
2. Reboot the client machine.
3.

Actual results:
Client mount logs :
[2019-09-06 20:22:58.074578] I [MSGID: 100030] [glusterfsd.c:2646:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.12.2 (args: /usr/sbin/glusterfs --acl --fuse-mountopts=context=""system_u:object_r:httpd_sys_rw_content_t:s0"" --volfile-server=gluster-1 --volfile-id=dist-rep-vol3 --fuse-mountopts=context=""system_u:object_r:httpd_sys_rw_content_t:s0"" --subdir-mount=/loco /var/lib/pulp/content)
[2019-09-06 20:22:58.220974] E [mount.c:444:fuse_mount_sys] 0-glusterfs-fuse: ret = -1

[2019-09-06 20:22:58.221108] I [mount.c:489:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Permission denied) errno 13
[2019-09-06 20:22:58.221120] E [mount.c:502:gf_fuse_mount] 0-glusterfs-fuse: mount of gluster-1:dist-rep-vol3/loco to /var/lib/pulp/content (context=""system_u:object_r:httpd_sys_rw_content_t:s0"",allow_other,max_read=131072) failed
[2019-09-06 20:22:58.245740] W [MSGID: 101002] [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction
[2019-09-06 20:22:58.315131] I [MSGID: 101190] [event-epoll.c:676:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2019-09-06 20:22:58.316241] I [MSGID: 101190] [event-epoll.c:676:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1


/var/log/messages :
Sep  7 01:53:01 vm251-241 setroubleshoot: SELinux is preventing /usr/sbin/glusterfsd from relabelfrom access on the filesystem . For complete SELinux messages run: sealert -l 57d1e82c-8d02-40c5-b0ce-2875219d7a88
Sep  7 01:53:01 vm251-241 python: SELinux is preventing /usr/sbin/glusterfsd from relabelfrom access on the filesystem .#012#012*****  Plugin catchall (100. confidence) suggests   **************************#012#
012If you believe that glusterfsd should be allowed relabelfrom access on the  filesystem by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012D
o#012allow this access for now by executing:#012# ausearch -c 'glusterfs' --raw | audit2allow -M my-glusterfs#012# semodule -i my-glusterfs.pp#012


Expected results:
# mount | grep -i gluster
gluster-1:dist-rep-vol3/loco on /var/lib/pulp/content type fuse.glusterfs (rw,relatime,context=system_u:object_r:httpd_sys_rw_content_t:s0,user_id=0,group_id=0,allow_other,max_read=131072)

Additional info:
An older bug was found - https://bugzilla.redhat.com/show_bug.cgi?id=1257234

--- Additional comment from RHEL Product and Program Management on 2019-09-06 17:36:47 UTC ---

Since this bug report was entered in Red Hat Bugzilla, the release flag has been set to ? to ensure that it is properly evaluated for this release.

--- Additional comment from Deepu K S on 2019-09-09 13:16:55 UTC ---

# cat my-glusterfs.te 

module my-glusterfs 1.0;

require {
	type fusefs_t;
	type glusterd_t;
	type httpd_sys_rw_content_t;
	class filesystem { relabelfrom relabelto };
}

#============= glusterd_t ==============

#!!!! This avc is allowed in the current policy
allow glusterd_t fusefs_t:filesystem relabelfrom;
allow glusterd_t httpd_sys_rw_content_t:filesystem relabelto;

# semodule -i my-glusterfs.pp

# semodule -l | grep -i gluster
glusterd	1.1.2
my-glusterfs	1.0

I added this module and ran a relabel, but it did not help.
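For reference, such a module can be compiled and loaded from the .te source with the standard SELinux userspace tools (audit2allow -M performs the same steps in one go); a generic sketch:

# checkmodule -M -m -o my-glusterfs.mod my-glusterfs.te
# semodule_package -o my-glusterfs.pp -m my-glusterfs.mod
# semodule -i my-glusterfs.pp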

I see this happens only for glusterfs; local and NFS mounts work with the context option.

# ls -ldZ /mnt/nfs-mount
drwxr-xr-x. root root system_u:object_r:httpd_sys_rw_content_t:s0 /mnt/nfs-mount
# ls -ldZ /var/lib/pulp1/content
drwxr-xr-x. root root system_u:object_r:httpd_sys_rw_content_t:s0 /var/lib/pulp1/content

# df -HT
Filesystem                                       Type      Size  Used Avail Use% Mounted on
/dev/mapper/rhel_vm251--241-root                 xfs        19G  8.6G  9.7G  47% /
example.redhat.com:/opt/nfs-export nfs4       45G   41G  4.7G  90% /mnt/nfs-mount
/dev/sdb                                         xfs        11G   34M   11G   1% /var/lib/pulp1/content
/dev/sda1                                        xfs       1.1G  280M  785M  27% /boot

# mount | egrep "nfs|gluster|sdb"
/dev/sdb on /var/lib/pulp1/content type xfs (rw,relatime,context=system_u:object_r:httpd_sys_rw_content_t:s0,attr2,inode64,noquota,_netdev)
example.redhat.com:/opt/nfs-export on /mnt/nfs-mount type nfs4 (rw,relatime,context=system_u:object_r:httpd_sys_rw_content_t:s0,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.74.251.241,local_lock=none,addr=10.74.253.234,_netdev)

After running mount -a, it gets mounted:
gluster-1:dk-dr-v3/loco on /var/lib/pulp/content type fuse.glusterfs (rw,relatime,context=system_u:object_r:httpd_sys_rw_content_t:s0,user_id=0,group_id=0,allow_other,max_read=131072)

May I know why this fails at boot but works when mounted manually?
Do we need to add this to selinux-policy?

Thanks.

--- Additional comment from Csaba Henk on 2019-09-10 18:05:54 UTC ---

Niels, have you seen anything like this?

--- Additional comment from Niels de Vos on 2019-09-13 12:48:05 UTC ---

(In reply to Csaba Henk from comment #3)
> Niels, have you seen anything like this?

No, I have not. I can only guess that this is caused by the mount process being started through systemd, which may have fewer privileges than running 'mount -a' as root after booting.

The full SELinux output might be helpful, as mentioned in the log:

    For complete SELinux messages run: sealert -l 57d1e82c-8d02-40c5-b0ce-2875219d7a88
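One way to see whether the boot-time mount and a manual 'mount -a' run in different SELinux domains is to check the contexts involved (a diagnostic sketch, not a fix):

# ps -efZ | grep '[g]lusterfs'
# id -Z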

--- Additional comment from Deepu K S on 2019-09-13 14:03:47 UTC ---

(In reply to Niels de Vos from comment #4)

> The full SElinux output might be helpful, as mentioned in the log:
> 
>     For complete SELinux messages run: sealert -l
> 57d1e82c-8d02-40c5-b0ce-2875219d7a88

This is from the client node after a reboot.

Sep 13 19:10:22 example setroubleshoot: SELinux is preventing /usr/sbin/glusterfsd from relabelfrom access on the filesystem . For complete SELinux messages run: sealert -l 4d72b595-53be-47ed-88ff-188a3fc09635
Sep 13 19:10:22 example python: SELinux is preventing /usr/sbin/glusterfsd from relabelfrom access on the filesystem .
#012#012*****  Plugin catchall (100. confidence) suggests   **************************#012#012
If you believe that glusterfsd should be allowed relabelfrom access on the  filesystem by default.#012Then you should report this as a bug.#012
You can generate a local policy module to allow this access.
#012Do#012allow this access for now by executing:#012# ausearch -c 'glusterfs' --raw | audit2allow -M my-glusterfs#012# semodule -i my-glusterfs.pp#012

# sealert -l 4d72b595-53be-47ed-88ff-188a3fc09635
SELinux is preventing /usr/sbin/glusterfsd from relabelfrom access on the filesystem .

*****  Plugin catchall (100. confidence) suggests   **************************

If you believe that glusterfsd should be allowed relabelfrom access on the  filesystem by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'glusterfs' --raw | audit2allow -M my-glusterfs
# semodule -i my-glusterfs.pp


Additional Information:
Source Context                system_u:system_r:glusterd_t:s0
Target Context                system_u:object_r:httpd_sys_rw_content_t:s0
Target Objects                 [ filesystem ]
Source                        glusterfs
Source Path                   /usr/sbin/glusterfsd
Port                          <Unknown>
Host                          localhost.localdomain
Source RPM Packages           glusterfs-fuse-3.12.2-47.4.el7.x86_64
Target RPM Packages           
Policy RPM                    selinux-policy-3.13.1-252.el7.1.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     example.redhat.com
Platform                      Linux example.redhat.com
                              3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51
                              UTC 2018 x86_64 x86_64
Alert Count                   9
First Seen                    2019-09-09 19:08:31 IST
Last Seen                     2019-09-14 00:40:16 IST
Local ID                      4d72b595-53be-47ed-88ff-188a3fc09635

Raw Audit Messages
type=AVC msg=audit(1568401816.490:110): avc:  denied  { relabelfrom } for  pid=4040 comm="glusterfs" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:httpd_sys_rw_content_t:s0 tclass=filesystem permissive=0


type=SYSCALL msg=audit(1568401816.490:110): arch=x86_64 syscall=mount success=no exit=EACCES a0=564fd81ae0b0 a1=564fd81ade80 a2=7f9a2344b340 a3=0 items=0 ppid=3886 pid=4040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=glusterfs exe=/usr/sbin/glusterfsd subj=system_u:system_r:glusterd_t:s0 key=(null)

Hash: glusterfs,glusterd_t,httpd_sys_rw_content_t,filesystem,relabelfrom


Thanks.

--- Additional comment from Niels de Vos on 2019-09-13 16:02:46 UTC ---

I think it is worth reporting this bug against selinux-policy so that the change can be made there. Comment #2 is not completely clear to me: did creating and loading the policy module and then rebooting not work? Were the error messages the same?

From my understanding, the relabeling is done implicitly when mounting. No manual relabeling should be needed.
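To illustrate the implicit relabeling: with a context= mount option the label is applied to the whole mount at mount time, so a manual mount followed by a label check should show it without any explicit relabel step (a sketch re-using the paths from this report):

# mount -t glusterfs -o acl,context="system_u:object_r:httpd_sys_rw_content_t:s0" gluster-1:/dist-rep-vol3/loco /var/lib/pulp/content
# ls -dZ /var/lib/pulp/content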

--- Additional comment from Deepu K S on 2019-09-17 12:24:51 UTC ---

(In reply to Niels de Vos from comment #6)
> I think it is worth reporting this bug against selinux-policy so that the
> change can be made there. 
I'll move this bug to selinux-policy component.

> Comment #2 is not completely clear to me, did
> creating, loading the policy and rebooting not work? Were the error messages
> the same?
Yes. It didn't help.

# ausearch -c 'glusterfs' --raw | audit2allow -M my-glusterfs
# semodule -i my-glusterfs.pp

# semodule -l | grep -i gluster
glusterd	1.1.2
my-glusterfs	1.0

The errors were the same.

> 
> From my understanding, the relabeling is done implicitly when mounting. No
> manual relabeling should be needed.

--- Additional comment from Zdenek Pytela on 2019-09-18 13:13:04 UTC ---

This issue was not selected to be included in Red Hat Enterprise Linux 7 because it is seen either as low or moderate impact to a small number of use-cases. The next minor release will be in Maintenance Support 1 Phase, which means that qualified Critical and Important Security errata advisories (RHSAs) and Urgent Priority Bug Fix errata advisories (RHBAs) may be released as they become available.

We will now close this issue, but if you believe that it qualifies for the Maintenance Support 1 Phase, please re-open; otherwise, we recommend moving the request to Red Hat Enterprise Linux 8 if applicable.

Comment 1 Deepu K S 2019-09-19 13:25:17 UTC
The errors are the same as in RHEL 7.

/var/log/glusterfs/var-lib-pulp-content.log
[2019-09-19 12:53:56.124767] I [MSGID: 100030] [glusterfsd.c:2571:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.12.2 (args: /usr/sbin/glusterfs --fuse-mountopts=context=""system_u:object_r:httpd_sys_rw_content_t:s0"" --volfile-server=gluster-1 --volfile-id=dk-dr-v3 --fuse-mountopts=context=""system_u:object_r:httpd_sys_rw_content_t:s0"" --subdir-mount=/dk /var/lib/pulp/content)
[2019-09-19 12:53:56.179180] E [mount.c:444:fuse_mount_sys] 0-glusterfs-fuse: ret = -1

[2019-09-19 12:53:56.179282] I [mount.c:489:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Permission denied) errno 13
[2019-09-19 12:53:56.179293] E [mount.c:502:gf_fuse_mount] 0-glusterfs-fuse: mount of gluster-1:dk-dr-v3/dk to /var/lib/pulp/content (default_permissions,context=""system_u:object_r:httpd_sys_rw_content_t:s0"",allow_other,max_read=131072) failed

gluster-1:/dk-dr-v3/dk        /var/lib/pulp/content/   glusterfs       defaults,_netdev,context="system_u:object_r:httpd_sys_rw_content_t:s0" 0 0

Red Hat Enterprise Linux release 8.0 (Ootpa)

The package versions were:
glusterfs-3.12.2-40.2.el8.x86_64
glusterfs-libs-3.12.2-40.2.el8.x86_64
glusterfs-client-xlators-3.12.2-40.2.el8.x86_64
glusterfs-fuse-3.12.2-40.2.el8.x86_64

selinux-policy-targeted-3.14.1-61.el8.noarch
selinux-policy-3.14.1-61.el8.noarch

Comment 2 Lukas Vrabec 2019-09-20 13:47:00 UTC
Hi, 

Could you please put SELinux to permissive mode:

# setenforce 0

Then reproduce your issue
..
..
..

and attach output of:

# ausearch -m AVC -ts boot

Thanks,
Lukas.
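One note: a runtime 'setenforce 0' does not survive a reboot, and the failure here happens during boot, so capturing the boot-time AVCs in permissive mode means changing the config file first and switching back afterwards (a sketch of the sequence):

# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# reboot
(after the reboot and the mount attempt)
# ausearch -m AVC -ts boot
# sed -i 's/^SELINUX=permissive/SELINUX=enforcing/' /etc/selinux/config
# setenforce 1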

Comment 3 Deepu K S 2019-09-23 14:54:02 UTC
(In reply to Lukas Vrabec from comment #2)
> Hi, 
> 
> Could you please put SELinux to permissive mode:
> 
> # setenforce 0
> 
> Then reproduce your issue
> ..
> ..
> ..
> 
> and attach output of:
> 
> # ausearch -m AVC -ts boot
> 
> Thanks,
> Lukas.

# sestatus 
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          permissive
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      31

# ausearch -m AVc -ts boot
----
time->Mon Sep 23 20:22:43 2019
type=PROCTITLE msg=audit(1569250363.317:31): proctitle=2F7573722F7362696E2F676C75737465726673002D2D667573652D6D6F756E746F7074733D636F6E746578743D222273797374656D5F753A6F626A6563745F723A68747470645F7379735F72775F636F6E74656E745F743A73302222002D2D766F6C66696C652D7365727665723D676C75737465722D31002D2D766F6C66696C
type=SYSCALL msg=audit(1569250363.317:31): arch=c000003e syscall=165 success=yes exit=0 a0=55e788b315f0 a1=55e788b31390 a2=7f3dd19d8eae a3=0 items=0 ppid=1270 pid=1275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterfs" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1569250363.317:31): avc:  denied  { mount } for  pid=1275 comm="glusterfs" name="/" dev="fuse" ino=1 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:httpd_sys_rw_content_t:s0 tclass=filesystem permissive=1
type=AVC msg=audit(1569250363.317:31): avc:  denied  { relabelfrom } for  pid=1275 comm="glusterfs" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:httpd_sys_rw_content_t:s0 tclass=filesystem permissive=1
type=AVC msg=audit(1569250363.317:31): avc:  denied  { relabelto } for  pid=1275 comm="glusterfs" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:httpd_sys_rw_content_t:s0 tclass=filesystem permissive=1
type=AVC msg=audit(1569250363.317:31): avc:  denied  { relabelfrom } for  pid=1275 comm="glusterfs" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem permissive=1

Thanks.

Comment 4 Lukas Vrabec 2019-10-02 15:09:01 UTC
Hi Deepu, 

Is this a real use case for glusterd users? If yes, we need to allow the glusterd_t SELinux domain to relabel all files on the system, which is not ideal, but if it is the behaviour of the product, we need to allow it.

Thanks,
Lukas.
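For reference, the filesystem-class permissions the current policy grants to glusterd_t can be listed with sesearch from the setools-console package, to confirm which of the denied permissions are missing (a sketch):

# sesearch -A -s glusterd_t -c filesystem
# sesearch -A -s glusterd_t -t httpd_sys_rw_content_t -c filesystem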

Comment 5 Deepu K S 2019-10-04 13:25:20 UTC
(In reply to Lukas Vrabec from comment #4)
> Hi Deepu, 
> 
> Is this a real use case for glusterd users? If yes, we need to allow the
> glusterd_t SELinux domain to relabel all files on the system, which is not
> ideal, but if it is the behaviour of the product, we need to allow it.
> 
> Thanks,
> Lukas.

This is a case where the Satellite repository storage is on gluster, i.e. mounted on /var/lib/pulp/content.
I believe the repository mount would require the context="system_u:object_r:httpd_sys_rw_content_t:s0"

I haven't seen any other use cases.

Comment 6 Lukas Vrabec 2019-10-04 14:00:34 UTC
Is it part of some tutorial or documentation? 

Thanks,
Lukas.

Comment 7 Deepu K S 2019-10-08 07:56:49 UTC
(In reply to Lukas Vrabec from comment #6)
> Is it part of some tutorial or documentation? 
> 
> Thanks,
> Lukas.

The Satellite documentation mentions SELinux considerations for NFS mounts.
https://access.redhat.com/documentation/en-us/red_hat_satellite/6.5/html-single/installing_satellite_server_from_a_connected_network/index#storage_requirements

There is no explicit mention of gluster anywhere, but the guidelines mention using XFS or any shared filesystem.
https://docs.pulpproject.org/user-guide/scaling.html#clustering

I'll confirm with my team on this. Keeping the needinfo on me.

Comment 8 Deepu K S 2019-10-09 12:29:51 UTC
Hi Lukas,

There's no direct mention of gluster in Satellite documentation, but there are users using it in their environment.

On a general note, this should affect any httpd-based web application that has content shared over gluster, or indeed any application directory that needs a specific SELinux context with its data mounted from a gluster volume.

Thanks.
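For illustration, a hypothetical /etc/fstab entry for an httpd document root served from a gluster volume would hit the same denial (the volume and mount point names are made up; httpd_sys_content_t is the read-only web content type):

gluster-1:/web-vol        /var/www/html   glusterfs       defaults,_netdev,context="system_u:object_r:httpd_sys_content_t:s0" 0 0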

Comment 9 Lukas Vrabec 2019-10-17 11:01:18 UTC
Deepu, 

I added fixes from Fedora.

commit af4b32d6a17855e1a1dd15a11eb879b82347c6f7 (HEAD -> rawhide, origin/rawhide, origin/HEAD)
Author: Lukas Vrabec <lvrabec>
Date:   Thu Oct 17 12:57:37 2019 +0200

    Allow Gluster mount client to mount files_type
    
    Gluster mount client should have same access like mount_t to mount and
    relabel all files_type types.

Comment 12 suzushrestha 2019-11-19 19:42:11 UTC
@Lukas: I am using glusterfs, server 3.12. Has this been resolved yet? It is really painful to have multiple issues in gluster. I am also struggling right now to change the context on gluster volume mounts; as a workaround I am maintaining separate volumes (many mounts) per context, which satisfies my current need, but gluster's inability to automount volumes that carry SELinux mount options after a reboot is painful. Please let me know if there is a workaround.
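As an interim workaround until the policy fix is released, a local module covering just the accesses denied in comment 3 might look like the following (an untested sketch derived from those AVCs; unlike the module in comment 2 it also includes the mount permission, and the module name is arbitrary):

# cat local-gluster-context.te
module local-gluster-context 1.0;

require {
	type glusterd_t;
	type fusefs_t;
	type httpd_sys_rw_content_t;
	class filesystem { mount relabelfrom relabelto };
}

allow glusterd_t fusefs_t:filesystem relabelfrom;
allow glusterd_t httpd_sys_rw_content_t:filesystem { mount relabelfrom relabelto };

It can be built with checkmodule/semodule_package as in the earlier sketch, or generated directly with 'ausearch -m AVC -ts boot --raw | audit2allow -M local-gluster-context', and then loaded with 'semodule -i local-gluster-context.pp'.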

Comment 13 suzushrestha 2019-11-21 19:15:51 UTC
Hello all,

Any feedback? Waiting for a solution; please advise if there is one.

Comment 20 Zdenek Pytela 2020-06-05 19:44:06 UTC
This commit needs to be backported:
commit af4b32d6a17855e1a1dd15a11eb879b82347c6f7
Author: Lukas Vrabec <lvrabec>
Date:   Thu Oct 17 12:57:37 2019 +0200

    Allow Gluster mount client to mount files_type
    
    Gluster mount client should have same access like mount_t to mount and
    relabel all files_type types.

diff --git a/glusterd.te b/glusterd.te
index 3dc332a31..92a92374d 100644
--- a/glusterd.te
+++ b/glusterd.te
@@ -183,7 +183,9 @@ fs_getattr_all_fs(glusterd_t)
 fs_getattr_all_dirs(glusterd_t)
 
 files_mounton_non_security(glusterd_t)
-
+files_relabel_all_file_type_fs(glusterd_t)
+files_mount_all_file_type_fs(glusterd_t)
+files_unmount_all_file_type_fs(glusterd_t)
 files_dontaudit_read_security_files(glusterd_t)
 files_dontaudit_list_security_dirs(glusterd_t)

Comment 21 Zdenek Pytela 2020-06-08 14:49:48 UTC
https://gitlab.cee.redhat.com/SELinux/selinux-policy/-/merge_requests/54/diffs?commit_id=5084c7021f2ab5003f9008d0b5abfc9f5a910d27

commit 5084c7021f2ab5003f9008d0b5abfc9f5a910d27 (HEAD -> rhel8.3-contrib, origin/rhel8.3-contrib)
Author: Lukas Vrabec <lvrabec>
Date:   Thu Oct 17 12:57:37 2019 +0200

    Allow Gluster mount client to mount files_type
    
    Gluster mount client should have same access like mount_t to mount and
    relabel all files_type types.
    
    Resolves: rhbz#1753626

Also cleaning the needinfo flag.

Comment 28 errata-xmlrpc 2020-11-04 01:55:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (selinux-policy bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4528
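Once the selinux-policy packages from the advisory are installed, a possible re-test (a sketch; the exact fixed package version is listed in the advisory):

# dnf update selinux-policy selinux-policy-targeted
# reboot
(after the reboot)
# mount | grep -i gluster
# ausearch -m AVC -ts boot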

