Bug 1593130 - SELinux is preventing /usr/sbin/glusterfsd from map access on the file under /var/tmp
Summary: SELinux is preventing /usr/sbin/glusterfsd from map access on the file under /var/tmp
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: selinux-policy
Version: 7.6
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: beta
Target Release: 7.6
Assignee: Lukas Vrabec
QA Contact: Milos Malik
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-06-20 07:28 UTC by Han Han
Modified: 2018-10-30 10:06 UTC
CC List: 21 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-30 10:05:46 UTC
Target Upstream Version:
Embargoed:
yanqzhan: needinfo-


Attachments
syslog_audit_gluster-vol-start-fail-3.12.2-13 (9.68 KB, application/zip)
2018-07-12 10:11 UTC, Yanqiu Zhang


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2018:3111 0 None None None 2018-10-30 10:06:45 UTC

Description Han Han 2018-06-20 07:28:06 UTC
Description of problem:
As described in the summary: SELinux prevents /usr/sbin/glusterfsd from map access on a file under /var/tmp, which makes the gluster volume start fail.

Version-Release number of selected component (if applicable):
glusterfs-fuse-3.8.4-54.12.el7rhgs.x86_64
glusterfs-3.8.4-54.12.el7rhgs.x86_64
selinux-policy-3.13.1-204.el7.noarch

How reproducible:
100%

Steps to Reproduce:
1. Install setroubleshoot-server and then reload auditd.
2. Try to setup a gluster server
# mkdir -p /var/tmp/gv1
# gluster volume create gv1 transport tcp  `hostname`:/var/tmp/gv1  force                                                                            
volume create: gv1: success: please start the volume to access data
# gluster volume start gv1
volume start: gv1: failed: Commit failed on localhost. Please check log file for details.

3. Check /var/log/messages and selinux messages
# grep setr /var/log/messages
Jun 20 03:11:02 hp-dl320eg8-12 setroubleshoot: SELinux is preventing /usr/sbin/glusterfsd from map access on the file /var/tmp/gv1/.glusterfs/gv1.db-shm. For complete SELinux messages run: sealert -l a6597d31-6524-43cf-8fe8-d3c828d45716

# sealert -l a6597d31-6524-43cf-8fe8-d3c828d45716
SELinux is preventing /usr/sbin/glusterfsd from map access on the file /var/tmp/gv1/.glusterfs/gv1.db-shm.

*****  Plugin catchall (100. confidence) suggests   **************************

If you believe that glusterfsd should be allowed map access on the gv1.db-shm file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'glusterfsd' --raw | audit2allow -M my-glusterfsd
# semodule -i my-glusterfsd.pp


Additional Information:
Source Context                system_u:system_r:glusterd_t:s0
Target Context                system_u:object_r:user_tmp_t:s0
Target Objects                /var/tmp/gv1/.glusterfs/gv1.db-shm [ file ]
Source                        glusterfsd
Source Path                   /usr/sbin/glusterfsd
Port                          <Unknown>
Host                          hp-dl320eg8-12.lab.eng.pek2.redhat.com
Source RPM Packages           
Target RPM Packages           
Policy RPM                    selinux-policy-3.13.1-204.el7.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     hp-dl320eg8-12.lab.eng.pek2.redhat.com
Platform                      Linux hp-dl320eg8-12.lab.eng.pek2.redhat.com
                              3.10.0-897.el7.x86_64 #1 SMP Fri Jun 1 06:53:19
                              EDT 2018 x86_64 x86_64
Alert Count                   1
First Seen                    2018-06-20 03:10:59 EDT
Last Seen                     2018-06-20 03:10:59 EDT
Local ID                      a6597d31-6524-43cf-8fe8-d3c828d45716

Raw Audit Messages
type=AVC msg=audit(1529478659.296:75983): avc:  denied  { map } for  pid=17144 comm="glusterfsd" path="/var/tmp/gv1/.glusterfs/gv1.db-shm" dev="dm-0" ino=101174444 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:user_tmp_t:s0 tclass=file permissive=0


Hash: glusterfsd,glusterd_t,user_tmp_t,file,map
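
For reference, the audit2allow workaround suggested by sealert above builds a small local policy module from this AVC. A hand-written equivalent would look roughly like the sketch below; the module name comes from the sealert suggestion and the allow rule is inferred from the raw AVC message, so this is only a temporary local workaround, not the selinux-policy fix tracked by this bug:

# cat > my-glusterfsd.te <<'EOF'
module my-glusterfsd 1.0;

require {
        type glusterd_t;
        type user_tmp_t;
        class file map;
}

# Allow glusterfsd (glusterd_t) to mmap brick files that ended up labeled
# user_tmp_t, matching the denied { map } access recorded above.
allow glusterd_t user_tmp_t:file map;
EOF
# checkmodule -M -m -o my-glusterfsd.mod my-glusterfsd.te
# semodule_package -o my-glusterfsd.pp -m my-glusterfsd.mod
# semodule -i my-glusterfsd.pp
# semodule -r my-glusterfsd    # drop the workaround once a fixed selinux-policy is installed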

Actual results:
As shown in steps 2 and 3: the volume start fails with "Commit failed on localhost" and SELinux logs a map denial.

Expected results:
The gluster volume starts successfully in step 2.

Additional info:
The bug is not reproduced with selinux-policy-3.13.1-204.el7.noarch and glusterfs-fuse-3.12.2-11.el7rhgs.x86_64.
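
The AVC above shows that the brick files created under /var/tmp end up with the user_tmp_t type, which this policy build does not allow glusterd_t to mmap. A quick way to inspect the labels on the brick path (a diagnostic sketch, assuming the brick location from the steps above):

# ls -dZ /var/tmp/gv1
# ls -Z /var/tmp/gv1/.glusterfs
# matchpathcon /var/tmp/gv1/.glusterfs/gv1.db-shm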

Comment 11 Yanqiu Zhang 2018-07-12 09:54:39 UTC
This issue was reproduced in the latest libvirt automated testing, but only the first volume start fails; a second volume start under the same directory succeeds.

Pkgs version:
libvirt-4.5.0-2.el7.x86_64
qemu-kvm-rhev-2.12.0-7.el7.x86_64
kernel-3.10.0-919.el7.x86_64

# rpm -qa|grep gluster
glusterfs-libs-3.12.2-13.el7rhgs.x86_64
glusterfs-cli-3.12.2-13.el7rhgs.x86_64
glusterfs-server-3.12.2-13.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-2.el7.x86_64
glusterfs-3.12.2-13.el7rhgs.x86_64
glusterfs-api-3.12.2-13.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-13.el7rhgs.x86_64
glusterfs-fuse-3.12.2-13.el7rhgs.x86_64

Steps to reproduce:
0. Add "option rpc-auth-allow-insecure on" into /etc/glusterfs/glusterd.vol and Start the glusterd service.
# service glusterd restart
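
With the stock layout, the option from step 0 goes into the management volume stanza of /etc/glusterfs/glusterd.vol, roughly like this (a sketch, other default option lines trimmed):

volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    ...
    option rpc-auth-allow-insecure on
end-volume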

Scenario 1:
1. # gluster volume create gluster-vol1 `hostname`:/br1 force
# gluster volume set gluster-vol1 server.allow-insecure on
# gluster volume info
Volume Name: gluster-vol1
Type: Distribute
Volume ID: 7e186918-0467-4c45-a38a-ba2a12fbc4c9
Status: Created
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: $hostname:/br1
Options Reconfigured:
server.allow-insecure: on
transport.address-family: inet
nfs.disable: on

2. # gluster volume start gluster-vol1
volume start: gluster-vol1: failed: Commit failed on localhost. Please check log file for details.

#  grep "setroubleshoot:" gluster-vol-start-fail/br1/messages 
Jul 12 04:48:25 lenovo-*** setroubleshoot: SELinux is preventing /usr/sbin/rsyslogd from unlink access on the file imjournal.state. For complete SELinux messages run: sealert -l b85a8d21-6612-497b-86d2-4bf1f6854659
Jul 12 04:49:33 lenovo-*** setroubleshoot: SELinux is preventing /usr/sbin/glusterfsd from name_bind access on the tcp_socket port 61000. For complete SELinux messages run: sealert -l 26701082-9505-4b95-bf97-1dc0336e5f77
Jul 12 04:49:36 lenovo-*** setroubleshoot: SELinux is preventing glusterepoll0 from map access on the file /br1/.glusterfs/br1.db-shm. For complete SELinux messages run: sealert -l 6b249231-0a42-4451-868e-ed1522300ef9
...
# sealert -l 6b249231-0a42-4451-868e-ed1522300ef9
SELinux is preventing glusterepoll0 from map access on the file /br1/.glusterfs/br1.db-shm.

*****  Plugin restorecon (99.5 confidence) suggests   ************************

If you want to fix the label. 
/br1/.glusterfs/br1.db-shm default label should be default_t.
Then you can run restorecon. The access attempt may have been stopped due to insufficient permissions to access a parent directory in which case try to change the following command accordingly.
Do
# /sbin/restorecon -v /br1/.glusterfs/br1.db-shm

*****  Plugin catchall (1.49 confidence) suggests   **************************

If you believe that glusterepoll0 should be allowed map access on the br1.db-shm file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'glusterepoll0' --raw | audit2allow -M my-glusterepoll0
# semodule -i my-glusterepoll0.pp


Additional Information:
Source Context                system_u:system_r:glusterd_t:s0
Target Context                system_u:object_r:root_t:s0
Target Objects                /br1/.glusterfs/br1.db-shm [ file ]
Source                        glusterepoll0
Source Path                   glusterepoll0
Port                          <Unknown>
Host                          lenovo-***
Source RPM Packages           
Target RPM Packages           
Policy RPM                    selinux-policy-3.13.1-207.el7.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     lenovo-***
Platform                      Linux lenovo-***
                              3.10.0-919.el7.x86_64 #1 SMP Wed Jul 4 10:42:36
                              EDT 2018 x86_64 x86_64
Alert Count                   2
First Seen                    2018-07-12 04:49:32 EDT
Last Seen                     2018-07-12 04:49:32 EDT
Local ID                      6b249231-0a42-4451-868e-ed1522300ef9

Raw Audit Messages
type=AVC msg=audit(1531385372.302:3269): avc:  denied  { map } for  pid=112002 comm="glusterepoll0" path="/br1/.glusterfs/br1.db-shm" dev="dm-0" ino=67281044 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=file permissive=0


Hash: glusterepoll0,glusterd_t,root_t,file,map

# grep "avc:  denied" gluster-vol-start-fail/br1/audit.log 
type=AVC msg=audit(1531385305.050:3265): avc:  denied  { unlink } for  pid=111332 comm="in:imjournal" name="imjournal.state" dev="dm-0" ino=53094 scontext=system_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=0
type=AVC msg=audit(1531385372.084:3267): avc:  denied  { name_bind } for  pid=111876 comm="glustersproc1" src=61000 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:ephemeral_port_t:s0 tclass=tcp_socket permissive=0
type=AVC msg=audit(1531385372.302:3268): avc:  denied  { map } for  pid=112002 comm="glusterepoll0" path="/br1/.glusterfs/br1.db-shm" dev="dm-0" ino=67281044 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=file permissive=0
...
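
Besides the map denial, the audit log above also records a name_bind denial for TCP port 61000, which the loaded policy labels ephemeral_port_t. A quick way to check which port types the policy already associates with gluster and what range is mapped to ephemeral_port_t (purely diagnostic, not part of the fix):

# semanage port -l | grep -i gluster
# semanage port -l | grep ephemeral_port_t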

3. # gluster volume start gluster-vol1
volume start: gluster-vol1: success
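
For bricks in non-standard locations such as /br1, the restorecon suggestion above would only reset the path to its policy default (default_t here). A more durable approach is to register a persistent file-context rule for the brick path and relabel it; this is a sketch that assumes the targeted policy provides a dedicated brick type (glusterd_brick_t), and whether it also avoids the map denial depends on the installed selinux-policy build:

# semanage fcontext -a -t glusterd_brick_t "/br1(/.*)?"
# restorecon -Rv /br1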


Scenario 2:
1. # mkdir -p /var/tmp/gv1
# gluster volume create gv1 transport tcp  `hostname`:/var/tmp/gv1  force
# gluster volume info
Volume Name: gv1
Type: Distribute
Volume ID: 114f0704-8ee4-467d-9f9d-06465e09301d
Status: Created
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: lenovo-***:/var/tmp/gv1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

2. # gluster volume start gv1
volume start: gv1: failed: Commit failed on localhost. Please check log file for details.

3. # gluster volume start gv1
volume start: gv1: success

#  grep "setroubleshoot:" gluster-vol-start-fail/gv1/messages 
Jul 12 04:55:45 lenovo-*** setroubleshoot: SELinux is preventing /usr/sbin/glusterfsd from map access on the file /var/tmp/gv1/.glusterfs/gv1.db-shm. For complete SELinux messages run: sealert -l 671147f4-c0c3-4a1b-83b1-9d466d2a84c9
...
Jul 12 04:55:48 lenovo-*** setroubleshoot: SELinux is preventing /usr/sbin/rsyslogd from unlink access on the file imjournal.state. For complete SELinux messages run: sealert -l b85a8d21-6612-497b-86d2-4bf1f6854659
...

# grep "avc:  denied" gluster-vol-start-fail/gv1/audit.log 
type=AVC msg=audit(1531385741.742:3272): avc:  denied  { map } for  pid=112447 comm="glusterepoll0" path="/var/tmp/gv1/.glusterfs/gv1.db-shm" dev="dm-0" ino=58741691 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:user_tmp_t:s0 tclass=file permissive=0
...
type=AVC msg=audit(1531385745.229:3274): avc:  denied  { unlink } for  pid=111332 comm="in:imjournal" name="imjournal.state" dev="dm-0" ino=53094 scontext=system_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=0
...

Comment 12 Yanqiu Zhang 2018-07-12 10:11:55 UTC
Created attachment 1458336 [details]
syslog_audit_gluster-vol-start-fail-3.12.2-13

Comment 13 Yanqiu Zhang 2018-07-12 10:15:13 UTC
And the selinux-policy version, FYI:
selinux-policy-3.13.1-207.el7.noarch
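
To check whether a given selinux-policy build already grants the access denied here, the loaded policy can be queried directly; a sketch assuming the sesearch utility from setools-console is installed (an empty result means the allow rule is absent):

# rpm -q selinux-policy selinux-policy-targeted
# sesearch --allow -s glusterd_t -t user_tmp_t -c file -p map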

Comment 16 Csaba Henk 2018-07-13 11:57:47 UTC
(In reply to yanqzhan from comment #11)
> This issue reproduced in latest libvirt auto testing, but it only fails the
> first start, if there is a second volume start under the directory again,
> will succeed.
> 
> Pkgs version:
> libvirt-4.5.0-2.el7.x86_64
> qemu-kvm-rhev-2.12.0-7.el7.x86_64
> kernel-3.10.0-919.el7.x86_64
> 
> # rpm -qa|grep gluster
> glusterfs-libs-3.12.2-13.el7rhgs.x86_64
> glusterfs-cli-3.12.2-13.el7rhgs.x86_64
> glusterfs-server-3.12.2-13.el7rhgs.x86_64
> libvirt-daemon-driver-storage-gluster-4.5.0-2.el7.x86_64
> glusterfs-3.12.2-13.el7rhgs.x86_64
> glusterfs-api-3.12.2-13.el7rhgs.x86_64
> glusterfs-client-xlators-3.12.2-13.el7rhgs.x86_64
> glusterfs-fuse-3.12.2-13.el7rhgs.x86_64

I can confirm that glusterfs-3.12.2-13 does not have the selinux hook scripts.

Comment 19 Xuesong Zhang 2018-07-19 05:35:04 UTC
Changing the BZ to the selinux-policy component after analyzing the actual results. The following information is provided for developer reference when debugging the issue. Thanks.

1. RHEL7.5 + Gluster3.3, PASS (test before, but not remember the detail version for selinux and gluster now).
2. RHEL7.5(selinux-policy-3.13.1-192.el7_5.5) + Gluster 3.4(glusterfs-3.12.2-13), PASS
3. RHEL7.6(selinux-policy-3.13.1-204) + Gluster 3.3(glusterfs-3.8.4-54.12), FAIL
4. RHEL7.6(selinux-policy-3.13.1-204) + Gluster 3.4(glusterfs-3.12.2-13), FAIL

Comment 24 Yanqiu Zhang 2018-07-24 07:12:22 UTC
Set 'Regression' keyword per comment 19.

Comment 30 errata-xmlrpc 2018-10-30 10:05:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3111

