Bug 1369420 - AVC denial message getting related to glusterd in the audit.log
Summary: AVC denial message getting related to glusterd in the audit.log
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Atin Mukherjee
QA Contact: Bala Konda Reddy M
URL:
Whiteboard: rebase
Duplicates: 1452699 (view as bug list)
Depends On:
Blocks: 1351530 1503135 1542847
 
Reported: 2016-08-23 11:27 UTC by Byreddy
Modified: 2018-12-19 15:47 UTC
CC List: 13 users

Fixed In Version: glusterfs-3.12.2-1
Doc Type: Bug Fix
Doc Text:
Previously, when the glusterd service was restarted, an AVC denial message was logged for port 61000. With this fix, configuring max-port in glusterd.vol below 61000 prevents the AVC denial message from appearing.
Clone Of:
Clones: 1542847 (view as bug list)
Environment:
Last Closed: 2018-09-04 06:29:44 UTC
Target Upstream Version:


Attachments


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2018:2607 None None None 2018-09-04 06:31:17 UTC
Red Hat Bugzilla 1449867 None CLOSED [GSS] glusterd fails to start 2019-05-28 12:32:04 UTC
Red Hat Bugzilla 1514098 None CLOSED SELinux support for RHGS WA tracker BZ 2019-05-28 12:32:04 UTC
Red Hat Knowledge Base (Solution) 3754151 None None None 2018-12-19 15:47:05 UTC

Internal Links: 1449867 1514098

Description Byreddy 2016-08-23 11:27:51 UTC
Description of problem:
========================
After updating a RHEL 7.2 node running RHGS to RHEL 7.3, the below AVC message appears after a node reboot or glusterd restart.

type=AVC msg=audit(1471946614.154:109): avc:  denied  { name_bind } for  pid=2302 comm="glusterd" src=61000 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:ephemeral_port_t:s0 tclass=tcp_socket

This happens with a layered installation as well.


Version-Release number of selected component (if applicable):
===============================================================
RHEL: 7.3 ( 3.10.0-493.el7.x86_64 )
RHGS: glusterfs-3.7.9-10.

How reproducible:
=================
Always


Steps to Reproduce:
===================
1. Have a RHEL 7.2 node with RHGS 3.1.3 (glusterfs-3.7.9-10)
2. Create a simple Distribute volume and start it
3. Update RHEL from 7.2 to 7.3
4. Reboot the node for the kernel update
5. Check the audit log for AVC messages ( grep -ri "avc" /var/log/audit/audit.log )

Now, after every glusterd restart, you will see the AVC denial message related to glusterd.
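Once you have grepped the record out of audit.log, the interesting field is the source port. A minimal sketch of pulling it out of a raw AVC line (the regex and helper name are illustrative, not part of any audit tooling):

```python
import re

# Extract the denied permission and source port from an audit.log AVC record.
AVC_RE = re.compile(r'avc:\s+denied\s+\{ (?P<perm>\w+) \}.*?src=(?P<src>\d+)')

def parse_avc_port(line):
    """Return (permission, source port) from an AVC line, or None if absent."""
    m = AVC_RE.search(line)
    if not m:
        return None
    return m.group('perm'), int(m.group('src'))

line = ('type=AVC msg=audit(1471946614.154:109): avc:  denied  { name_bind } '
        'for  pid=2302 comm="glusterd" src=61000 '
        'scontext=system_u:system_r:glusterd_t:s0 '
        'tcontext=system_u:object_r:ephemeral_port_t:s0 tclass=tcp_socket')
print(parse_avc_port(line))  # ('name_bind', 61000)
```

For the record reproduced in this bug, the port is always 61000, which is what the analysis in comment 4 turns on.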

Actual results:
================
Getting the below AVC denial message:
type=AVC msg=audit(1471946614.154:109): avc:  denied  { name_bind } for  pid=2302 comm="glusterd" src=61000 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:ephemeral_port_t:s0 tclass=tcp_socket



Expected results:
=================
Should not get the AVC denial message after updating to RHEL 7.3.


Additional info:

Comment 2 Byreddy 2016-08-24 08:19:02 UTC
Some info:
==========
The AVC message in audit.log always reports source port 61000 after a glusterd restart.

netstat details on the node:

~]# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:752             0.0.0.0:*               LISTEN      31096/glusterfs     
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1814/sshd           
tcp        0      0 0.0.0.0:49178           0.0.0.0:*               LISTEN      29144/glusterfsd    
tcp        0      0 0.0.0.0:2049            0.0.0.0:*               LISTEN      31096/glusterfs     
tcp        0      0 0.0.0.0:38465           0.0.0.0:*               LISTEN      31096/glusterfs     
tcp        0      0 0.0.0.0:38466           0.0.0.0:*               LISTEN      31096/glusterfs     
tcp        0      0 0.0.0.0:16514           0.0.0.0:*               LISTEN      28482/libvirtd      
tcp        0      0 0.0.0.0:5666            0.0.0.0:*               LISTEN      1799/nrpe           
tcp        0      0 0.0.0.0:38468           0.0.0.0:*               LISTEN      31096/glusterfs     
tcp        0      0 0.0.0.0:38469           0.0.0.0:*               LISTEN      31096/glusterfs     
tcp        0      0 0.0.0.0:46405           0.0.0.0:*               LISTEN      31113/rpc.statd     
tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      30956/glusterd      
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd           
tcp6       0      0 :::45237                :::*                    LISTEN      31113/rpc.statd     
tcp6       0      0 :::22                   :::*                    LISTEN      1814/sshd           
tcp6       0      0 :::16514                :::*                    LISTEN      28482/libvirtd      
tcp6       0      0 :::5666                 :::*                    LISTEN      1799/nrpe           
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd           
udp        0      0 0.0.0.0:46624           0.0.0.0:*                           1721/dhclient       
udp        0      0 0.0.0.0:625             0.0.0.0:*                           1306/rpcbind        
udp        0      0 0.0.0.0:749             0.0.0.0:*                           31096/glusterfs     
udp        0      0 127.0.0.1:766           0.0.0.0:*                           31113/rpc.statd     
udp        0      0 0.0.0.0:68              0.0.0.0:*                           1721/dhclient       
udp        0      0 0.0.0.0:111             0.0.0.0:*                           1306/rpcbind        
udp        0      0 0.0.0.0:52369           0.0.0.0:*                           31113/rpc.statd     
udp        0      0 127.0.0.1:323           0.0.0.0:*                           1339/chronyd        
udp6       0      0 :::30290                :::*                                1721/dhclient       
udp6       0      0 :::625                  :::*                                1306/rpcbind        
udp6       0      0 :::46047                :::*                                31113/rpc.statd     
udp6       0      0 :::111                  :::*                                1306/rpcbind        
udp6       0      0 ::1:323                 :::*                                1339/chronyd        
[root@ ~]#

Comment 3 Byreddy 2016-08-25 04:20:36 UTC
This issue exists on RHGS on RHEL 7.2 itself.

We do not need to update from RHEL 7.2 to 7.3 to reproduce this issue.

Just take a RHGS RHEL 7.2 node, create and start a volume, restart glusterd, and check for AVC messages in audit.log.

Comment 4 Atin Mukherjee 2016-08-25 06:09:42 UTC
The issue appears to be caused by the following entry in /proc/sys/net/ipv4/ip_local_port_range:

32768   60999

Here the upper bound of the local port range is 60999; however, gluster maintains its own port range for its portmap logic, which extends up to 65535. When the portmap table is rebuilt through pmap_registry_new() on a glusterd restart, bind() fails for port 61000.

There is an upstream patch http://review.gluster.org/#/c/14613/ posted for review which ensures that gluster uses the kernel's local port range and picks a port from that range instead of maintaining its own. Please note this patch is not committed for 3.2.0. For now, given there is no impact to functionality, this can be marked as a known issue and considered for a future release (probably 3.2.1).
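The arithmetic in this comment can be sketched as a toy model (the range values are the ones reported above; the function and variable names are illustrative, not gluster code):

```python
# Pre-fix, gluster's internal portmap walked ports up to 65535, while the
# kernel's ip_local_port_range topped out at 60999. Any port above that
# upper bound is outside the range glusterd_t is allowed to name_bind to.
GLUSTER_PMAP_MAX = 65535            # gluster's own portmap upper bound (pre-fix)
LOCAL_PORT_RANGE = (32768, 60999)   # /proc/sys/net/ipv4/ip_local_port_range

def outside_local_range(port, local_range=LOCAL_PORT_RANGE):
    """True if the port falls outside the kernel's local port range."""
    lo, hi = local_range
    return not (lo <= port <= hi)

# The first port the portmap reaches past the kernel range is 61000,
# which is why the denial is always reported with src=61000.
first_denied = next(p for p in range(LOCAL_PORT_RANGE[0], GLUSTER_PMAP_MAX + 1)
                    if outside_local_range(p))
print(first_denied)  # 61000
```

This is also why capping gluster's range at the kernel's upper bound (the approach of the upstream patch, and of the max-port setting verified in comment 22) makes the denial disappear.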

Comment 5 Atin Mukherjee 2016-08-25 06:12:17 UTC
As http://review.gluster.org/#/c/14613/ is going to fix this issue, moving the state to POST.

Comment 8 Andrew Spurrier 2016-09-24 11:17:53 UTC
Is there something I can do to convince glusterd to go back to a port below 61000 now that it has fixated on 61000?

I "was" using version 3.8.4 on Fedora 24.  Only one node has succumbed for now.
Thank you.

Comment 10 nchilaka 2016-11-14 10:08:26 UTC
I hit this bug on my systemic setup on 3.2 with the 3.8.4-5 build.
https://docs.google.com/spreadsheets/d/1iP5Mi1TewBFVh8HTmlcBm9072Bgsbgkr3CLcGmawDys/edit#gid=632186609

Comment 11 Paul Stauffer 2016-12-06 11:53:37 UTC
Also seen after upgrading to glusterfs-3.7.17-1.el7 on CentOS 7.2.1511, although it appears not to have been a fatal error; as near as I can tell, glusterd must have retried with a different port number that fell within the local port range.

Comment 14 Atin Mukherjee 2017-05-19 13:42:51 UTC
*** Bug 1452699 has been marked as a duplicate of this bug. ***

Comment 22 Bala Konda Reddy M 2018-05-02 09:24:21 UTC
Build: 3.12.2-8

Set max-port to 60999 in the glusterd.vol file and restarted glusterd. After that, no AVC denials related to glusterd were seen in audit.log.

Hence marking it as verified.
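For reference, the verified setting as a glusterd.vol fragment (a sketch: max-port is the option named in this comment, but the surrounding options shown are the usual defaults and may differ on a given node):

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option max-port 60999
end-volume
```

After editing the file, glusterd must be restarted for the setting to take effect.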

Comment 23 Srijita Mukherjee 2018-09-03 15:57:24 UTC
I have updated the doc text. Kindly review and confirm.

Comment 25 errata-xmlrpc 2018-09-04 06:29:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

