Bug 1326066 - [hc][selinux] AVC denial messages seen in audit.log while starting the volume in HCI environment
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Kaushal
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: Gluster-HC-2 1351522
 
Reported: 2016-04-11 17:12 UTC by SATHEESARAN
Modified: 2017-03-23 05:28 UTC (History)
8 users

Fixed In Version: glusterfs-3.8.4-1
Doc Type: Known Issue
Doc Text:
Cause: TBD Consequence: Workaround (if any): Result:
Clone Of:
Environment:
RHEV-RHGS HCI RHEL 7.2
Last Closed: 2017-03-23 05:28:37 UTC




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:0486 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 09:18:45 UTC

Description SATHEESARAN 2016-04-11 17:12:31 UTC
Description of problem:
-----------------------
In a RHEV-RHGS HCI (Hyperconverged Infrastructure) setup, AVC denial messages appear in audit.log soon after starting the volume

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
RHGS 3.1.3 nightly build ( glusterfs-3.7.9-1.el7rhgs )

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Install the ovirt-hosted-engine-setup package (from the rhel-7-server-rhev-mgmt-agent channel)
2. Install the gluster server and gluster geo-replication packages and their dependencies
3. Create a gluster replica 3 volume
4. Disable all the hooks except those related to geo-replication and quota
5. Optimize the volume for virt-store by applying the 'virt' group option to the volume
6. Start the volume

Actual results:
---------------
AVC denial messages are seen in audit.log

Expected results:
-----------------
No AVC denial messages should be seen

Additional info:
----------------
There is no functional disruption caused by this AVC denial.

Following are the observations:

[root@ ~]# less /var/log/audit/audit.log | audit2allow


#============= glusterd_t ==============
allow glusterd_t ovirt_vmconsole_host_port_t:tcp_socket name_bind;


AVC denial messages as seen in audit.log:

<snip>
type=AVC msg=audit(1460020906.616:1060): avc:  denied  { name_bind } for  pid=1757 comm="glusterd" src=2223 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:ovirt_vmconsole_host_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1460020906.616:1060): arch=c000003e syscall=49 success=no exit=-13 a0=f a1=7fdbf41fcf20 a2=10 a3=0 items=0 ppid=1 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
</snip>
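The fields in an AVC record like the one above are plain key=value pairs and can be pulled apart mechanically; the snippet below is an illustrative sketch (the regex and quote handling are assumptions, not an official audit parser):

```python
import re

# The AVC record from the snippet above, reproduced verbatim.
avc = ('type=AVC msg=audit(1460020906.616:1060): avc:  denied  '
      '{ name_bind } for  pid=1757 comm="glusterd" src=2223 '
      'scontext=system_u:system_r:glusterd_t:s0 '
      'tcontext=system_u:object_r:ovirt_vmconsole_host_port_t:s0 '
      'tclass=tcp_socket')

# Match key=value pairs; values may be quoted or bare tokens.
fields = {k: v.strip('"')
          for k, v in re.findall(r'(\w+)=("[^"]*"|\S+)', avc)}

print(fields["src"])       # -> 2223 (the port GlusterD tried to bind)
print(fields["tcontext"])  # -> the SELinux label that caused the denial
```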

version of other available packages
------------------------------------
qemu-img-rhev-2.3.0-31.el7_2.10.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.10.x86_64
ipxe-roms-qemu-20130517-8.gitc4bce43.el7_2.1.noarch
qemu-kvm-tools-rhev-2.3.0-31.el7_2.10.x86_64
qemu-kvm-common-rhev-2.3.0-31.el7_2.10.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.4.x86_64

libselinux-utils-2.2.2-6.el7.x86_64
selinux-policy-3.13.1-60.el7_2.3.noarch
libselinux-2.2.2-6.el7.x86_64
libselinux-python-2.2.2-6.el7.x86_64
selinux-policy-targeted-3.13.1-60.el7_2.3.noarch

vdsm-infra-4.17.23.2-1.1.el7ev.noarch
vdsm-hook-vmfex-dev-4.17.23.2-1.1.el7ev.noarch
vdsm-cli-4.17.23.2-1.1.el7ev.noarch
vdsm-python-4.17.23.2-1.1.el7ev.noarch
vdsm-jsonrpc-4.17.23.2-1.1.el7ev.noarch
vdsm-4.17.23.2-1.1.el7ev.noarch
vdsm-gluster-4.17.23.2-1.1.el7ev.noarch
vdsm-xmlrpc-4.17.23.2-1.1.el7ev.noarch
vdsm-yajsonrpc-4.17.23.2-1.1.el7ev.noarch

gluster volume info
-------------------
[root@ ~]# gluster volume info
 
Volume Name: enginevol
Type: Replicate
Volume ID: b86f0bf4-7b9f-479d-8951-1b3cc70d6691
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: x.redhat.com:/home/engine/br1
Brick2: y.redhat.com:/home/engine/br1
Brick3: z.redhat.com:/home/engine/br1
Options Reconfigured:
network.ping-timeout: 10
nfs.disable: on
performance.low-prio-threads: 32
storage.owner-gid: 36
storage.owner-uid: 36
cluster.data-self-heal-algorithm: full
features.shard-block-size: 512MB
features.shard: enable
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on

Comment 1 Kaushal 2016-04-12 07:16:44 UTC
I checked the logs of the system on which this occurred. The AVC denial happened at the moment GlusterD had a pmap_signin event. The signin event caused the portmapper table to be initialized.

When the portmap table is initialized, GlusterD tries to find all available free ports on the system by attempting to bind to every port from 0 to 65535. As part of this, it also tries to bind to port 2223, which triggers the AVC denial in the audit log.

This doesn't affect the functioning of GlusterFS in any way; the AVC denial message is benign.
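The scan described above can be sketched roughly as follows (an illustrative approximation, not GlusterD's actual pmap code; a bind blocked by SELinux policy fails with EACCES, i.e. errno 13, which matches the exit=-13 in the SYSCALL record):

```python
import errno
import socket

def scan_free_ports(start, end):
    """Portmap-style scan: attempt to bind each TCP port in [start, end]
    and record which binds succeed and which are denied outright.
    A port reserved by SELinux policy (like 2223, labeled
    ovirt_vmconsole_host_port_t) fails with EACCES; ports that are
    merely in use (EADDRINUSE) are simply skipped."""
    free, denied = [], []
    for port in range(start, end + 1):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind(("127.0.0.1", port))
            free.append(port)
        except OSError as e:
            if e.errno == errno.EACCES:
                denied.append(port)  # policy/privilege denial -> AVC in audit.log
        finally:
            s.close()
    return free, denied
```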

Comment 2 Atin Mukherjee 2016-04-12 07:23:21 UTC
I do remember a patch from Raghavendra Talur in upstream [1] where the lower limit is set to 49152. We shouldn't be seeing this issue once the patch gets rebased as part of 3.8 and eventually rhgs-3.1.2.

I am setting internal whiteboard to 3.2, any objection?


[1] http://review.gluster.org/13841
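For illustration, the effect of raising the lower limit can be sketched as follows (the constant names are hypothetical; 49152 is the start of the IANA dynamic/private port range, so labeled low ports such as 2223 fall outside the scan after the change):

```python
IANA_DYNAMIC_START = 49152   # lower bound of the scan after the patch
OLD_START = 0                # lower bound before the patch
OVIRT_VMCONSOLE_PORT = 2223  # SELinux-labeled port that triggered the AVC

def scanned(port, start):
    """True if the free-port scan starting at `start` would touch `port`."""
    return start <= port <= 65535

# Before the patch the scan touched the labeled port; afterwards it does not.
print(scanned(OVIRT_VMCONSOLE_PORT, OLD_START))           # -> True
print(scanned(OVIRT_VMCONSOLE_PORT, IANA_DYNAMIC_START))  # -> False
```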

Comment 3 SATHEESARAN 2016-04-12 07:29:21 UTC
(In reply to Atin Mukherjee from comment #2)
> I do remember a patch from Raghavendra Talur in upstream [1] where the lower
> limit is set to 49152. We shouldn't be seeing this issue once the patch gets
> rebased as part of 3.8 and eventually rhgs-3.1.2.
> 
> I am setting internal whiteboard to 3.2, any objection?
> 
> 
> [1] http://review.gluster.org/13841

Looks ok to move this fix to 3.2.
I am marking this issue as a known issue for the RHEV-RHGS HCI LA release (RHGS 3.1.3), so that admins do not misinterpret this AVC denial as serious harm.

Comment 5 Atin Mukherjee 2016-09-17 14:54:39 UTC
Upstream mainline : http://review.gluster.org/13841
Upstream 3.8 : Available as part of branching from mainline

And the fix is available in rhgs-3.2.0 as part of rebase to GlusterFS 3.8.4.

Comment 11 SATHEESARAN 2016-10-25 08:01:03 UTC
No AVC denials are seen with RHGS 3.2.0 interim build ( glusterfs-3.8.4-2.el7rhgs ).


[root@ ~]# less /var/log/audit/audit.log
audit.log    audit.log.1  audit.log.2  audit.log.3  audit.log.4  

[root@ ~]# less /var/log/audit/audit.log | audit2allow 
Nothing to do

[root@ ~]# less /var/log/audit/audit.log.1 | audit2allow 
Nothing to do

[root@ ~]# less /var/log/audit/audit.log.2 | audit2allow 
Nothing to do

[root@ ~]# less /var/log/audit/audit.log.3 | audit2allow 
Nothing to do

[root@ ~]# less /var/log/audit/audit.log.4 | audit2allow 
Nothing to do

Comment 13 errata-xmlrpc 2017-03-23 05:28:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

