Bug 980683 - AVC denial with parsing the volfile failed message
Status: CLOSED ERRATA
Product: Fedora
Classification: Fedora
Component: selinux-policy
Version: 19
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: Miroslav Grepl
QA Contact: Fedora Extras Quality Assurance
Reported: 2013-07-02 22:49 EDT by Chris Murphy
Modified: 2013-07-20 05:33 EDT
CC: 9 users

Fixed In Version: selinux-policy-3.12.1-65.fc19
Doc Type: Bug Fix
Last Closed: 2013-07-20 05:33:10 EDT
Type: Bug
Attachments
qemu command line to start the VM (1.48 KB, text/plain), 2013-07-02 22:49 EDT, Chris Murphy
f19sv1.xml (3.49 KB, text/plain), 2013-07-02 22:50 EDT, Chris Murphy

Description Chris Murphy 2013-07-02 22:49:49 EDT
Created attachment 768044 [details]
qemu command line to start the VM

Description of problem:
In a qemu/kvm VM, after installing glusterfs-server and geo-replication but configuring nothing else, I'm getting an AVC denial for glusterfsd; systemd reports that glusterd.service entered the failed state, and glusterfsd reports "ERROR: parsing the volfile failed."

Version-Release number of selected component (if applicable):
glusterfs-3.4.0-0.6.beta3.fc19.x86_64
glusterfs-api-3.4.0-0.6.beta3.fc19.x86_64
glusterfs-fuse-3.4.0-0.6.beta3.fc19.x86_64
glusterfs-geo-replication-3.4.0-0.6.beta3.fc19.x86_64
glusterfs-server-3.4.0-0.6.beta3.fc19.x86_64
kernel-3.9.8-300.fc19.x86_64

The host is also running Fedora 19 with just these items:
glusterfs-fuse-3.4.0-0.6.beta3.fc19.x86_64
glusterfs-3.4.0-0.6.beta3.fc19.x86_64
glusterfs-api-3.4.0-0.6.beta3.fc19.x86_64
kernel-3.10.0-1.fc20.x86_64


How reproducible:
Always, every boot.

Steps to Reproduce:
1. Fedora 19 host, default installation from DVD, plus kernel-3.10.0-1.fc20.x86_64 from koji, virsh, and virt-manager.

2. Fedora 19 qemu/kvm VM, netinst using the Infrastructure Server package set, adding glusterfs, glusterfs-geo-replication, and glusterfs-server.

3. Boot the VM.

Actual results:

[root@f19sv1 ~]# ausearch -m AVC
----
time->Tue Jul  2 17:09:22 2013
type=SYSCALL msg=audit(1372806562.179:32): arch=c000003e syscall=49 success=no exit=-13 a0=9 a1=7fece95a92f8 a2=10 a3=7fffacc1f41c items=0 ppid=241 pid=248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1372806562.179:32): avc:  denied  { name_bind } for  pid=248 comm="glusterd" src=24007 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket



[root@f19sv1 ~]# journalctl -b | grep gluster
Jul 02 17:09:22 f19sv1.local systemd[1]: glusterd.service: control process exited, code=exited status=1
Jul 02 17:09:22 f19sv1.local systemd[1]: Unit glusterd.service entered failed state.
Jul 02 17:09:22 f19sv1.local glusterfsd[253]: [2013-07-02 23:09:22.289434] C [glusterfsd.c:1374:parse_cmdline] 0-glusterfs: ERROR: parsing the volfile failed (No such file or directory)
Jul 02 17:09:22 f19sv1.local GlusterFS[253]: [2013-07-02 23:09:22.289434] C [glusterfsd.c:1374:parse_cmdline] 0-glusterfs: ERROR: parsing the volfile failed (No such file or directory)
Jul 02 17:09:22 f19sv1.local glusterfsd[253]: USAGE: /usr/sbin/glusterfsd [options] [mountpoint]
Jul 02 17:09:22 f19sv1.local systemd[1]: glusterfsd.service: control process exited, code=exited status=255
Jul 02 17:09:22 f19sv1.local systemd[1]: Unit glusterfsd.service entered failed state.


Expected results:
I'm not expecting the AVC denial in any case. I'm uncertain whether the denial is the cause of the glusterd.service failure, or whether the service failure causes a subsequent AVC denial.
Comment 1 Chris Murphy 2013-07-02 22:50:58 EDT
Created attachment 768045 [details]
f19sv1.xml

The xml file from 'virsh dumpxml <vmname>'
Comment 2 Niels de Vos 2013-07-03 12:40:38 EDT
glusterd.service fails to start glusterd (the management daemon) successfully
because it cannot listen on tcp/24007.

sealert tells me that there is no glusterfs_port_t that can be used to allow
this particular access. This should probably be added to the selinux-policy.

However, allowing listening on tcp/24007 alone is not sufficient. There are many more requirements to fulfill before Gluster can work completely with SELinux in Enforcing mode:
- Gluster will also execute rpc.statd as part of the integrated NFS-server
- glusterfsd processes for the exported bricks will listen on tcp/49152 and
  higher (each additional brick takes the next port, +1)
- Gluster comes with its own replacement for rpc.mountd and NLM, listening on
  tcp/38465, tcp/38466, tcp/38468, tcp/38469 and a privileged tcp port and a
  privileged udp port
- UNIX domain sockets are used for communication; these are located under /run
  and have a <hash>.socket name
- ... probably more
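If the policy fix introduces the glusterfs_port_t type that sealert found missing, the TCP ports listed above could in principle be labeled with it. A minimal sketch, as a dry run that only echoes the commands; the use of glusterfs_port_t for all these ports, and the 49152-49251 window, are assumptions, not something the policy update is confirmed to do:

```shell
#!/bin/sh
# Sketch (assumption): label the TCP ports from the list above with the
# glusterfs_port_t type. Dry run: commands are echoed, not executed; drop
# the echo and run as root (with semanage installed) to apply them.
cmds=""
for port in 24007 38465 38466 38468 38469 49152-49251; do
    cmd="semanage port -a -t glusterfs_port_t -p tcp $port"
    cmds="$cmds$cmd
"
    echo "$cmd"
done
```

The 49152-49251 range is an arbitrary 100-port window standing in for "tcp/49152 and higher"; the real upper bound depends on the number of bricks.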


SELinux is preventing /usr/sbin/glusterfsd from name_bind access on the tcp_socket.

*****  Plugin bind_ports (92.2 confidence) suggests  *************************

If you want to allow /usr/sbin/glusterfsd to bind to network port 24007
Then you need to modify the port type.
Do
# semanage port -a -t PORT_TYPE -p tcp 24007
    where PORT_TYPE is one of the following: agentx_port_t, apertus_ldp_port_t, audit_port_t, auth_port_t, bgp_port_t, chronyd_port_t, comsat_port_t, dhcpc_port_t, dhcpd_port_t, dns_port_t, echo_port_t, efs_port_t, epmap_port_t, fingerd_port_t, flash_port_t, ftp_data_port_t, ftp_port_t, gopher_port_t, hi_reserved_port_t, http_port_t, inetd_child_port_t, innd_port_t, ipmi_port_t, ipp_port_t, isakmp_port_t, kerberos_admin_port_t, kerberos_password_port_t, kerberos_port_t, kprop_port_t, ktalkd_port_t, ldap_port_t, lmtp_port_t, nmbd_port_t, ntp_port_t, openshift_port_t, pop_port_t, portmap_port_t, printer_port_t, reserved_port_t, rlogind_port_t, rndc_port_t, router_port_t, rsh_port_t, rsync_port_t, rtsp_port_t, rwho_port_t, smbd_port_t, smtp_port_t, snmp_port_t, spamd_port_t, ssh_port_t, svrloc_port_t, swat_port_t, syslogd_port_t, telnetd_port_t, tftp_port_t, time_port_t, uucpd_port_t, whois_port_t, xdmcp_port_t, zarafa_port_t.

*****  Plugin catchall (100. confidence) suggests  ***************************

If you believe that rpc.statd should have the setuid capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# grep rpc.statd /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp


Additional Information:
Source Context                system_u:system_r:glusterd_t:s0
Target Context                system_u:system_r:glusterd_t:s0
Target Objects                 [ capability ]
Source                        rpc.statd
Source Path                   /usr/sbin/rpc.statd
Port                          <Unknown>
Host                          vm130-40.seg1.gsslab.fab.redhat.com
Source RPM Packages           nfs-utils-1.2.8-2.0.fc19.x86_64
Target RPM Packages           
Policy RPM                    selinux-policy-3.12.1-54.fc19.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Permissive
Host Name                     vm130-40.seg1.gsslab.fab.redhat.com
Platform                      Linux vm130-40.seg1.gsslab.fab.redhat.com
                              3.9.8-300.fc19.x86_64 #1 SMP Thu Jun 27 19:24:23
                              UTC 2013 x86_64 x86_64
Alert Count                   1
First Seen                    2013-07-03 16:18:43 BST
Last Seen                     2013-07-03 16:18:43 BST
Local ID                      6fc465d9-406f-421e-bae2-763765d69261

Raw Audit Messages
type=AVC msg=audit(1372864723.282:573): avc:  denied  { setuid } for  pid=1017 comm="rpc.statd" capability=7  scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:glusterd_t:s0 tclass=capability


type=SYSCALL msg=audit(1372864723.282:573): arch=x86_64 syscall=setuid success=yes exit=0 a0=1d a1=1d a2=7f04cf1012e0 a3=7f04cf1012e0 items=0 ppid=1016 pid=1017 auid=4294967295 uid=29 gid=29 euid=29 suid=29 fsuid=29 egid=29 sgid=29 fsgid=29 ses=4294967295 tty=(none) comm=rpc.statd exe=/usr/sbin/rpc.statd subj=system_u:system_r:glusterd_t:s0 key=(null)

Hash: rpc.statd,glusterd_t,glusterd_t,capability,setuid

SELinux is preventing /usr/sbin/rpc.statd from using the setcap access on a process.

*****  Plugin catchall (100. confidence) suggests  ***************************

If you believe that rpc.statd should be allowed setcap access on processes labeled glusterd_t by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# grep rpc.statd /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp


Additional Information:
Source Context                system_u:system_r:glusterd_t:s0
Target Context                system_u:system_r:glusterd_t:s0
Target Objects                 [ process ]
Source                        rpc.statd
Source Path                   /usr/sbin/rpc.statd
Port                          <Unknown>
Host                          vm130-40.seg1.gsslab.fab.redhat.com
Source RPM Packages           nfs-utils-1.2.8-2.0.fc19.x86_64
Target RPM Packages           
Policy RPM                    selinux-policy-3.12.1-54.fc19.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Permissive
Host Name                     vm130-40.seg1.gsslab.fab.redhat.com
Platform                      Linux vm130-40.seg1.gsslab.fab.redhat.com
                              3.9.8-300.fc19.x86_64 #1 SMP Thu Jun 27 19:24:23
                              UTC 2013 x86_64 x86_64
Alert Count                   1
First Seen                    2013-07-03 16:18:43 BST
Last Seen                     2013-07-03 16:18:43 BST
Local ID                      7b97093a-ed41-421a-a241-9f8b88ac489c

Raw Audit Messages
type=AVC msg=audit(1372864723.282:574): avc:  denied  { setcap } for  pid=1017 comm="rpc.statd" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:glusterd_t:s0 tclass=process


type=SYSCALL msg=audit(1372864723.282:574): arch=x86_64 syscall=capset success=yes exit=0 a0=7f04d2204594 a1=7f04d220459c a2=7f04d0aa81e3 a3=7fff33e3f1e0 items=0 ppid=1016 pid=1017 auid=4294967295 uid=29 gid=29 euid=29 suid=29 fsuid=29 egid=29 sgid=29 fsgid=29 ses=4294967295 tty=(none) comm=rpc.statd exe=/usr/sbin/rpc.statd subj=system_u:system_r:glusterd_t:s0 key=(null)

Hash: rpc.statd,glusterd_t,glusterd_t,process,setcap
Comment 3 Miroslav Grepl 2013-07-12 05:04:11 EDT
Thank you for your description. I added some fixes to selinux-policy-3.12.1-64.fc19.

So we will see if we get more AVC msgs.
Comment 4 Fedora Update System 2013-07-17 07:52:50 EDT
selinux-policy-3.12.1-65.fc19 has been submitted as an update for Fedora 19.
https://admin.fedoraproject.org/updates/selinux-policy-3.12.1-65.fc19
Comment 5 Chris Murphy 2013-07-17 18:38:43 EDT
Still a problem with:
selinux-policy-3.12.1-65.fc19.noarch
glusterfs-3.4.0-0.9.beta4.fc19.x86_64

time->Wed Jul 17 16:32:05 2013
type=SYSCALL msg=audit(1374100325.438:29): arch=c000003e syscall=49 success=yes exit=0 a0=9 a1=7f0c1d6f92f8 a2=10 a3=7fff89fcd5bc items=0 ppid=239 pid=247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1374100325.438:29): avc:  denied  { name_bind } for  pid=247 comm="glusterd" src=24007 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket


-- Logs begin at Tue 2013-07-02 15:35:43 MDT, end at Wed 2013-07-17 16:34:48 MDT. --
Jul 17 16:34:30 f19sv1.local systemd[1]: Starting GlusterFS an clustered file-system server...
Jul 17 16:34:30 f19sv1.local glusterfsd[282]: [2013-07-17 22:34:30.434788] C [glusterfsd.c:1374:parse_cmdline] 0-glusterfs: ERROR: parsing the volfile failed (No such file or directory)
Jul 17 16:34:30 f19sv1.local GlusterFS[282]: [2013-07-17 22:34:30.434788] C [glusterfsd.c:1374:parse_cmdline] 0-glusterfs: ERROR: parsing the volfile failed (No such file or directory)
Jul 17 16:34:30 f19sv1.local glusterfsd[282]: USAGE: /usr/sbin/glusterfsd [options] [mountpoint]
Jul 17 16:34:30 f19sv1.local systemd[1]: glusterfsd.service: control process exited, code=exited status=255
Jul 17 16:34:30 f19sv1.local systemd[1]: Failed to start GlusterFS an clustered file-system server.
Jul 17 16:34:30 f19sv1.local systemd[1]: Unit glusterfsd.service entered failed state.
Comment 6 Chris Murphy 2013-07-17 18:42:39 EDT
After 'restorecon -R -v /' I no longer get the AVC message with ausearch -m avc. However, I still get the failure-to-start message in journalctl -b -u glusterfsd.
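For anyone in the same state, a small sketch of a check to run before restarting the service: list the SELinux port labels and look for a gluster-specific type on tcp/24007. The grep pattern is a guess at how the new port type is named, and the script is guarded so it degrades on hosts without SELinux tooling:

```shell
#!/bin/sh
# Sketch (assumption): after updating selinux-policy, confirm that a
# gluster port type is defined before restarting glusterd.service.
if command -v semanage >/dev/null 2>&1; then
    result=$(semanage port -l | grep -i gluster || echo "no gluster port type defined")
else
    result="semanage not available; run this on an SELinux-enabled host"
fi
echo "$result"
```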
Comment 7 Fedora Update System 2013-07-18 02:00:15 EDT
Package selinux-policy-3.12.1-65.fc19:
* should fix your issue,
* was pushed to the Fedora 19 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing selinux-policy-3.12.1-65.fc19'
as soon as you are able to.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2013-13172/selinux-policy-3.12.1-65.fc19
then log in and leave karma (feedback).
Comment 8 Fedora Update System 2013-07-20 05:33:10 EDT
selinux-policy-3.12.1-65.fc19 has been pushed to the Fedora 19 stable repository.  If problems still persist, please make note of it in this bug report.
