Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1110263

Summary: neutron-ns-metadata-proxy can't talk back to neutron-metadata-agent via unix socket
Product: Red Hat OpenStack
Reporter: Miguel Angel Ajo <majopela>
Component: openstack-selinux
Assignee: Lon Hohberger <lhh>
Status: CLOSED ERRATA
QA Contact: Ofer Blaut <oblaut>
Severity: urgent
Priority: urgent
Version: 5.0 (RHEL 7)
CC: lhh, ltoscano, mgrepl, nyechiel, oblaut, rhallise, sclewis, yeylon, yfried
Target Milestone: rc
Target Release: 5.0 (RHEL 7)
Hardware: All
OS: Linux
Fixed In Version: openstack-selinux-0.5.9-1.el7ost
Doc Type: Bug Fix
Last Closed: 2014-07-08 15:15:31 UTC
Type: Bug
Attachments:
- Selinux policy fix (flags: lhh: review? (mgrepl))
- the output of: ps -efZ
- /var/log/audit/audit.log permissive mode

Description Miguel Angel Ajo 2014-06-17 10:29:53 UTC
Description of problem:

VM instances won't be able to retrieve metadata.


on the network node you will find:

1) /var/log/neutron/neutron-ns-metadata-proxy*.log

2014-06-17 10:18:29.188 6486 TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/site-packages/eventlet/greenio.py", line 190, in connect
2014-06-17 10:18:29.188 6486 TRACE neutron.agent.metadata.namespace_proxy     while not socket_connect(fd, address):
2014-06-17 10:18:29.188 6486 TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/site-packages/eventlet/greenio.py", line 39, in socket_connect
2014-06-17 10:18:29.188 6486 TRACE neutron.agent.metadata.namespace_proxy     raise socket.error(err, errno.errorcode[err])
2014-06-17 10:18:29.188 6486 TRACE neutron.agent.metadata.namespace_proxy error: [Errno 13] EACCES


2) /var/log/messages 
Jun 17 09:37:58 networker kernel: type=1400 audit(1402997878.132:137): avc:  denied  { connectto } for  pid=5372 comm="neutron-ns-meta" path="/var/lib/neutron/metadata_proxy" scontext=system_u:system_r:neutron_t:s0 tcontext=system_u:system_r:init_t:s0 tclass=unix_stream_socket

3) /var/log/audit/audit.log

type=AVC msg=audit(1403000309.187:1174): avc:  denied  { connectto } for  pid=6486 comm="neutron-ns-meta" path="/var/lib/neutron/metadata_proxy" scontext=system_u:system_r:neutron_t:s0 tcontext=system_u:system_r:init_t:s0 tclass=unix_stream_socket
type=SYSCALL msg=audit(1403000309.187:1174): arch=c000003e syscall=42 success=no exit=-13 a0=8 a1=7fff6030e5e0 a2=21 a3=0 items=0 ppid=1 pid=6486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="neutron-ns-meta" exe="/usr/bin/python2.7" subj=system_u:system_r:neutron_t:s0 key=(null)
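The scontext/tcontext pair in these records is the key datum: the proxy runs as neutron_t but the socket's creator ran as init_t. For triage, the relevant fields can be pulled out of an audit.log with a small throwaway parser (a sketch, not part of any Red Hat tooling; field names follow the audit format shown above):

```python
import re

# Matches the key fields of a kernel AVC denial record, as pasted above.
AVC_RE = re.compile(
    r'avc:\s+denied\s+\{ (?P<perm>[^}]+) \}.*?'
    r'comm="(?P<comm>[^"]+)".*?'
    r'scontext=(?P<scontext>\S+)\s+tcontext=(?P<tcontext>\S+)\s+'
    r'tclass=(?P<tclass>\S+)'
)

def parse_avc(line):
    """Return a dict of AVC fields, or None if the line is not a denial."""
    m = AVC_RE.search(line)
    return m.groupdict() if m else None

line = ('type=AVC msg=audit(1403000309.187:1174): avc:  denied  { connectto } '
        'for  pid=6486 comm="neutron-ns-meta" '
        'path="/var/lib/neutron/metadata_proxy" '
        'scontext=system_u:system_r:neutron_t:s0 '
        'tcontext=system_u:system_r:init_t:s0 tclass=unix_stream_socket')
fields = parse_avc(line)
# Source domain neutron_t, target domain init_t: the proxy is being
# denied connectto on a socket created by an init_t process.
print(fields['scontext'], '->', fields['tcontext'])
```

Run over the whole log, this makes the source/target domain mismatch visible at a glance.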



Version-Release number of selected component (if applicable):

selinux-policy-targeted-3.12.1-153.el7_0.10.noarch
selinux-policy-3.12.1-153.el7_0.10.noarch

How reproducible:

100%

Steps to Reproduce:
1. Set SELinux to enforcing on the network node.
2. Start a VM with metadata access (CirrOS, for example).

Actual results:

The VM fails to retrieve metadata (VM console log for CirrOS):

failed 20/20: up 40.25. request failed
failed to read iid from metadata. tried 20
no results found for mode=net. up 42.26. searched: nocloud configdrive ec2
failed to get instance-id of datasource


Expected results:

Working metadata

Additional info:

[root@networker ~]# cat /var/log/audit/audit.log | audit2allow  -R

require {
	type neutron_t;
	type init_t;
	class unix_stream_socket connectto;
}

#============= neutron_t ==============
allow neutron_t init_t:unix_stream_socket connectto;

Comment 3 Lon Hohberger 2014-06-17 14:13:22 UTC
Need:

# ps -efZ 

Do we know what creates this socket file (e.g. puppet)?

Also, can you do ls -laZ /var/lib/neutron?

Comment 4 Lon Hohberger 2014-06-17 14:54:21 UTC
My feeling is that this is being created with the wrong context somehow - not that we should be allowing neutron access to initrc_t sockets unilaterally.

Comment 5 Lon Hohberger 2014-06-17 15:39:38 UTC
Ah ha. I think neutron-ns-metadata-proxy has the wrong context. I'll figure out the right one and add it to %post.

Comment 6 Lon Hohberger 2014-06-17 15:58:09 UTC
Created attachment 909678 [details]
Selinux policy fix

A fix for openstack-selinux is coming (spec file change)

Comment 7 Lon Hohberger 2014-06-17 16:57:04 UTC
*** Bug 1105850 has been marked as a duplicate of this bug. ***

Comment 9 Lon Hohberger 2014-06-17 18:23:20 UTC
Without the patch, neutron-metadata-agent runs as initrc_t.  However, I can't get /var/lib/neutron/metadata_proxy socket file to be created as initrc_t, which seems to be what the AVC is about.

Then again, I did an all-in-one installation.

Comment 10 Lon Hohberger 2014-06-17 18:56:59 UTC
So, the AVC is because neutron-metadata-proxy was run as initrc_t (due to having an on-disk bin_t label).  Thus, changing the label to neutron_exec_t (which I believe is the correct label) should resolve this.
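The %post change described here would amount to pinning the on-disk label of the proxy binary so it transitions into neutron_t instead of initrc_t. A sketch of the kind of snippet involved (path and exact commands assumed; the shipped fix may differ):

```
# openstack-selinux.spec, %post -- sketch, not the actual shipped change
semanage fcontext -a -t neutron_exec_t "/usr/bin/neutron-ns-metadata-proxy" 2>/dev/null || :
restorecon /usr/bin/neutron-ns-metadata-proxy || :
```

semanage makes the mapping persistent across relabels; restorecon applies it immediately.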

Comment 11 Lon Hohberger 2014-06-17 18:58:05 UTC
Comment on attachment 909678 [details]
Selinux policy fix

Vs. master_contrib.

We'll do more testing in the next day or so.

Comment 12 Miroslav Grepl 2014-06-20 10:15:10 UTC
Added to Fedora. Will backport to RHEL.

Comment 13 Nir Magnezi 2014-06-22 12:09:52 UTC
Reopening
Reproduced with: 5.0-RHEL-7/2014-06-20.1 - openstack-selinux-0.5.2-2.el7ost.noarch

Issue reproduced; the neutron-metadata-agent is unreachable from instances.
The log indicates the same trace mentioned in Comment #0 and in Bug #1105850 Comment #0.

ERROR neutron.agent.metadata.namespace_proxy [-] Unexpected error.
TRACE neutron.agent.metadata.namespace_proxy Traceback (most recent call last):
TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/site-packages/neutron/agent/metadata/namespace_proxy.py", line 74, in __call__
TRACE neutron.agent.metadata.namespace_proxy     req.body)
TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/site-packages/neutron/agent/metadata/namespace_proxy.py", line 105, in _proxy_request
TRACE neutron.agent.metadata.namespace_proxy     connection_type=UnixDomainHTTPConnection)
TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1605, in request
TRACE neutron.agent.metadata.namespace_proxy     (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1353, in _request
TRACE neutron.agent.metadata.namespace_proxy     (response, content) = self._conn_request(conn, request_uri, method, body, headers)
TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1327, in _conn_request
TRACE neutron.agent.metadata.namespace_proxy     conn.connect()
TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/site-packages/neutron/agent/metadata/namespace_proxy.py", line 48, in connect
TRACE neutron.agent.metadata.namespace_proxy     self.sock.connect(cfg.CONF.metadata_proxy_socket)
TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/site-packages/eventlet/greenio.py", line 190, in connect
TRACE neutron.agent.metadata.namespace_proxy     while not socket_connect(fd, address):
TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/site-packages/eventlet/greenio.py", line 39, in socket_connect
TRACE neutron.agent.metadata.namespace_proxy     raise socket.error(err, errno.errorcode[err])
TRACE neutron.agent.metadata.namespace_proxy error: [Errno 13] EACCES
TRACE neutron.agent.metadata.namespace_proxy                               
INFO neutron.wsgi [-] 192.168.77.2 - - [22/Jun/2014 15:01:33] "GET /openstack HTTP/1.1" 500 343 0.001916

Comment 14 Nir Magnezi 2014-06-22 12:12:48 UTC
Additional Info: 
[root@puma50 neutron]# grep -i avc /var/log/messages
Jun 22 09:19:06 puma50 dbus-daemon: dbus[1160]: avc:  received policyload notice (seqno=2)
Jun 22 09:19:06 puma50 dbus[1160]: avc:  received policyload notice (seqno=2)
Jun 22 09:19:19 puma50 dbus-daemon: dbus[1160]: avc:  received policyload notice (seqno=3)
Jun 22 09:19:19 puma50 dbus[1160]: avc:  received policyload notice (seqno=3)
Jun 22 09:19:22 puma50 kernel: type=1107 audit(1403417962.765:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  received policyload notice (seqno=2)
Jun 22 09:19:22 puma50 kernel: type=1107 audit(1403417962.788:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  received policyload notice (seqno=3)
Jun 22 09:19:32 puma50 dbus-daemon: dbus[1160]: avc:  received policyload notice (seqno=4)
Jun 22 09:19:32 puma50 dbus[1160]: avc:  received policyload notice (seqno=4)
Jun 22 09:19:33 puma50 dbus-daemon: dbus[1160]: avc:  received policyload notice (seqno=5)
Jun 22 09:19:33 puma50 dbus[1160]: avc:  received policyload notice (seqno=5)
Jun 22 09:19:34 puma50 dbus-daemon: dbus[1160]: avc:  received policyload notice (seqno=6)
Jun 22 09:19:34 puma50 dbus[1160]: avc:  received policyload notice (seqno=6)
Jun 22 09:19:36 puma50 dbus-daemon: dbus[1160]: avc:  received policyload notice (seqno=7)
Jun 22 09:19:36 puma50 dbus[1160]: avc:  received policyload notice (seqno=7)
Jun 22 09:19:37 puma50 dbus-daemon: dbus[1160]: avc:  received policyload notice (seqno=8)
Jun 22 09:19:37 puma50 dbus[1160]: avc:  received policyload notice (seqno=8)
Jun 22 09:19:40 puma50 kernel: type=1107 audit(1403417980.943:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  received policyload notice (seqno=4)
Jun 22 09:19:40 puma50 kernel: type=1107 audit(1403417980.966:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  received policyload notice (seqno=5)
Jun 22 09:19:40 puma50 kernel: type=1107 audit(1403417980.989:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  received policyload notice (seqno=6)
Jun 22 09:19:41 puma50 kernel: type=1107 audit(1403417981.012:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  received policyload notice (seqno=7)
Jun 22 09:19:41 puma50 kernel: type=1107 audit(1403417981.034:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  received policyload notice (seqno=8)


# ls -l /var/lib/neutron/metadata_proxy
srwxr-xr-x. 1 neutron neutron 0 Jun 22 09:23 /var/lib/neutron/metadata_proxy

# ls -laZ /var/lib/neutron
drwxr-xr-x. neutron neutron system_u:object_r:neutron_var_lib_t:s0 .
drwxr-xr-x. root    root    system_u:object_r:var_lib_t:s0   ..
drwxr-xr-x. neutron neutron system_u:object_r:neutron_var_lib_t:s0 dhcp
drwxr-xr-x. neutron neutron system_u:object_r:neutron_var_lib_t:s0 external
drwxr-xr-x. neutron neutron system_u:object_r:neutron_var_lib_t:s0 lock
srwxr-xr-x. neutron neutron system_u:object_r:neutron_var_lib_t:s0 metadata_proxy
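For context, the failing call chain in namespace_proxy.py is an ordinary AF_UNIX stream connect. A minimal standalone sketch of the same client/server handshake (temporary socket path, no SELinux involved); under the denied policy it is the connect() call here that surfaces as "error: [Errno 13] EACCES":

```python
import os
import socket
import tempfile
import threading

# Stand-in for the agent side: listen on a unix stream socket, as
# neutron-metadata-agent does on /var/lib/neutron/metadata_proxy.
sock_path = os.path.join(tempfile.mkdtemp(), 'metadata_proxy')
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)

def serve_once():
    conn, _ = server.accept()
    conn.sendall(b'HTTP/1.1 200 OK\r\n\r\n')
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# Stand-in for the proxy side: the connectto denial turns this
# connect() into EACCES, regardless of the rwx bits on the socket.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(sock_path)
reply = client.recv(1024)
client.close()
t.join()
server.close()
```

Note that the file permissions above (srwxr-xr-x) are irrelevant here: SELinux checks connectto between the two process domains, which is why the denial persists with a world-connectable socket.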

Comment 15 Nir Magnezi 2014-06-22 12:13:46 UTC
Created attachment 911135 [details]
the output of: ps -efZ

Comment 16 Nir Magnezi 2014-06-22 12:17:16 UTC
When I switch off SELinux, instances are able to reach the neutron metadata service.

Comment 17 Ofer Blaut 2014-06-23 04:16:34 UTC
From audit.log

type=AVC msg=audit(1403470895.273:3249): avc:  denied  { connectto } for  pid=17966 comm="neutron-ns-meta" path="/var/lib/neutron/metadata_proxy" scontext=system_u:system_r:neutron_t:s0 tcontext=system_u:system_r:neutron_t:s0 tclass=unix_stream_socket
type=AVC msg=audit(1403470895.273:3250): avc:  denied  { connectto } for  pid=17966 comm="neutron-ns-meta" path="/var/lib/neutron/metadata_proxy" scontext=system_u:system_r:neutron_t:s0 tcontext=system_u:system_r:neutron_t:s0 tclass=unix_stream_socket

Comment 18 Miroslav Grepl 2014-06-23 07:48:14 UTC
commit 907ac5e4c2399491466c35c2676b918be85f4786
Author: Miroslav Grepl <mgrepl>
Date:   Mon Jun 23 09:47:40 2014 +0200

    Allow neutron-ns-metadata to connectto own unix stream socket
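In raw TE syntax, the rule implied by that commit message presumably boils down to a self rule of this shape (sketch inferred from the commit subject, not the verified patch contents):

```
# let neutron_t processes connect to unix stream sockets
# created by other neutron_t processes
allow neutron_t self:unix_stream_socket connectto;
```

This matches the neutron_t-to-neutron_t denial in comment 17, as opposed to the earlier neutron_t-to-init_t denial, which was a labeling problem rather than a missing rule.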

Comment 29 yfried 2014-06-30 14:52:56 UTC
Created attachment 913437 [details]
/var/log/audit/audit.log enforcing mode

Comment 30 yfried 2014-06-30 14:54:24 UTC
Created attachment 913439 [details]
/var/log/audit/audit.log permissive mode

Comment 31 Miroslav Grepl 2014-06-30 14:57:13 UTC
OK, this is RHEL 7, where we don't yet have the fixes that are already in Fedora.

Comment 32 Ryan Hallisey 2014-06-30 17:22:07 UTC
corenet_tcp_connect_all_ports(neutron_t)

Should cover everything. Will be added to the new build.

Comment 33 Lon Hohberger 2014-07-01 14:47:20 UTC
The os-neutron module wasn't being installed by openstack-selinux. Easy fix.

Comment 34 Ofer Blaut 2014-07-01 20:37:09 UTC
still not working

rt_t:s0 tclass=tcp_socket
type=AVC msg=audit(1404246480.745:10852): avc:  denied  { name_connect } for  pid=8840 comm="neutron-metadat" dest=9696 scontext=system_u:system_r:neutron_t:s0 tcontext=system_u:object_r:neutron_port_t:s0 tclass=tcp_socket
type=AVC msg=audit(1404246480.816:10853): avc:  denied  { name_connect } for  pid=8840 comm="neutron-metadat" dest=9696 scontext=system_u:system_r:neutron_t:s0 tcontext=system_u:object_r:neutron_port_t:s0 tclass=tcp_socket
[root@puma04 ~(keystone_admin_tenant1)]$rpm -qa | grep selinux
selinux-policy-3.12.1-153.el7_0.10.noarch
libselinux-2.2.2-6.el7.x86_64
libselinux-utils-2.2.2-6.el7.x86_64
libselinux-ruby-2.2.2-6.el7.x86_64
openstack-selinux-0.5.7-1.el7ost.noarch
selinux-policy-targeted-3.12.1-153.el7_0.10.noarch
libselinux-python-2.2.2-6.el7.x86_64
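The name_connect denials above (dest=9696, the neutron API port, labeled neutron_port_t) would be covered either by the broad corenet_tcp_connect_all_ports(neutron_t) from comment 32 or, more narrowly, by a rule of roughly this shape (a sketch, not necessarily what openstack-selinux-0.5.9 actually shipped):

```
# allow neutron_t to initiate TCP connections to ports labeled neutron_port_t
allow neutron_t neutron_port_t:tcp_socket name_connect;
```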

Comment 35 Ofer Blaut 2014-07-02 12:43:31 UTC
Verified - openstack-selinux-0.5.9-1.el7ost.noarch

Comment 37 errata-xmlrpc 2014-07-08 15:15:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0845.html