Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1114254

Summary: neutron fails to create a vip
Product: Red Hat OpenStack
Reporter: Amit Ugol <augol>
Component: openstack-selinux
Assignee: Lon Hohberger <lhh>
Status: CLOSED ERRATA
QA Contact: Nir Magnezi <nmagnezi>
Severity: medium
Priority: medium
Version: 5.0 (RHEL 7)
CC: aberezin, lhh, mgrepl, nyechiel, oblaut, rhallise, yeylon
Target Milestone: rc
Target Release: 5.0 (RHEL 7)
Hardware: Unspecified
OS: Unspecified
Fixed In Version: openstack-selinux-0.5.9-1.el7ost
Doc Type: Bug Fix
Last Closed: 2014-07-08 15:16:00 UTC
Type: Bug
Attachments:
  audit.log from cloned bz 1114257
  audit log with selinux-0.5.8-1
  audit2why

Description Amit Ugol 2014-06-29 09:01:18 UTC
Description of problem:
creation of VIP fails with the following error:
2014-06-29 10:08:58.316 25588 ERROR neutron.services.loadbalancer.agent.agent_manager [req-9a211870-52b5-499f-82d7-25873c66f933 None] Create vip c21fef4a-1bc5-4c64-baa2-988b8db22cd3 failed on device driver haproxy_ns
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager Traceback (most recent call last):
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/agent/agent_manager.py", line 216, in create_vip
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager     driver.create_vip(vip)
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 289, in create_vip
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager     self._refresh_device(vip['pool_id'])
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 286, in _refresh_device
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager     self.deploy_instance(logical_config)
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/openstack/common/lockutils.py", line 249, in inner
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager     return f(*args, **kwargs)
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 282, in deploy_instance
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager     self.create(logical_config)
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 87, in create
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager     self._spawn(logical_config)
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 110, in _spawn
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager     ns.netns.execute(cmd)
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 466, in execute
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager     check_exit_code=check_exit_code)
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 76, in execute
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager     raise RuntimeError(m)
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager RuntimeError: 
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qlbaas-edd252f8-46f8-4139-bcd8-74fbe7ec5168', 'haproxy', '-f', '/var/lib/neutron/lbaas/edd252f8-46f8-4139-bcd8-74fbe7ec5168/conf', '-p', '/var/lib/neutron/lbaas/edd252f8-46f8-4139-bcd8-74fbe7ec5168/pid']
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager Exit code: 96
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager Stdout: ''
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager Stderr: '/usr/bin/neutron-rootwrap: Executable not found: haproxy (filter match = haproxy)\n'
2014-06-29 10:08:58.316 25588 TRACE neutron.services.loadbalancer.agent.agent_manager

The stderr claims "Executable not found: haproxy (filter match = haproxy)", although:
# which haproxy
/usr/sbin/haproxy

and:
# openstack-status | grep lb
neutron-lbaas-agent:                    active

setenforce 0 solves the issue here. audit logs attached.
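The mismatch between "Executable not found" and a binary that `which` does find is consistent with how rootwrap resolves commands: a candidate only counts as found if the lookup's executability check passes, and an SELinux denial on haproxy_exec_t can make that check fail even though the file is present. A simplified Python sketch of that effect (find_executable is a made-up helper for illustration, not oslo.rootwrap's actual code):

```python
import os
import stat
import tempfile

def find_executable(name, search_dirs):
    """Hypothetical rootwrap-style lookup: a candidate only counts as
    found if the process can actually execute it."""
    for d in search_dirs:
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "haproxy")
    open(path, "w").close()
    os.chmod(path, stat.S_IRUSR)               # present, but not executable
    print(find_executable("haproxy", [d]))     # None -> reported as "not found"
    os.chmod(path, stat.S_IRUSR | stat.S_IXUSR)
    print(find_executable("haproxy", [d]) == path)  # True once executable
```

Here the denial surfaces as "not found" rather than "permission denied", which matches the rootwrap error above.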

Version-Release number of selected component (if applicable):


How reproducible:
Always 

Steps to Reproduce:
1. Try to define a VIP while SELinux is enforcing.

Actual results:
see above


Expected results:


Additional info:
tested with:
openstack-selinux-0.5.5-3.el7ost.noarch
selinux-policy-3.12.1-153.el7_0.10.noarch
puddle:
http://download.lab.bos.redhat.com/rel-eng/OpenStack/5.0-RHEL-7/2014-06-27.1/

Comment 2 Amit Ugol 2014-06-30 05:13:50 UTC
*** Bug 1114257 has been marked as a duplicate of this bug. ***

Comment 4 Amit Ugol 2014-06-30 08:01:20 UTC
It's rather big, so I just uploaded the formatted output. Do you want the entire thing?

Comment 5 Miroslav Grepl 2014-06-30 10:09:56 UTC
We need to see raw AVC msgs.

Comment 6 Ryan Hallisey 2014-06-30 17:29:17 UTC
Created attachment 913500 [details]
audit.log from cloned bz 1114257

Comment 7 Ryan Hallisey 2014-06-30 17:32:01 UTC
setsebool -P daemons_enable_cluster_mode on

allow neutron_t haproxy_exec_t:file { read execute open execute_no_trans };

Will be added to the new build.
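As a stopgap before that build, the allow rule above could be loaded as a small local policy module (a sketch only; the module name neutron_haproxy_local is made up, and this is not the packaged openstack-selinux fix):

```
# neutron_haproxy_local.te -- hypothetical local module name
module neutron_haproxy_local 1.0;

require {
    type neutron_t;
    type haproxy_exec_t;
    class file { read execute open execute_no_trans };
}

allow neutron_t haproxy_exec_t:file { read execute open execute_no_trans };
```

Built and loaded with checkmodule -M -m, semodule_package, and semodule -i, alongside the `setsebool -P daemons_enable_cluster_mode on` above.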

Comment 9 Miroslav Grepl 2014-07-01 07:48:31 UTC
Ryan,
I added to Fedora

optional_policy(`
 domtrans_pattern(neutron_t, haproxy_exec_t, haproxy_t)
')

Comment 11 Ofer Blaut 2014-07-01 21:38:27 UTC
Created attachment 913934 [details]
audit log with selinux-0.5.8-1

Comment 12 Ofer Blaut 2014-07-01 21:39:45 UTC
Created attachment 913935 [details]
audit2why

Comment 13 Lon Hohberger 2014-07-01 22:06:19 UTC
In the optional policy, if we do a domain transition, we need:

	manage_files_pattern(haproxy_t, neutron_var_lib_t, neutron_var_lib_t)
	manage_sock_files_pattern(haproxy_t, neutron_var_lib_t, neutron_var_lib_t)
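Combining this with the domtrans_pattern from comment 9, the transition-based variant would look roughly like the following (refpolicy macro style; a sketch assuming the standard interfaces are available, not the literal shipped policy):

```
optional_policy(`
    # run haproxy in its own confined domain when neutron spawns it
    domtrans_pattern(neutron_t, haproxy_exec_t, haproxy_t)

    # once confined as haproxy_t, haproxy still needs its config, pid
    # file and sockets under the /var/lib/neutron/lbaas/... tree
    manage_files_pattern(haproxy_t, neutron_var_lib_t, neutron_var_lib_t)
    manage_sock_files_pattern(haproxy_t, neutron_var_lib_t, neutron_var_lib_t)
')
```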

Comment 14 Miroslav Grepl 2014-07-02 12:47:07 UTC
(In reply to Lon Hohberger from comment #13)
> In the optional policy, if we do a domain transition, we need:
> 
> 	manage_files_pattern(haproxy_t, neutron_var_lib_t, neutron_var_lib_t)
> 	manage_sock_files_pattern(haproxy_t, neutron_var_lib_t, neutron_var_lib_t)

Ok. Without a transition, we get haproxy running as neutron_t. Please re-test and execute

# ps -eZ |grep haproxy

without the transition rules.

Comment 15 Nir Magnezi 2014-07-07 09:00:36 UTC
Verified NVR: openstack-selinux-0.5.9-1.el7ost.noarch

VIP creation tested OK with SELinux enforcing, yet I still see some AVC denial messages.
See Bug #1116755

Comment 17 errata-xmlrpc 2014-07-08 15:16:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0845.html