Bug 1414547

Summary: Neutron openvswitch agent is not running
Product: Red Hat OpenStack
Reporter: Dimitri Savineau <dsavinea>
Component: rhosp-director
Assignee: Angus Thomas <athomas>
Status: CLOSED DUPLICATE
QA Contact: Omri Hochman <ohochman>
Severity: high
Priority: high
Version: 11.0 (Ocata)
CC: dbecker, mburns, morazi, rhel-osp-director-maint
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-01-23 15:19:00 UTC
Type: Bug
Attachments:
openvswitch-agent.log

Description Dimitri Savineau 2017-01-18 19:57:31 UTC
Created attachment 1242274 [details]
openvswitch-agent.log

Description of problem:
Neutron openvswitch agent is not running after the deployment on the controller and compute nodes.

Version-Release number of selected component (if applicable):
OSP11 puddle 2017-01-18.1

* Undercloud
openstack-tripleo-heat-templates-6.0.0-0.20170116025719.fa45e05.el7ost.noarch
python-tripleoclient-5.7.1-0.20170110155651.bfe8040.el7ost.noarch
* Overcloud
puppet-neutron-10.1.0-0.20170114065523.cd6394a.el7ost.noarch
python-neutron-10.0.0-0.20170116032457.e74b45d.el7ost.noarch
python-neutronclient-6.0.0-0.20161205101534.f53d624.el7ost.noarch
openstack-neutron-10.0.0-0.20170116032457.e74b45d.el7ost.noarch
openstack-neutron-openvswitch-10.0.0-0.20170116032457.e74b45d.el7ost.noarch
openstack-neutron-common-10.0.0-0.20170116032457.e74b45d.el7ost.noarch
openstack-neutron-ml2-10.0.0-0.20170116032457.e74b45d.el7ost.noarch
python-neutron-lib-1.0.0-0.20161108104854.efd7a3a.el7ost.noarch
openvswitch-2.5.0-22.git20160727.el7fdp.x86_64
python-ryu-common-4.3-2.el7ost.noarch
python-ryu-4.3-2.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Deploy overcloud nodes (default configuration)

Actual results:
neutron-openvswitch-agent is not running on controller and compute nodes.

Expected results:
neutron-openvswitch-agent should be running

Additional info:

During the deployment puppet starts the service :
Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Service[neutron-ovs-agent-service]/ensure: ensure changed 'stopped' to 'running'

The service then dies, but with exit status 0:

# systemctl status neutron-openvswitch-agent
● neutron-openvswitch-agent.service - OpenStack Neutron Open vSwitch Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-openvswitch-agent.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Wed 2017-01-18 14:04:44 EST; 8min ago
  Process: 641225 ExecStart=/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent --log-file /var/log/neutron/openvswitch-agent.log (code=exited, status=0/SUCCESS)
  Process: 641199 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
 Main PID: 641225 (code=exited, status=0/SUCCESS)
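This "inactive (dead)" plus "status=0/SUCCESS" combination is what makes the failure silent. A minimal sketch of how one could detect it from captured `systemctl status` output (the sample text below is condensed from the status block above; this is an illustrative check, not part of the deployment):

```shell
# Detect a unit that exited cleanly (status 0) yet is no longer active,
# i.e. a failure that systemd does not flag as failed.
status_output='Active: inactive (dead) since Wed 2017-01-18 14:04:44 EST
Main PID: 641225 (code=exited, status=0/SUCCESS)'

if printf '%s\n' "$status_output" | grep -q 'Active: inactive' &&
   printf '%s\n' "$status_output" | grep -q 'status=0/SUCCESS'; then
    result="silent-failure"
else
    result="ok"
fi
echo "$result"
```

On a live node, the same signal can be read from `systemctl is-active neutron-openvswitch-agent` combined with the ExecMainStatus property, without parsing text.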

# journalctl -u neutron-openvswitch-agent
Jan 18 14:04:24 overcloud-controller-0.localdomain systemd[1]: Starting OpenStack Neutron Open vSwitch Agent...
Jan 18 14:04:24 overcloud-controller-0.localdomain neutron-enable-bridge-firewall.sh[641199]: net.bridge.bridge-nf-call-arptables = 1
Jan 18 14:04:24 overcloud-controller-0.localdomain neutron-enable-bridge-firewall.sh[641199]: net.bridge.bridge-nf-call-iptables = 1
Jan 18 14:04:24 overcloud-controller-0.localdomain neutron-enable-bridge-firewall.sh[641199]: net.bridge.bridge-nf-call-ip6tables = 1
Jan 18 14:04:24 overcloud-controller-0.localdomain systemd[1]: Started OpenStack Neutron Open vSwitch Agent.
Jan 18 14:04:24 overcloud-controller-0.localdomain neutron-openvswitch-agent[641225]: Guru meditation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate repor
Jan 18 14:04:25 overcloud-controller-0.localdomain neutron-openvswitch-agent[641225]: Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.
Jan 18 14:04:25 overcloud-controller-0.localdomain neutron-openvswitch-agent[641225]: Option "rpc_backend" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.
Jan 18 14:04:25 overcloud-controller-0.localdomain neutron-openvswitch-agent[641225]: Option "notification_driver" from group "DEFAULT" is deprecated. Use option "driver" from group "oslo_messaging_notifications".
Jan 18 14:04:25 overcloud-controller-0.localdomain neutron-openvswitch-agent[641225]: Could not load neutron.openstack.common.notifier.rpc_notifier
Jan 18 14:04:33 overcloud-controller-0.localdomain neutron-openvswitch-agent[641225]: /usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py:200: FutureWarning: The access_policy argument is changing its default value to <class 'oslo_messaging.rpc.dispatcher.Defaul
Jan 18 14:04:33 overcloud-controller-0.localdomain neutron-openvswitch-agent[641225]: access_policy)

In the neutron openvswitch agent log we can see an error related to ryu (see attachment).
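A quick way to surface those ryu-related lines is to grep the agent log. Sketch below, using a temporary file with a synthetic error line since the real traceback lives in the attachment; on a node the path is /var/log/neutron/openvswitch-agent.log (from the ExecStart line above):

```shell
# Count ryu-related lines in the agent log. The log content here is a
# stand-in; the actual traceback is in the attached openvswitch-agent.log.
log=$(mktemp)
cat > "$log" <<'EOF'
ERROR neutron ImportError: cannot import name ofproto_v1_3 from ryu
EOF
matches=$(grep -ci 'ryu' "$log")
echo "ryu-related lines: $matches"
rm -f "$log"
```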

Comment 1 Dimitri Savineau 2017-01-19 14:07:41 UTC
reproduced with puddle 2017-01-18.5

Comment 2 Dimitri Savineau 2017-01-20 19:37:08 UTC
reproduced with puddle 2017-01-19.2

It seems to come from the version of python-ryu: since [1], neutron requires ryu >= 4.7

# rpm -qa python-ryu
python-ryu-4.3-2.el7ost.noarch

Additionally, neutron updated the requirement on this library a month ago, and it now requires ryu >= 4.9 [2]

[1] https://github.com/openstack/neutron/commit/92199dbd83a1a5a9c02cb2d9e2651d3f285b611e
[2] https://github.com/openstack/neutron/blob/master/requirements.txt#L22
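The mismatch can be confirmed by comparing the installed package version against the minimum requirement. A sketch with the versions hard-coded from this report; on a real node `installed` would come from `rpm -q --qf '%{VERSION}' python-ryu`:

```shell
# Compare installed python-ryu against neutron's minimum requirement
# using GNU sort's version ordering.
installed="4.3"   # python-ryu-4.3-2.el7ost.noarch, as reported above
required="4.7"    # minimum from [1]; later bumped to 4.9 by [2]

older=$(printf '%s\n%s\n' "$installed" "$required" | sort -V | head -n1)
if [ "$older" = "$installed" ] && [ "$installed" != "$required" ]; then
    verdict="too-old"
else
    verdict="ok"
fi
echo "python-ryu $installed vs required >=$required: $verdict"
```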

Comment 3 Dimitri Savineau 2017-01-23 15:19:00 UTC

*** This bug has been marked as a duplicate of bug 1415645 ***