Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1112968

Summary: neutron-openvswitch-agent exits with 1 on SIGTERM
Product: Red Hat OpenStack
Reporter: Ofer Blaut <oblaut>
Component: openstack-neutron
Assignee: Jakub Libosvar <jlibosva>
Status: CLOSED ERRATA
QA Contact: yfried
Severity: high
Priority: high
Version: 5.0 (RHEL 7)
CC: chrisw, ihrachys, jlibosva, lpeer, nyechiel, oblaut, yeylon
Target Milestone: rc
Target Release: 5.0 (RHEL 7)
Hardware: Unspecified
OS: Unspecified
Fixed In Version: openstack-neutron-2014.1-36.el7ost
Doc Type: Bug Fix
Cloned As: 1113953 (view as bug list)
Last Closed: 2014-07-24 17:24:05 UTC
Type: Bug
Bug Blocks: 1063427, 1113953
Attachments:
  systemctl output (flags: none)
  verification output (flags: none)

Description Ofer Blaut 2014-06-25 07:11:25 UTC
Created attachment 911936 [details]
systemctl output

Description of problem:

systemctl fails to stop neutron-openvswitch-agent cleanly.

Tested on both compute and networker nodes.

Version-Release number of selected component (if applicable):

openstack-neutron-2014.1-34.el7ost.noarch
How reproducible:


Steps to Reproduce:
1. systemctl stop neutron-openvswitch-agent
2. systemctl status neutron-openvswitch-agent

Actual results:


Expected results:


Additional info:

Comment 1 Jakub Libosvar 2014-06-25 10:41:51 UTC
IIUC, this bug is about the openvswitch-agent returning 1 on SIGTERM.

Comment 2 Ofer Blaut 2014-06-25 12:25:12 UTC
Will the return code impact HA? Pacemaker?

Comment 3 lpeer 2014-06-25 12:27:13 UTC
This could be problematic for the HA support. We'll need to address that before releasing HA.

Comment 4 Ihar Hrachyshka 2014-06-25 12:29:04 UTC
From systemd.service(5):

       SuccessExitStatus=
           Takes a list of exit status definitions that when returned by the
           main service process will be considered successful termination, in
           addition to the normal successful exit code 0 and the signals
           SIGHUP, SIGINT, SIGTERM and SIGPIPE. Exit status definitions can
           either be numeric exit codes or termination signal names, separated
           by spaces. Example: "SuccessExitStatus=1 2 8 SIGKILL", ensures that
           exit codes 1, 2, 8 and the termination signal SIGKILL are
           considered clean service terminations. This option may appear more
           than once in which case the list of successful exit statuses is
           merged. If the empty string is assigned to this option the list is
           reset, all prior assignments of this option will have no effect.


So we may patch that in dist-git without touching Neutron code itself. In any case, it's wrong that the agent returns 1 on a successful exit, so this should also be fixed upstream.
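The dist-git workaround suggested above could take the form of a systemd drop-in. This is only a sketch (the drop-in path and file name are illustrative, not from this bug), and it carries the obvious trade-off: exit code 1 would be treated as success in all cases.

```ini
# /etc/systemd/system/neutron-openvswitch-agent.service.d/exit-status.conf
# Hypothetical workaround sketch: treat exit code 1 as a clean stop.
# Caveat: this also masks genuine failures that exit with 1.
[Service]
SuccessExitStatus=1
```

After adding the drop-in, `systemctl daemon-reload` is needed for it to take effect.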

Comment 5 Jakub Libosvar 2014-06-25 12:43:43 UTC
(In reply to Ihar Hrachyshka from comment #4)
> From systemd.service(5):
> 
>        SuccessExitStatus=
This is very dangerous. If there is an error in the config file, the ovs-agent exits with 1; the same applies when tunneling cannot be set up. systemd would then report success even though the agent failed to start.

Comment 6 Ihar Hrachyshka 2014-06-25 12:58:55 UTC
Fair enough. Reporting a failure on success is not any better than reporting success on failure.
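The alternative to the SuccessExitStatus workaround is the upstream-style fix: have the agent trap SIGTERM and exit 0 on a clean shutdown. The sketch below is hypothetical (it is not the Neutron patch, just a minimal daemon loop illustrating the pattern); the child process prints "ready", receives SIGTERM as `systemctl stop` would send it, and exits 0 only because the handler is installed.

```python
import signal
import subprocess
import sys
import textwrap

# Hypothetical sketch of a daemon that traps SIGTERM and exits 0,
# rather than dying with a non-zero status on the signal.
CHILD = textwrap.dedent("""
    import signal, sys, time

    def handle_sigterm(signum, frame):
        # Clean shutdown: report success to the service manager.
        sys.exit(0)

    signal.signal(signal.SIGTERM, handle_sigterm)
    print("ready", flush=True)
    while True:
        time.sleep(0.1)
""")

def demo():
    proc = subprocess.Popen([sys.executable, "-c", CHILD],
                            stdout=subprocess.PIPE, text=True)
    proc.stdout.readline()            # wait until the handler is installed
    proc.send_signal(signal.SIGTERM)  # what `systemctl stop` effectively sends
    return proc.wait()                # 0 with the handler; non-zero without

if __name__ == "__main__":
    print(demo())
```

This keeps failure semantics intact: a config error or tunnel setup failure can still exit 1 and be reported as a failure by systemd.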

Comment 9 yfried 2014-07-16 09:38:03 UTC
Created attachment 918363 [details]
verification output

On RHEL 7
[root@puma46 ~]# rpm -qa | grep neutron
python-neutronclient-2.3.4-2.el7ost.noarch
python-neutron-2014.1.1-2.el7ost.noarch
openstack-neutron-2014.1.1-2.el7ost.noarch
openstack-neutron-openvswitch-2014.1.1-2.el7ost.noarch


[root@puma46 ~]# systemctl stop neutron-openvswitch-agent.service 
[root@puma46 ~]# echo $?
0

Comment 12 errata-xmlrpc 2014-07-24 17:24:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0936.html