Bug 1006412

Summary: openvswitch fails to start
Product: Fedora
Component: openvswitch
Version: 19
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: unspecified
Status: CLOSED ERRATA
Reporter: Jakub Ruzicka <jruzicka>
Assignee: Flavio Leitner <fleitner>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: chrisw, fleitner, gdubreui, lars, markmc, rjones, shyu, tgraf
Fixed In Version: openvswitch-2.0.0-4.fc19
Doc Type: Bug Fix
Type: Bug
Last Closed: 2014-01-25 02:21:15 UTC

Attachments:
  fix to use systemctl instead of the file lock in subsys directory

Description Jakub Ruzicka 2013-09-10 15:01:23 UTC
Description of problem:
After installing openvswitch on a clean, fully updated Fedora 19 VM, the openvswitch service fails to start.

Version-Release number of selected component (if applicable):
openvswitch-1.10.0-7.fc19.x86_64

Steps to Reproduce:
1. yum install -y openvswitch
2. systemctl start openvswitch.service

Actual results:
A dependency job for openvswitch.service failed. See 'journalctl -xn' for details.

# systemctl status openvswitch.service
openvswitch.service - Open vSwitch Unit
   Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; disabled)
   Active: inactive (dead)

Sep 10 10:55:32 machine systemd[1]: Dependency failed for Open vSwitch Unit.


Expected results:
Service starts.

Additional info:
This can be fixed by

# mkdir /var/lock/subsys

While debugging this, I also saw an error about rm failing on the lock file in that (nonexistent) directory, but I don't remember exactly where it appeared.

Comment 1 Gilles Dubreuil 2013-09-16 12:29:25 UTC
The workaround doesn't work for me; /var/lock/subsys was already in place.

/var/log/messages shows:
Sep 16 08:19:20 host14 systemd[1]: Starting OpenStack Quantum Open vSwitch Cleanup Utility...
Sep 16 08:19:20 host14 neutron-ovs-cleanup[24014]: Traceback (most recent call last):
Sep 16 08:19:20 host14 neutron-ovs-cleanup[24014]: File "/usr/bin/neutron-ovs-cleanup", line 6, in <module>
Sep 16 08:19:20 host14 neutron-ovs-cleanup[24014]: from neutron.agent.ovs_cleanup_util import main
Sep 16 08:19:20 host14 neutron-ovs-cleanup[24014]: File "/usr/lib/python2.7/site-packages/neutron/agent/ovs_cleanup_util.py", line 20, in <module>
Sep 16 08:19:20 host14 neutron-ovs-cleanup[24014]: from neutron.agent.common import config as agent_config
Sep 16 08:19:20 host14 neutron-ovs-cleanup[24014]: File "/usr/lib/python2.7/site-packages/neutron/agent/common/config.py", line 22, in <module>
Sep 16 08:19:20 host14 neutron-ovs-cleanup[24014]: from neutron.common import config
Sep 16 08:19:20 host14 neutron-ovs-cleanup[24014]: File "/usr/lib/python2.7/site-packages/neutron/common/config.py", line 27, in <module>
Sep 16 08:19:20 host14 neutron-ovs-cleanup[24014]: from neutron.api.v2 import attributes
Sep 16 08:19:20 host14 neutron-ovs-cleanup[24014]: File "/usr/lib/python2.7/site-packages/neutron/api/v2/attributes.py", line 23, in <module>
Sep 16 08:19:20 host14 neutron-ovs-cleanup[24014]: from neutron.openstack.common import log as logging
Sep 16 08:19:20 host14 neutron-ovs-cleanup[24014]: File "/usr/lib/python2.7/site-packages/neutron/openstack/common/log.py", line 45, in <module>
Sep 16 08:19:20 host14 neutron-ovs-cleanup[24014]: from neutron.openstack.common.gettextutils import _
Sep 16 08:19:20 host14 neutron-ovs-cleanup[24014]: File "/usr/lib/python2.7/site-packages/neutron/openstack/common/gettextutils.py", line 34, in <module>
Sep 16 08:19:20 host14 neutron-ovs-cleanup[24014]: from babel import localedata
Sep 16 08:19:20 host14 neutron-ovs-cleanup[24014]: ImportError: No module named babel
Sep 16 08:19:20 host14 systemd[1]: neutron-ovs-cleanup.service: main process exited, code=exited, status=1/FAILURE
Sep 16 08:19:20 host14 systemd[1]: Failed to start OpenStack Quantum Open vSwitch Cleanup Utility.

Comment 2 Gilles Dubreuil 2013-09-16 12:54:05 UTC
My workaround was to install python-babel:

# yum -y install python-babel

Comment 3 Flavio Leitner 2013-09-20 03:53:28 UTC
(In reply to Jakub Ruzicka from comment #0)
> Additional info:
> This can be fixed by
> 
> # mkdir /var/lock/subsys

Yeah, I can see it here as well. For some reason another package creates /var/lock/subsys, so I didn't spot this before. Interestingly, the directory isn't owned by any package, so there's no way to tell which one creates it:

$ rpm -qf /var/lock/subsys
file /var/lock/subsys is not owned by any package

Anyway, we create the lock file there just to synchronize with the ifup-ovs and ifdown-ovs scripts. We can easily change that to use systemctl and fall back to the subsys lock otherwise.

> While I was debugging this, I found out some error about rm failing on the
> file in that (nonexisting) directory, but I don't remember where exactly it
> was.

When you stop the service, it tries to remove the lock file under the nonexistent subsys directory, so it fails.
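
For reference, the idea is roughly the following in the ifup-ovs/ifdown-ovs and service scripts (a sketch only, not the actual patch; the function name and exact lock path are illustrative):

ovs_is_running() {
    # Prefer asking systemd when it is available...
    if command -v systemctl >/dev/null 2>&1; then
        systemctl is-active openvswitch.service >/dev/null 2>&1
    else
        # ...and fall back to the legacy subsys lock file otherwise.
        [ -e /var/lock/subsys/openvswitch ]
    fi
}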

Comment 4 Flavio Leitner 2013-09-20 04:01:36 UTC
(In reply to Gilles Dubreuil from comment #1)
> The workaround doesn't work for me, the /var/lock/subsys was already in
> place.
> 
> /var/log/messages shows:
> Sep 16 08:19:20 host14 systemd[1]: Starting OpenStack Quantum Open vSwitch
> Cleanup Utility...

Hi Gilles, your report is about another issue, so the workaround clearly won't work for you.

It also belongs to another component, openstack-neutron. Could you open a new bug against that component requesting that the missing dependency be added?
Thanks a lot.

Comment 5 Flavio Leitner 2013-09-20 05:16:35 UTC
Created attachment 800293 [details]
fix to use systemctl instead of the file lock in subsys directory

Comment 6 Gilles Dubreuil 2013-09-23 02:55:35 UTC
Flavio,

You're right, it's a separate issue. 

Cheers

Comment 7 Fedora Update System 2013-10-01 13:54:52 UTC
openvswitch-1.11.0-3.fc20 has been submitted as an update for Fedora 20.
https://admin.fedoraproject.org/updates/openvswitch-1.11.0-3.fc20

Comment 8 Fedora Update System 2013-10-01 14:48:15 UTC
openvswitch-1.11.0-3.fc19 has been submitted as an update for Fedora 19.
https://admin.fedoraproject.org/updates/openvswitch-1.11.0-3.fc19

Comment 9 Fedora Update System 2013-10-01 17:48:54 UTC
openvswitch-1.11.0-3.fc18 has been submitted as an update for Fedora 18.
https://admin.fedoraproject.org/updates/openvswitch-1.11.0-3.fc18

Comment 10 Fedora Update System 2013-10-02 06:32:45 UTC
Package openvswitch-1.11.0-3.fc19:
* should fix your issue,
* was pushed to the Fedora 19 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing openvswitch-1.11.0-3.fc19'
as soon as you are able to.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2013-18072/openvswitch-1.11.0-3.fc19
then log in and leave karma (feedback).

Comment 11 Shanzhi Yu 2013-10-20 14:33:54 UTC
I hit this bug while installing rdo-release-havana on a Fedora 19 VM with openvswitch-1.11.0-1.fc19.x86_64.
I updated openvswitch to 1.11.0-3.fc19.x86_64 and still hit the bug.
Steps:
1. Update openvswitch
# rpm -q openvswitch
openvswitch-1.11.0-3.fc19.x86_64
2. Start openvswitch
# service openvswitch restart
Redirecting to /bin/systemctl restart  openvswitch.service
A dependency job for openvswitch.service failed. See 'journalctl -xn' for details.

When I disable SELinux, openvswitch can be started:

3. Disable SELinux
# getenforce 
Enforcing
# setenforce 0
4. Restart openvswitch
# service openvswitch restart
Redirecting to /bin/systemctl restart  openvswitch.service

Comment 12 Thomas Graf 2013-10-21 08:33:15 UTC
(In reply to Shanzhi Yu from comment #11)
> I hit this bug while installing rdo-release-havana on a Fedora 19 VM with
> openvswitch-1.11.0-1.fc19.x86_64.
> I updated openvswitch to 1.11.0-3.fc19.x86_64 and still hit the bug.
> Steps:
> 1. Update openvswitch
> # rpm -q openvswitch
> openvswitch-1.11.0-3.fc19.x86_64
> 2. Start openvswitch
> # service openvswitch restart
> Redirecting to /bin/systemctl restart  openvswitch.service
> A dependency job for openvswitch.service failed. See 'journalctl -xn' for
> details.
> 
> When I disable SELinux, openvswitch can be started:
> 
> 3. Disable SELinux
> # getenforce 
> Enforcing
> # setenforce 0
> 4. Restart openvswitch
> # service openvswitch restart
> Redirecting to /bin/systemctl restart  openvswitch.service

Upgrading to the latest selinux policy package should resolve this issue. The selinux policy is distributed through a separate package and not included in the openvswitch package.
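
If in doubt, updating the policy packages should pull in the fixed rules, for example (the usual Fedora package names):

# yum update selinux-policy selinux-policy-targeted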

Comment 13 Fedora Update System 2013-10-29 00:24:41 UTC
openvswitch-2.0.0-1.fc20 has been submitted as an update for Fedora 20.
https://admin.fedoraproject.org/updates/openvswitch-2.0.0-1.fc20

Comment 14 Fedora Update System 2013-11-08 13:36:56 UTC
openvswitch-2.0.0-1.fc19 has been submitted as an update for Fedora 19.
https://admin.fedoraproject.org/updates/openvswitch-2.0.0-1.fc19

Comment 15 Fedora Update System 2013-11-10 06:47:21 UTC
openvswitch-2.0.0-1.fc20 has been pushed to the Fedora 20 stable repository.  If problems still persist, please make note of it in this bug report.

Comment 16 Richard W.M. Jones 2013-12-12 16:46:24 UTC
I have:

openvswitch-1.11.0-1.fc19.x86_64
selinux-policy-3.12.1-74.15.fc19.noarch

so this is supposed to work, right?  Because it does not.  I
had to create /var/lock/subsys by hand in order to fix it.

Therefore setting this bug back to ASSIGNED because it's not fixed.

Comment 17 Lars Kellogg-Stedman 2014-01-09 15:13:58 UTC
I have selinux-policy-targeted-3.12.1-54.fc19.noarch on Fedora 20.  Attempting to start openvswitch results in the following AVC messages:

type=AVC msg=audit(1389279739.863:7): avc:  denied  { write } for  pid=264 comm="ovsdb-server" name="tmp" dev="vda1" ino=15654 scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:object_r:tmp_t:s0 tclass=dir
type=AVC msg=audit(1389279739.863:7): avc:  denied  { add_name } for  pid=264 comm="ovsdb-server" name="tmpflVWkpJ" scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:object_r:tmp_t:s0 tclass=dir
type=AVC msg=audit(1389279739.863:7): avc:  denied  { create } for  pid=264 comm="ovsdb-server" name="tmpflVWkpJ" scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:object_r:tmp_t:s0 tclass=file
type=AVC msg=audit(1389279739.863:7): avc:  denied  { write open } for  pid=264 comm="ovsdb-server" path="/tmp/tmpflVWkpJ" dev="vda1" ino=1478 scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:object_r:tmp_t:s0 tclass=file
type=AVC msg=audit(1389279739.863:8): avc:  denied  { remove_name } for  pid=264 comm="ovsdb-server" name="tmpflVWkpJ" dev="vda1" ino=1478 scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:object_r:tmp_t:s0 tclass=dir
type=AVC msg=audit(1389279739.863:8): avc:  denied  { unlink } for  pid=264 comm="ovsdb-server" name="tmpflVWkpJ" dev="vda1" ino=1478 scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:object_r:tmp_t:s0 tclass=file

And audit2allow gives:

#============= openvswitch_t ==============
allow openvswitch_t tmp_t:dir { write remove_name add_name };
allow openvswitch_t tmp_t:file { write create unlink open };

With this in place, openvswitch starts.
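
Until the policy itself is fixed, those denials can be turned into a temporary local module along these lines (the module name is arbitrary, and this only papers over the missing policy rules):

# grep ovsdb-server /var/log/audit/audit.log | audit2allow -M openvswitch_local
# semodule -i openvswitch_local.pp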

Comment 18 Fedora Update System 2014-01-15 22:27:33 UTC
openvswitch-2.0.0-4.fc19 has been submitted as an update for Fedora 19.
https://admin.fedoraproject.org/updates/openvswitch-2.0.0-4.fc19

Comment 19 Fedora Update System 2014-01-17 05:52:32 UTC
Package openvswitch-2.0.0-4.fc19:
* should fix your issue,
* was pushed to the Fedora 19 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing openvswitch-2.0.0-4.fc19'
as soon as you are able to.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2014-1014/openvswitch-2.0.0-4.fc19
then log in and leave karma (feedback).

Comment 20 Fedora Update System 2014-01-25 02:21:15 UTC
openvswitch-2.0.0-4.fc19 has been pushed to the Fedora 19 stable repository.  If problems still persist, please make note of it in this bug report.