Bug 1122525 - docker0 and virtbr0 always connecting and keep NM status icon rotating
Summary: docker0 and virtbr0 always connecting and keep NM status icon rotating
Keywords:
Status: CLOSED EOL
Alias: None
Product: Fedora
Classification: Fedora
Component: NetworkManager
Version: 21
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Lubomir Rintel
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-07-23 12:38 UTC by Jens Petersen
Modified: 2015-12-02 16:10 UTC
CC: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-02 03:17:28 UTC
Type: Bug
Embargoed:


Attachments: none

Links: GNOME Bugzilla 731014

Description Jens Petersen 2014-07-23 12:38:16 UTC
Description of problem:
For a while now since I started using docker on my work and home
laptops, the NM applet always shows "connecting" status.

Version-Release number of selected component (if applicable):
NetworkManager-0.9.9.1-4.git20140319.fc20

How reproducible:
100%

Steps to Reproduce:
1. install and use docker on F20
2. run "nmcli dev"

Actual results:
1. The NM applet starts rotating (a continuous "connecting" animation).
2. "nmcli dev" shows docker0 and virbr0 stuck in the connecting state:
DEVICE      TYPE      STATE                                  CONNECTION         
em1         ethernet  connected                              Wired connection 1 
docker0     bridge    connecting (getting IP configuration)  docker0            
virbr0      bridge    connecting (getting IP configuration)  virbr0             
wlp3s0      wifi      unavailable                            --                 
lo          loopback  unmanaged                              --                 
virbr0-nic  tap       unmanaged                              --                 

Expected results:
The bridges should either finish connecting or not be reported as
connecting at all, and the applet should not rotate continuously.

Additional info:
Initially when I started using docker the applet
would only rotate when a docker image/container
was running - now it seems to rotate all the time.
Before docker I never experienced this problem
with virt-manager/libvirt.

Running: "nmcli dev disconnect docker0; nmcli dev disconnect virbr0"
stops the applet rotation, but that is not really a proper workaround. :)
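If the goal is simply to keep NetworkManager away from these bridges, a more persistent alternative (a sketch only, not something proposed in this bug; the interface-name matching for unmanaged-devices requires a sufficiently new NetworkManager) is to mark them unmanaged via the keyfile plugin in /etc/NetworkManager/NetworkManager.conf:

```ini
# /etc/NetworkManager/NetworkManager.conf
# Sketch: tell NetworkManager to leave the docker/libvirt bridges alone
# so they never enter (or get stuck in) the "connecting" state.
[main]
plugins=keyfile

[keyfile]
unmanaged-devices=interface-name:docker0;interface-name:virbr0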

Comment 1 Jirka Klimes 2014-07-25 09:20:10 UTC
The bridges are not connected because they probably do not have any ports (slaves). You should make sure that some slave connections are attached to the bridges; otherwise, disconnecting (disabling) the bridges is a reasonable workaround, since they are not in use ;)
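For reference (this is not from the bug report itself, and the interface name "em2" is purely hypothetical), attaching a slave connection to a bridge with the nmcli of that era looks roughly like:

```shell
# Sketch: create a bridge-slave connection that enslaves an Ethernet
# interface (hypothetical "em2") to the docker0 bridge, then activate
# the bridge. Interface and connection names are examples only.
nmcli connection add type bridge-slave ifname em2 master docker0
nmcli connection up docker0

# Once a slave provides carrier and addressing, docker0 should leave
# the "connecting (getting IP configuration)" state:
nmcli dev status
```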

By the way, virbr0-nic, which should be attached to virbr0, is unmanaged for some reason.

Would you attach the journal log or /var/log/messages so that we can see the NM logs?
journalctl -u NetworkManager

Comment 2 Jens Petersen 2014-09-17 08:07:32 UTC
Sorry for taking so long to respond... I don't use docker too often:
I find the UI slightly annoying to be honest.

I think this is the relevant log from the journal:

Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth6ba3): carrier is OFF
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth6ba3): new Veth device (driver: 'unknown' ifindex: 18)
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth6ba3): exported as /org/freedesktop/NetworkManager/Devices/15
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth6ba3): Generating connection from current device status.
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth6ba3): Using generated connection: 'veth6ba3'
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth6ba3): device state change: unmanaged -> unavailable (reason 'connection-assumed') [10 20 41]
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth6ba3): bringing up device.
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> Added default wired connection 'Wired connection 2' for /virtual/device/placeholder/12
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth9338): link connected
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth9338): carrier is ON
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth9338): new Veth device (driver: 'unknown' ifindex: 19)
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth9338): exported as /org/freedesktop/NetworkManager/Devices/16
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth9338): Generating connection from current device status.
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth9338): Using generated connection: 'veth9338'
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth9338): device state change: unmanaged -> unavailable (reason 'connection-assumed') [10 20 41]
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> Added default wired connection 'Wired connection 5' for /virtual/device/placeholder/13
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <warn> (veth6ba3): failed to get device's ifindex
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth9338): link disconnected
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth9338): link connected
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth6ba3): device state change: unavailable -> unmanaged (reason 'removed') [20 10 36]
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth9338): device state change: unavailable -> disconnected (reason 'carrier-changed') [20 30 40]
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> Auto-activating connection 'Wired connection 5'.
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> Activation (veth9338) starting connection 'Wired connection 5'
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth9338): device state change: disconnected -> prepare (reason 'none') [30 40 0]
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> Activation (veth9338) Stage 1 of 5 (Device Prepare) scheduled...
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> Activation (veth9338) Stage 1 of 5 (Device Prepare) started...
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> Activation (veth9338) Stage 2 of 5 (Device Configure) scheduled...
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> Activation (veth9338) Stage 1 of 5 (Device Prepare) complete.
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> Activation (veth9338) Stage 2 of 5 (Device Configure) starting...
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth9338): device state change: prepare -> config (reason 'none') [40 50 0]
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> Activation (veth9338) Stage 2 of 5 (Device Configure) successful.
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> Activation (veth9338) Stage 3 of 5 (IP Configure Start) scheduled.
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> Activation (veth9338) Stage 2 of 5 (Device Configure) complete.
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> Activation (veth9338) Stage 3 of 5 (IP Configure Start) started...
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth9338): device state change: config -> ip-config (reason 'none') [50 70 0]
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> Activation (veth9338) Beginning DHCPv4 transaction (timeout in 45 seconds)
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> dhclient started with pid 17191
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: nm_utils_get_ip_config_method: assertion 's_ip6 != NULL' failed
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> Activation (veth9338) Stage 3 of 5 (IP Configure Start) complete.
Sep 17 16:49:50 localhost.localdomain dhclient[17191]: Internet Systems Consortium DHCP Client 4.2.7
Sep 17 16:49:50 localhost.localdomain dhclient[17191]: Copyright 2004-2014 Internet Systems Consortium.
Sep 17 16:49:50 localhost.localdomain dhclient[17191]: All rights reserved.
Sep 17 16:49:50 localhost.localdomain dhclient[17191]: For info, please visit https://www.isc.org/software/dhcp/
Sep 17 16:49:50 localhost.localdomain dhclient[17191]: 
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: Internet Systems Consortium DHCP Client 4.2.7
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: Copyright 2004-2014 Internet Systems Consortium.
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: All rights reserved.
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: For info, please visit https://www.isc.org/software/dhcp/
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: <info> (veth9338): DHCPv4 state changed nbi -> preinit
Sep 17 16:49:50 localhost.localdomain dhclient[17191]: Listening on LPF/veth9338/0e:53:95:81:f8:08
Sep 17 16:49:50 localhost.localdomain dhclient[17191]: Sending on   LPF/veth9338/0e:53:95:81:f8:08
Sep 17 16:49:50 localhost.localdomain dhclient[17191]: Sending on   Socket/fallback
Sep 17 16:49:50 localhost.localdomain dhclient[17191]: DHCPDISCOVER on veth9338 to 255.255.255.255 port 67 interval 7 (xid=0x42b2c97)
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: Listening on LPF/veth9338/0e:53:95:81:f8:08
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: Sending on   LPF/veth9338/0e:53:95:81:f8:08
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: Sending on   Socket/fallback
Sep 17 16:49:50 localhost.localdomain NetworkManager[735]: DHCPDISCOVER on veth9338 to 255.255.255.255 port 67 interval 7 (xid=0x42b2c97)
Sep 17 16:49:51 localhost.localdomain NetworkManager[735]: nm_utils_get_ip_config_method: assertion 's_ip6 != NULL' failed
Sep 17 16:49:57 localhost.localdomain dhclient[17191]: DHCPDISCOVER on veth9338 to 255.255.255.255 port 67 interval 13 (xid=0x42b2c97)
Sep 17 16:49:57 localhost.localdomain NetworkManager[735]: DHCPDISCOVER on veth9338 to 255.255.255.255 port 67 interval 13 (xid=0x42b2c97)
Sep 17 16:50:10 localhost.localdomain dhclient[17191]: DHCPDISCOVER on veth9338 to 255.255.255.255 port 67 interval 11 (xid=0x42b2c97)
Sep 17 16:50:10 localhost.localdomain NetworkManager[735]: DHCPDISCOVER on veth9338 to 255.255.255.255 port 67 interval 11 (xid=0x42b2c97)
Sep 17 16:50:21 localhost.localdomain dhclient[17191]: DHCPDISCOVER on veth9338 to 255.255.255.255 port 67 interval 12 (xid=0x42b2c97)
Sep 17 16:50:21 localhost.localdomain NetworkManager[735]: DHCPDISCOVER on veth9338 to 255.255.255.255 port 67 interval 12 (xid=0x42b2c97)
Sep 17 16:50:33 localhost.localdomain dhclient[17191]: DHCPDISCOVER on veth9338 to 255.255.255.255 port 67 interval 17 (xid=0x42b2c97)
Sep 17 16:50:33 localhost.localdomain NetworkManager[735]: DHCPDISCOVER on veth9338 to 255.255.255.255 port 67 interval 17 (xid=0x42b2c97)
Sep 17 16:50:36 localhost.localdomain NetworkManager[735]: <warn> (veth9338): DHCPv4 request timed out.
Sep 17 16:50:36 localhost.localdomain NetworkManager[735]: <info> (veth9338): canceled DHCP transaction, DHCP client pid 17191
Sep 17 16:50:36 localhost.localdomain NetworkManager[735]: <info> Activation (veth9338) Stage 4 of 5 (IPv4 Configure Timeout) scheduled...
Sep 17 16:50:36 localhost.localdomain NetworkManager[735]: <info> Activation (veth9338) Stage 4 of 5 (IPv4 Configure Timeout) started...
Sep 17 16:50:36 localhost.localdomain NetworkManager[735]: <info> Activation (veth9338) Stage 4 of 5 (IPv4 Configure Timeout) complete.

Let me know if I can provide any other info.

Comment 3 Jyri-Petteri Paloposki 2015-05-19 21:37:19 UTC
This is probably the same as this upstream bug: https://bugzilla.gnome.org/show_bug.cgi?id=731014 . There already seems to be a fix upstream; would it be possible to pull it into the Fedora package quickly? This is quite an annoying problem for VirtualBox / Docker / ... users.

I think that bug 1075232 is also about the same problem.

Comment 4 Fedora End Of Life 2015-05-29 12:27:05 UTC
This message is a reminder that Fedora 20 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 20. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora  'version'
of '20'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not
able to fix it before Fedora 20 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 5 Fedora Admin XMLRPC Client 2015-08-18 14:57:29 UTC
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

Comment 6 Jens Petersen 2015-10-05 17:31:17 UTC
This looks fixed in F23 at least - will try to test latest F22 later.

Comment 7 Jens Petersen 2015-10-07 05:58:21 UTC
F22 also seems okay from my testing so far.
I wasn't able to get docker to run in my F21 guest.

Comment 8 Fedora End Of Life 2015-11-04 10:17:28 UTC
This message is a reminder that Fedora 21 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 21. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora  'version'
of '21'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not
able to fix it before Fedora 21 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 9 Fedora End Of Life 2015-12-02 03:17:32 UTC
Fedora 21 changed to end-of-life (EOL) status on 2015-12-01. Fedora 21 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

