Bug 1022042 - Unable to restart libvirt virtual network when the guest is up and running

Status: CLOSED DUPLICATE of bug 1014554
Product: Virtualization Tools
Classification: Community
Component: libvirt
Version: unspecified
Hardware/OS: All Linux
Priority: unspecified
Severity: high
Assigned To: Libvirt Maintainers
Type: Bug
Doc Type: Bug Fix
Reported: 2013-10-22 10:25 EDT by chandrashekar shastri
Modified: 2016-04-10 11:08 EDT
Last Closed: 2016-04-10 11:08:54 EDT
Description chandrashekar shastri 2013-10-22 10:25:51 EDT
Unable to restart libvirt virtual network when the guest is up and running.

Steps to Reproduce (replayed as virsh commands below):

1. Define the virtual network backed by the virbr1 bridge
2. Boot the guest with an interface on that network
3. Log in to the guest
4. From the guest, ping the host's virbr1 IP
5. Destroy the virbr1 network
6. Start the virbr1 network
7. From the guest, try to ping the host's virbr1 IP again
8. Run ifconfig virbr1 and observe that the "RUNNING" flag is missing

Note: The workaround is to reboot the guest, which brings the virtual network virbr1 back up.
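
For reference, a minimal replay of the same sequence as virsh commands (it assumes a guest named vm5 with a NIC on virbr1, per the XML below):

virsh net-start virbr1       # 1. bring the virtual network up
virsh start vm5              # 2. boot the guest with an interface on it
# 3-4. inside the guest: ping 172.168.122.1 (host's virbr1 address) -- works
virsh net-destroy virbr1     # 5. tear the network down
virsh net-start virbr1       # 6. bring it back up
# 7. inside the guest: ping 172.168.122.1 again -- now fails
ifconfig virbr1              # 8. the "RUNNING" flag is missing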

virsh net-destroy virbr1
Network virbr1 destroyed

virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     no            yes
 virbr1               inactive   no            yes


virsh net-start virbr1
Network virbr1 started


ifconfig virbr1
virbr1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.168.122.1  netmask 255.255.255.0  broadcast 172.168.122.255
        ether 52:54:00:16:84:a8  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 1177 (1.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

As shown above, the interface lacks the "RUNNING" flag, and when I log in to the guest I am unable to ping the IP (172.168.122.1).

[root@ltczhyp2 ~]# virsh destroy vm5
Domain vm5 destroyed

[root@ltczhyp2 ~]# virsh start vm5
Domain vm5 started

ifconfig virbr1
virbr1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.168.122.1  netmask 255.255.255.0  broadcast 172.168.122.255
        ether 52:54:00:16:84:a8  txqueuelen 0  (Ethernet)
        RX packets 4  bytes 524 (524.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 1261 (1.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

When the guest boots, it brings up the virbr1 network and ifconfig shows the "RUNNING" flag.

Ideally, the virtual network should be restartable regardless of whether it is assigned to a guest, and regardless of whether that guest is running or has been destroyed.
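
One quick way to confirm the cause (an aside, assuming a reasonably recent iproute2 or bridge-utils) is to list the interfaces enslaved to the bridge after the net-destroy/net-start cycle; the guest's vnetN tap device is expected to be absent:

# Interfaces currently enslaved to virbr1; after the restart, the guest's
# vnetN tap device no longer appears here.
ip link show master virbr1
brctl show virbr1            # equivalent legacy check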

virsh net-dumpxml virbr1
<network connections='1'>
  <name>virbr1</name>
  <uuid>03460041-fac6-4c98-851b-e794716fa27d</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:16:84:a8'/>
  <ip address='172.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='172.168.122.2' end='172.168.122.254'/>
    </dhcp>
  </ip>
</network>

virsh dumpxml vm5
<domain type='kvm' id='32'>
  <name>vm5</name>
  <uuid>a3850743-fab5-49ac-ab2d-8df128f56c9e</uuid>
  <memory unit='KiB'>524288</memory>
  <currentMemory unit='KiB'>524288</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='s390x' machine='s390-ccw-virtio'>hvm</type>
    <boot dev='hd'/>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/mapper/36005076303ffc52a000000000000131d'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0001'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <interface type='network'>
      <mac address='02:ba:fe:fe:fc:9e'/>
      <source network='default'/>
      <target dev='vnet3'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:8f:dc:8d'/>
      <source network='virbr1'/>
      <target dev='vnet4'/>
      <model type='virtio'/>
      <alias name='net1'/>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0002'/>
    </interface>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target type='sclp' port='0'/>
      <alias name='console0'/>
    </console>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='ccw' cssid='0xfe' ssid='0x3' devno='0xffba'/>
    </memballoon>
  </devices>
  <seclabel type='none'/>
</domain>
Comment 1 chandrashekar shastri 2013-10-22 10:29:27 EDT
[root@phx2 ~]# cat /etc/issue
Fedora release 19 (Schrödinger’s Cat)
Kernel \r on an \m (\l)

[root@phx2 ~]# uname -a
Linux phx2.in.ibm.com 3.9.5-301.fc19.x86_64-EINJ #1 SMP Wed Sep 18 05:55:39 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux
[root@phx2 ~]# libvirtd --version
libvirtd (libvirt) 1.1.3
[root@phx2 ~]# qemu-system-x86_64 --version
QEMU emulator version 1.6.50, Copyright (c) 2003-2008 Fabrice Bellard
[root@phx2 ~]# cd /home/libvirt/
[root@phx2 libvirt]# git log | head
commit 8ebd3d889296b03fc51f34b2f6f5c5d523694559
Author: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
Date:   Fri Oct 18 10:12:00 2013 +0800

    daemon: don't free domain if it's null
    
    If we fail to get domain, we had to judge whether
    it's null or not when doing 'cleanup'.
    
    Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
[root@phx2 libvirt]# cd ../qemu/
[root@phx2 qemu]# git log | head
commit 1680d485777ecf436d724631ea8722cc0c66990e
Merge: ded77da f8da40a
Author: Anthony Liguori <aliguori@amazon.com>
Date:   Mon Oct 14 09:59:59 2013 -0700

    Merge remote-tracking branch 'rth/tcg-ldst-6' into staging
    
    # By Richard Henderson
    # Via Richard Henderson
    * rth/tcg-ldst-6:
[root@phx2 qemu]#
Comment 2 chandrashekar shastri 2014-01-01 02:46:58 EST
When the networks are inactive, the guest fails to start.

[root@localhost ~]# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 bridged              inactive   no            yes
 default              inactive   no            yes
 net                  inactive   no            yes


[root@localhost ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     Fedora-19-x86_64               shut off
 -     RHEL7_qcow2                    shut off
 -     santwana-guest                 shut off
 -     santwana-test                  shut off
 -     sath                           shut off
 -     test                           shut off
 -     test-sanjeev                   shut off
 -     virt-tests-vm1                 shut off

[root@localhost ~]# virsh start Fedora-19-x86_64
error: Failed to start domain Fedora-19-x86_64
error: internal error: Network 'net' is not active.

[root@localhost ~]#
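
(A hedged aside, not part of the original comment: since the error is simply that the referenced network is inactive, starting the networks before the domain, or marking them autostart, should let it start.)

# Start the network the guest references before starting the guest,
# and optionally mark it autostart so it comes up with libvirtd.
virsh net-start net
virsh net-autostart net
virsh start Fedora-19-x86_64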
Comment 3 Xavier G. 2014-04-26 16:57:35 EDT
I confirm the behaviour described by the initial reporter.

I am using libvirt 0.9.12.3 as supplied by Debian Wheezy along with qemu-kvm. I am playing around with two VMs. Each VM has a virtual network interface to a virtual network named "internal", managed through the "virbr1" bridge.
When running net-destroy then net-start (typically to apply a configuration change), net-start does not re-add the NICs of the running VMs (vnet0, vnet1) to the freshly re-created virbr1 bridge, which leaves the affected guests unreachable over the network. In fact, running "brctl addif virbr1 vnet0" suffices to work around the issue (I have not studied the impact on firewall rules, though). However, one would intuitively expect libvirt to take care of this automatically.
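
To generalize that workaround, a loop along these lines (illustrative only; it re-plugs the tap devices but, as noted above, does not restore any firewall rules libvirt may have set up) re-attaches every running guest's NIC on the "internal" network:

# After net-destroy/net-start of "internal", re-attach each running
# domain's tap device to the recreated virbr1 bridge.
for dom in $(virsh list --name); do
    # domiflist columns: Interface  Type  Source  Model  MAC
    virsh domiflist "$dom" \
      | awk '$2 == "network" && $3 == "internal" {print $1}' \
      | while read -r tap; do
          brctl addif virbr1 "$tap"
        done
done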

=======================================================================
1 - Situation when the virtual network is started with all VMs shut down:

# ifconfig (output filtered)
virbr1    Link encap:Ethernet  HWaddr 52:54:00:90:00:a9
          inet addr:10.189.64.1  Bcast:10.189.79.255  Mask:255.255.240.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.00247e16eba8       yes             eth0
virbr1          8000.5254009000a9       yes             virbr1-nic

# ip a (output filtered)
43: virbr1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 52:54:00:90:00:a9 brd ff:ff:ff:ff:ff:ff
    inet 10.189.64.1/20 brd 10.189.79.255 scope global virbr1
44: virbr1-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master virbr1 state DOWN qlen 500
    link/ether 52:54:00:90:00:a9 brd ff:ff:ff:ff:ff:ff

=======================================================================
2 - Situation with two VMs running:
# ifconfig
virbr1    Link encap:Ethernet  HWaddr 52:54:00:90:00:a9
          inet addr:10.189.64.1  Bcast:10.189.79.255  Mask:255.255.240.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:196 errors:0 dropped:0 overruns:0 frame:0
          TX packets:203 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:15790 (15.4 KiB)  TX bytes:21084 (20.5 KiB)

vnet0     Link encap:Ethernet  HWaddr fe:54:00:90:8c:58
          inet6 addr: fe80::fc54:ff:fe90:8c58/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:178 errors:0 dropped:0 overruns:0 frame:0
          TX packets:263 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:16718 (16.3 KiB)  TX bytes:24861 (24.2 KiB)

vnet1     Link encap:Ethernet  HWaddr fe:54:00:70:79:65
          inet6 addr: fe80::fc54:ff:fe70:7965/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:118 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:1816 (1.7 KiB)  TX bytes:11289 (11.0 KiB)

# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.00247e16eba8       yes             eth0
virbr1          8000.5254009000a9       yes             virbr1-nic
                                                        vnet0
                                                        vnet1

# ip a (output filtered)
43: virbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 52:54:00:90:00:a9 brd ff:ff:ff:ff:ff:ff
    inet 10.189.64.1/20 brd 10.189.79.255 scope global virbr1
44: virbr1-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master virbr1 state DOWN qlen 500
    link/ether 52:54:00:90:00:a9 brd ff:ff:ff:ff:ff:ff
45: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr1 state UNKNOWN qlen 500
    link/ether fe:54:00:90:8c:58 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe90:8c58/64 scope link
       valid_lft forever preferred_lft forever
46: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr1 state UNKNOWN qlen 500
    link/ether fe:54:00:70:79:65 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe70:7965/64 scope link
       valid_lft forever preferred_lft forever

=======================================================================
3 - Situation after having destroyed the network:
# virsh net-destroy internal
Network internal destroyed

# ifconfig (output filtered -- virbr1 disappeared, vnet0 and vnet1 are still there)
vnet0     Link encap:Ethernet  HWaddr fe:54:00:90:8c:58
          inet6 addr: fe80::fc54:ff:fe90:8c58/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:235 errors:0 dropped:0 overruns:0 frame:0
          TX packets:347 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:22080 (21.5 KiB)  TX bytes:31621 (30.8 KiB)

vnet1     Link encap:Ethernet  HWaddr fe:54:00:70:79:65
          inet6 addr: fe80::fc54:ff:fe70:7965/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:144 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:1816 (1.7 KiB)  TX bytes:12641 (12.3 KiB)

# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.00247e16eba8       yes             eth0

# ip a
45: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:54:00:90:8c:58 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe90:8c58/64 scope link
       valid_lft forever preferred_lft forever
46: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:54:00:70:79:65 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe70:7965/64 scope link
       valid_lft forever preferred_lft forever

=======================================================================
4 - Situation after having restarted the network:
# virsh net-start internal
Network internal started

# ifconfig (output filtered -- virbr1 is back)
virbr1    Link encap:Ethernet  HWaddr 52:54:00:90:00:a9
          inet addr:10.189.64.1  Bcast:10.189.79.255  Mask:255.255.240.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vnet0     Link encap:Ethernet  HWaddr fe:54:00:90:8c:58
          inet6 addr: fe80::fc54:ff:fe90:8c58/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:235 errors:0 dropped:0 overruns:0 frame:0
          TX packets:347 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:22080 (21.5 KiB)  TX bytes:31621 (30.8 KiB)

vnet1     Link encap:Ethernet  HWaddr fe:54:00:70:79:65
          inet6 addr: fe80::fc54:ff:fe70:7965/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:144 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:1816 (1.7 KiB)  TX bytes:12641 (12.3 KiB)

# brctl show (only virbr1-nic is attached to the bridge, vnet0 and vnet1 are not)
bridge name     bridge id               STP enabled     interfaces
br0             8000.00247e16eba8       yes             eth0
virbr1          8000.5254009000a9       yes             virbr1-nic

# ip a (output filtered)
45: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:54:00:90:8c:58 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe90:8c58/64 scope link
       valid_lft forever preferred_lft forever
46: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:54:00:70:79:65 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe70:7965/64 scope link
       valid_lft forever preferred_lft forever
47: virbr1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 52:54:00:90:00:a9 brd ff:ff:ff:ff:ff:ff
    inet 10.189.64.1/20 brd 10.189.79.255 scope global virbr1
48: virbr1-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master virbr1 state DOWN qlen 500
    link/ether 52:54:00:90:00:a9 brd ff:ff:ff:ff:ff:ff
Comment 4 Cole Robinson 2016-04-10 11:08:54 EDT
Sorry this never received a timely response. If a network is restarted, I think the only way to get back VM connectivity is to set the VM NIC link down and then back up; see virsh domif-getlink / domif-setlink.
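
A sketch of that recovery (editorial, untested; the MAC is vm5's virbr1-facing NIC from the XML in the description):

# Bounce the guest NIC link so libvirt re-plugs it into the recreated bridge.
virsh domif-setlink vm5 52:54:00:8f:dc:8d down
virsh domif-setlink vm5 52:54:00:8f:dc:8d up
virsh domif-getlink vm5 52:54:00:8f:dc:8d    # expect: 52:54:00:8f:dc:8d up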

There are bugs filed elsewhere to improve this situation:

starting networks on demand with a VM: https://bugzilla.redhat.com/show_bug.cgi?id=960981
providing a virtual network 'restart' which keeps VM connectivity: https://bugzilla.redhat.com/show_bug.cgi?id=1014554

Duping to the latter

*** This bug has been marked as a duplicate of bug 1014554 ***
