Bug 1174611

Summary: Upgrade from RHEV-H 6.5 to RHEV-H 6.6 during upgrade from RHEV 3.4 to RHEV 3.5 Wiped Network Static IPs
Product: Red Hat Enterprise Virtualization Manager
Component: vdsm
Version: 3.5.0
Target Release: 3.5.0
Hardware: x86_64
OS: Unspecified
Status: CLOSED ERRATA
Severity: high
Priority: high
Whiteboard: network
oVirt Team: Network
Fixed In Version: vdsm-4.16.8.1-6.el6ev
Doc Type: Bug Fix
Reporter: Scott Herold <sherold>
Assignee: Dan Kenigsberg <danken>
QA Contact: Michael Burman <mburman>
CC: bazulay, cshao, danken, dfediuck, ecohen, fdeutsch, gklein, gouyang, hadong, huiwa, iheim, leiwang, lpeer, lsurette, lvernia, michal.skrivanek, nyechiel, pnovotny, pstehlik, sherold, ybronhei, ycui, yeylon, ylavi
Type: Bug
Last Closed: 2015-02-15 09:14:37 UTC
Bug Blocks: 1164308, 1164311
Attachments:
- vdsm logs
- nic-up.png
- vdsm logs comment16
- screenshot boot: network and vdsm
- network-rhevh-before.png
- network-rhevh-after.png
- output sleep 0.5
- output sleep 5

Description Scott Herold 2014-12-16 05:53:05 UTC
Description of problem:

The upgrade process of the Manager itself was incredibly smooth.  Upon upgrading the manager to 3.5, everything came back up with no issues, and all my VMs were happy.  This is where I made my first mistake (user error).  I upgraded my RHEV-H hosts to what I THOUGHT was the latest build that would be compatible with 3.5.  It turns out I was just updating to the latest 6.5 RHEV-H image, which I apparently had neglected to do in the past.  Upon upgrading and moving all hosts to maintenance, I updated my cluster compatibility to RHEV 3.5.  Of course this is where I noticed my fatal flaw, as all of my hosts dropped to non-operational status, since there was a version mismatch of 3.4 vdsm in a 3.5 cluster type.

I got a hold of the RHEV-H 6.6 build (20141212) after talking with Fabian on IRC this morning during the build meeting.  Once I installed the RPM on the engine, I was able to upgrade all of my hosts to the latest 6.6 build.  Upgrade went smoothly, but I still ran into issues where my hosts were remaining non-operational.  During the upgrade process to the RHEV-H 6.6 image, my secondary IP addresses were wiped.  In my environment, my primary interface is RHEV-M, and is used for VM traffic (using VLANs), Console access, and Live Migration.  I have a secondary, independent 1 Gb network dedicated to my iSCSI and NFS storage.  This uses a completely independent network switch and shares nothing with the communication network.  My static IPs for these interfaces on all 3 hosts were wiped clean.

I was able to re-enter the IP on the first Host, and it quickly entered an operational state.  The other two hosts required a reboot before they would enter an operational state, even though all networking was 100% functional from the console of the hosts.  After rebooting those hosts, then upgrading my Datacenter to Compat Version 3.5, I was finally 100% operational running on vt13.3.  

Version-Release number of selected component (if applicable):
vt13.3 with RHEV-H 6.6 20141212.0

How reproducible:
All three of my RHEV-H nodes failed with the exact same issue

Steps to Reproduce:
Fully documented above

Actual results:
Static IPs from secondary network interfaces were cleared, and I had to re-enter the IP, subnet, and gateway information.

Expected results:
Seamless upgrade including all static IPs

Additional info:
I SHOULD be able to provide either log files, or provide a login directly to the RHEV-M or RHEV-H nodes (SSH).  My environment is at your disposal upon request.

Comment 1 haiyang,dong 2014-12-16 07:44:31 UTC
Hey sherold,

Could you provide clear, detailed steps to reproduce this bug? From your description of the problem, I don't know how to reproduce it. Thanks.

Comment 2 Scott Herold 2014-12-16 13:14:54 UTC
I discovered this during a larger upgrade process, so I don't know if it was the RHEV-H upgrade or the RHEV-M upgrade that ultimately led to the issues.

1) Updated RHEV-M from 3.4 to 3.5(vt13.3)
2) Ensured all hosts were upgraded to latest RHEV-H 6.5 for 3.4
3) Moved hosts into maintenance mode
4) Updated Cluster Compatibility version to 3.5
5) Upgraded all hosts to RHEV-H 6.6 20141212.0
6) Tried to reactivate RHEV-H hosts to find that static IPs on secondary NICs were reset
7) Reset all static IPs for secondary NICs
8) The 1st host came online by itself.
9) The 2 other hosts required reboots, even though all network connectivity on the secondary network was fully functional from the RHEV-H hosts' console

Comment 4 haiyang,dong 2014-12-16 14:41:37 UTC
I couldn't reproduce this bug with the following steps:
1. Clean install RHEV-H 6.5u8 for RHEV 3.4.3 (rhev-hypervisor6-6.5-20141017.0)
2. Configure RHEV-H 6.5u8 network with ipv4+static
3. Add RHEV-H 6.5u8 into rhevm latest 3.5 build(vt13.3) with Data Centers 3.4 +  Cluster Compatibility version 3.4
4. Moved RHEV-H 6.5u8 into maintenance mode
5. Change Data Centers and Cluster Compatibility version into 3.5
6. Upgraded RHEV-H 6.5u8 to RHEV-H 6.6 20141212.0 via rhevm webui

After step 6, the RHEV-H upgrade succeeded, the static network configuration was preserved after booting into the new RHEV-H version, and the RHEV-H host status was also Up on the RHEV-M side.

Comment 5 Scott Herold 2014-12-16 22:47:31 UTC
(In reply to haiyang,dong from comment #4)
> I couldn't reproduce this bug with the follow steps:
> 1. Clean install RHEV-H 6.5u8 for RHEV 3.4.3
> (rhev-hypervisor6-6.5-20141017.0)
> 2. Configure RHEV-H 6.5u8 network with ipv4+static

I assume for this step you configured multiple interfaces, and not only the primary/RHEVM interface?

> 3. Add RHEV-H 6.5u8 into rhevm latest 3.5 build(vt13.3) with Data Centers
> 3.4 +  Cluster Compatibility version 3.4
> 4. Moved RHEV-H 6.5u8 into maintenance mode
> 5. Change Data Centers and Cluster Compatibility version into 3.5
> 6. Upgraded RHEV-H 6.5u8 to RHEV-H 6.6 20141212.0 via rhevm webui
> 
> After step6 , upgrade rhevh success and network configuration with static
> also keep the same after boot the new rhevh version, and the status of rhevh
> host also was up in rhevm side.

If this passes in your environment based on the verification above, I'm happy to attribute this to my environment, which is always running the latest builds on an almost weekly basis and is not a "normal" customer environment.

Comment 6 Doron Fediuck 2014-12-29 08:12:23 UTC
Based on comment 5, can you please try to reproduce on a multiple-NIC machine?

Comment 7 haiyang,dong 2014-12-29 09:35:38 UTC
(In reply to Scott Herold from comment #5)
> (In reply to haiyang,dong from comment #4)
> 
> I assume for this step you configured multiple interfaces, and not only the
> primary/RHEVM interface?
> 

Hey sherold,

I want to make this clear before trying to reproduce: as far as I know, we can only configure the primary interface (meaning only one NIC, not multiple NICs) via the TUI menu in RHEV-H.

Could you tell me what steps you used to configure the second NIC with static IPs:
(a) via the Setup Host Networks window in the RHEV-M admin portal
(b) by pressing "F2" to enter the shell console of RHEV-H and configuring the ifcfg-* files in /etc/sysconfig/network-scripts/
(c) or some other method

Comment 10 Douglas Schilling Landgraf 2014-12-30 02:14:01 UTC
Created attachment 974266 [details]
vdsm logs

Comment 11 Douglas Schilling Landgraf 2014-12-30 02:16:48 UTC
virsh # net-list

Name          State      Autostart    Persistent
-------------------------------------------------------
;vdsmdummy;   active     no           no
vdsm-rhevm    active     yes          yes

Comment 12 Scott Herold 2014-12-30 04:24:22 UTC
(In reply to haiyang,dong from comment #7)
> (In reply to Scott Herold from comment #5)
> > (In reply to haiyang,dong from comment #4)
> > 
> > I assume for this step you configured multiple interfaces, and not only the
> > primary/RHEVM interface?
> > 
> 
> Hey sherold,
> 
> I want to make it clean in here before try to reproduce, as i know we could
> only configure primary interface(mean only could configure one nic, not
> multiple NICs) via TUI menu in rhevh.
> 
> could you tell me what your steps to configure the second nic with static
> IPs,
> (a) via Setup Host Networks window in rhevm admin portal
> (b) via Press "F2" to into shell console of rhevh and configure ifcfg-* fils
> in /etc/sysconfig/network-scripts/
> (c) or others method

I set the static IP for the rhev-m interface in the TUI.  Once I had the host properly connected to RHEV-M:

1) I went into the RHEV-M UI
2) In the "Networks" tab, I created a new network called "nfs" with all default settings
3) Drilled down to the Hosts Tab
4) Went into the Network Interfaces sub-tab
5) Setup Host Networks
6) Dragged "nfs" network to eth1
7) Edited new Network/Nic combo to run Static IP with NO GATEWAY (Non-routable storage-dedicated network)
8) Ensured "Save network configuration" was checked, and clicked OK
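
For reference, the result of step 7 on the host is a bridge ifcfg file along these lines (a sketch modeled on the VDSM-generated files quoted later in this bug; the IP address below is hypothetical):

# /etc/sysconfig/network-scripts/ifcfg-nfs (sketch; IP is hypothetical)
DEVICE=nfs
ONBOOT=yes
TYPE=Bridge
DELAY=0
IPADDR=192.168.100.10
NETMASK=255.255.255.0
BOOTPROTO=none
# No GATEWAY line and DEFROUTE=no: non-routable, storage-dedicated network
DEFROUTE=no
NM_CONTROLLED=no
HOTPLUG=no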

Comment 13 cshao 2014-12-30 09:15:28 UTC
RHEVH QE can't reproduce this issue.

Test version:
rhev-hypervisor6-6.5-20141017.0
rhev-hypervisor6-6.6-20141218.0.el6ev
ovirt-node-3.1.0-0.37.20141218gitcf277e1.el6.noarch
vdsm-4.16.8.1-4.el6ev.x86_64
RHEVM vt13.4(rhevm-3.5.0-0.26.el6ev.noarch)

Test steps:
1. Installed rhev-hypervisor6-6.5-20141017.0 on a server with 2 NICs (eth0 and eth1)
2. Configured RHEV-H with a static IP (eth1: 192.168.22.162) and registered/approved it into RHEV-M 3.5 (Data Center 3.4)
3. Logged into the RHEV-M UI
4. In the "Networks" tab, created a new network called "cshao" with all default settings
5. Drilled down to the Hosts tab
6. Went into the Network Interfaces sub-tab -> Setup Host Networks.
7. Dragged "cshao" network to eth0
8. Edited new Network/Nic combo to run Static IP (IP: 192.168.22.163 Subnet Mask:255.255.255.0  Gateway:192.168.22.1)
9. Ensured "Save network configuration" was checked, and clicked OK.
10. Moved hosts into maintenance mode.
11. Upgraded RHEV-H to rhev-hypervisor6-6.6-20141218.0.el6ev.
12. Updated the Cluster Compatibility version to 3.5.

Test result:
All NICs obtained IPs, including the secondary network interfaces.

Please see attachment "nic-up.png" for more details.

Comment 14 cshao 2014-12-30 09:16:04 UTC
Created attachment 974329 [details]
nic-up.png

Comment 15 cshao 2014-12-30 10:07:14 UTC
Updating with a new test result here, according to #c8.

Test version:
rhev-hypervisor6-6.5-20141017.0
rhev-hypervisor6-6.6-20141218.0.el6ev
ovirt-node-3.1.0-0.37.20141218gitcf277e1.el6.noarch
vdsm-4.16.8.1-4.el6ev.x86_64
RHEVM vt13.4(rhevm-3.5.0-0.26.el6ev.noarch)

Test steps:
1. Installed rhev-hypervisor6-6.5-20141017.0 on a server with 2 NICs (eth0 and eth1)
2. Configured RHEV-H with a static IP (eth1: 192.168.22.162) and registered/approved it into RHEV-M 3.5 (Data Center 3.4)
3. On RHEV-H, pressed F2 to enter the console and configured the additional NIC (eth0) settings, for example:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0
 DEVICE=eth0
 BOOTPROTO=static
 IPADDR=192.168.22.163
 NETMASK=255.255.255.0
 GATEWAY=192.168.22.0
 ONBOOT=yes

Execute persist:
# persist /etc/sysconfig/network-scripts/ifcfg-eth0

4. Installed the rhev-hypervisor6-6.6-20141218.0.el6ev RPM on the RHEV-M machine
5. Put RHEV-H into maintenance mode in the Web Admin
6. Via the webadmin, clicked Upgrade and executed the upgrade to rhev-h-20141218.0.

Test result:
After reboot, the rhevm static IP address is gone and the host is in Non Responsive mode.
Maybe it is related to Bug 1176048 - [6.6-3.5]Failed to upgrade hypervisor via RHEVM 3.5

Comment 18 Douglas Schilling Landgraf 2014-12-30 11:48:00 UTC
Created attachment 974347 [details]
vdsm logs comment16

Comment 19 Douglas Schilling Landgraf 2014-12-30 11:49:05 UTC
Created attachment 974348 [details]
screenshot boot: network and vdsm

Comment 20 Douglas Schilling Landgraf 2014-12-30 12:18:16 UTC
(In reply to shaochen from comment #15)

> Test result:
> After reboot rhev-m static ip address is gone and host is Non Responsive
> mode.
> Maybe it is related with Bug 1176048 - [6.6-3.5]Failed to upgrade hypervisor
> via RHEVM 3.5

Hi shaochen,

Thanks for confirming the comment#8 report. However, I don't think it's related to bug#1176048, as in that bug the reboot never happens and it's related to ovirt-node-plugin-vdsm. It seems that comment#8 has the same root cause as this bug. Would you mind re-testing using my comment#18 as a base, please?

Thanks!

Comment 21 Dan Kenigsberg 2014-12-30 13:41:35 UTC
vdsm/upgrade.log has

MainThread::DEBUG::2014-12-30 11:38:18,387::unified_persistence::45::root::(run) upgrade-unified-persistence upgrade persisting networks {'rhevm': {'nic': 'eth0', 'stp': False, 'bridged': True, 'mtu': 1500}, 'nfs': {'nic': 'eth1', 'stp': False, 'bridged': True, 'mtu': 1500}} and bondings {}

which lacks IP addresses.
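
(For contrast, a healthy persisted entry would presumably carry the IP configuration as well, e.g. 'nfs': {'nic': 'eth1', 'bridged': True, 'mtu': 1500, 'ipaddr': '192.168.x.x', 'netmask': '255.255.255.0'} — the attribute names follow vdsm's setupNetworks convention, and the values here are illustrative only.)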

Comment 22 Dan Kenigsberg 2014-12-30 14:13:14 UTC
This log is created when vdsm is started, after the vdsm code has been upgraded. The fact that addresses are missing there suggests either a vdsm bug (which I do not see at the moment) or that networking was down when the vdsmd service was first started.

Could the latter hypothesis be true? Does rhev-h upgrade ever start vdsm before network is up?

Comment 23 cshao 2014-12-31 02:43:44 UTC
(In reply to Douglas Schilling Landgraf from comment #20)
> (In reply to shaochen from comment #15)
> 
> > Test result:
> > After reboot rhev-m static ip address is gone and host is Non Responsive
> > mode.
> > Maybe it is related with Bug 1176048 - [6.6-3.5]Failed to upgrade hypervisor
> > via RHEVM 3.5
> 
> Hi shaochen,
> 
> Thanks for confirming the comment#8 report. However, I don't think it's
> related to bug#1176048 as in this bug the reboot never happens and it's
> related to ovirt-node-plugin-vdsm. It seems that comment#8 is affected with
> the same root cause as this bug. Do you mind to re-test using my comment#18
> as base please? 
> 
> Thanks!
 
I have reproduced this issue this time.

Note: for #c13, step 4, after creating the new network in the "Networks" tab, I had unselected the "VM network" option, so all NICs obtained IPs, including the secondary network interfaces.
This time I selected the "VM network" option, and the issue reproduced.

Test version:
rhev-hypervisor6-6.5-20141017.0
rhev-hypervisor6-6.6-20141218.0.el6ev
ovirt-node-3.1.0-0.37.20141218gitcf277e1.el6.noarch
vdsm-4.16.8.1-4.el6ev.x86_64
RHEVM vt13.4(rhevm-3.5.0-0.26.el6ev.noarch)

Test steps:
1. Installed rhev-hypervisor6-6.5-20141017.0 on a server with 4 NICs (eth0, eth1, eth2, eth3)
2. Configured RHEV-H with a static IP (eth1: 192.168.22.163) and registered/approved it into RHEV-M 3.5 (Data Center 3.4)
3. Logged into the RHEV-M UI
4. In the "Networks" tab, created a new network called "nfs" with all default settings
5. Drilled down to the Hosts tab
6. Went into the Network Interfaces sub-tab -> Setup Host Networks.
7. Dragged "nfs" network to eth2
8. Edited new Network/Nic combo to run Static IP (IP: 192.168.22.162 Subnet Mask:255.255.255.0  Gateway:192.168.22.1)
9. Ensured "Save network configuration" was checked, and clicked OK.
10. Moved hosts into maintenance mode.
11. Upgraded RHEV-H to rhev-hypervisor6-6.6-20141218.0.el6ev.
12. Updated the Cluster Compatibility version to 3.5.

Test result:
No static IP address anymore; at this stage, the host is in Non Responsive status.
Please see attachment for more details.

Comment 24 cshao 2014-12-31 02:44:41 UTC
Created attachment 974673 [details]
network-rhevh-before.png

Comment 25 cshao 2014-12-31 02:45:40 UTC
Created attachment 974674 [details]
network-rhevh-after.png

Comment 26 Douglas Schilling Landgraf 2015-01-02 12:41:02 UTC
(In reply to Dan Kenigsberg from comment #22)
> This log is created when vdsm is started, after vdsm code has been upgraded.
> The fact that addresses are missing there suggest either of a vdsm bug
> (which I do not see at the moment) or that networking was down when vdsmd
> service was first started.
> 
> Could the latter hypothesis be true? Does rhev-h upgrade ever start vdsm
> before network is up?

My understanding is that this should not happen, as VDSM triggers all needed services during start, as listed below. What we do at boot is run hooks [1].

init/sysvinit/vdsmd.init.in
<snip>
NEEDED_SERVICES="multipathd rpcbind ntpd wdmd sanlock network libvirtd
                 supervdsmd"

</snip>

<snip>
start() {
    test_already_running && return 0

    shutdown_conflicting_srv "${CONFLICTING_SERVICES}" || return 1
    start_needed_srv "${NEEDED_SERVICES}" || return 1

    # "service iscsid start" may not start becasue we configure node.startup to
    # manual. See /etc/init.d/iscsid.
    service iscsid status >/dev/null 2>&1 || service iscsid force-start \
        || return 1

    "${VDSMD_INIT_COMMON}" --pre-start || return 1

    echo $"Starting up vdsm daemon: "
</snip>

However, what I noticed from my last debugging is that there might be a race here: if I introduce a delay of about 20s after the start_needed_srv call, the upgrade executes correctly and the host comes UP with all interfaces set up.

Here is an example of the data after the upgrade happened; the host has already rebooted, and VDSM is about to start and execute the upgrade of the ifcfg files:

# cat /etc/init.d/vdsmd 
<snip>
start() {
    test_already_running && return 0

    shutdown_conflicting_srv "${CONFLICTING_SERVICES}" || return 1
    start_needed_srv "${NEEDED_SERVICES}" || return 1
    service vdsmd status &> /tmp/output
    service network status &>> /tmp/output
    ifconfig -a &>> /tmp/output
    cat /etc/sysconfig/network-scripts/ifcfg-eth0 &>> /tmp/output
    cat /etc/sysconfig/network-scripts/ifcfg-eth1 &>> /tmp/output
    cat /etc/sysconfig/network-scripts/ifcfg-rhevm &>> /tmp/output


# cat /tmp/output
VDS daemon is not running
Configured devices:
lo eth0 eth1 nfs rhevm
Currently active devices:
lo
eth0      Link encap:Ethernet  HWaddr 52:54:00:84:DE:3D
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
          Interrupt:11 Base address:0x4000

eth1      Link encap:Ethernet  HWaddr 52:54:00:B5:36:58
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
          Interrupt:10 Base address:0xe000

eth2      Link encap:Ethernet  HWaddr 52:54:00:62:E0:59
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
          Interrupt:10

eth3      Link encap:Ethernet  HWaddr 52:54:00:FC:EB:6F
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
          Interrupt:11 Base address:0x2000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:32 errors:0 dropped:0 overruns:0 frame:0
          TX packets:32 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2720 (2.6 KiB)  TX bytes:2720 (2.6 KiB)

# Generated by VDSM version 4.14.17-1.el6ev
DEVICE=eth0
ONBOOT=yes
HWADDR=52:54:00:84:de:3d
BRIDGE=rhevm
NM_CONTROLLED=no
PEERNTP=yes
# Generated by VDSM version 4.14.17-1.el6ev
DEVICE=eth1
ONBOOT=yes
HWADDR=52:54:00:b5:36:58
BRIDGE=nfs
MTU=1500
NM_CONTROLLED=no
STP=no
# Generated by VDSM version 4.14.17-1.el6ev
DEVICE=rhevm
ONBOOT=yes
TYPE=Bridge
DELAY=0
IPADDR=192.168.122.170
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
BOOTPROTO=none
DEFROUTE=yes
NM_CONTROLLED=no
PEERNTP=yes
HOTPLUG=no


[1] https://github.com/oVirt/ovirt-node-plugin-vdsm/tree/master/hooks/on-boot

Comment 27 Douglas Schilling Landgraf 2015-01-05 16:00:01 UTC
Created attachment 976503 [details]
output sleep 0.5

Comment 28 Douglas Schilling Landgraf 2015-01-05 16:04:16 UTC
I will attach a few output files produced with the sequence below (the attachments correspond to sleep values of 0.5 and 5 seconds).

 <snip>
 start_needed_srv "${NEEDED_SERVICES}" || return 1
 service vdsmd status &> /tmp/output
 service network status &>> /tmp/output
 ifconfig -a &>> /tmp/output
 sleep XX
 ifconfig -a &>> /tmp/output
 cat /etc/sysconfig/network-scripts/ifcfg-eth0 &>> /tmp/output
 cat /etc/sysconfig/network-scripts/ifcfg-eth1 &>> /tmp/output
 cat /etc/sysconfig/network-scripts/ifcfg-rhevm &>> /tmp/output
 cat /etc/sysconfig/network-scripts/ifcfg-nfs &>> /tmp/output

Comment 29 Douglas Schilling Landgraf 2015-01-05 19:19:29 UTC
Created attachment 976550 [details]
output sleep 5

Comment 30 Douglas Schilling Landgraf 2015-01-06 16:19:39 UTC
Hi Dan,

I did other tests as we discussed; please see the data below.

#1) I have executed the upgrade and the rhevm/nfs interfaces are clean again; status of the node: Non Responsive.

#2) I have set up the rhevm and nfs IP addresses as static

DEVICE=rhevm
ONBOOT=yes
TYPE=Bridge
DELAY=0
IPADDR=192.168.122.170
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
BOOTPROTO=none
DEFROUTE=yes
NM_CONTROLLED=no
PEERNTP=yes
HOTPLUG=no

DEVICE=nfs
ONBOOT=yes
TYPE=Bridge
DELAY=0
IPADDR=192.168.122.65
NETMASK=255.255.255.0
BOOTPROTO=none
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
STP=no
HOTPLUG=no


# service network stop
# time service network start
Bringing up loopback interface:  [OK]
Bringing up interface eth0:      [OK]
Bringing up interface nfs:       Determining if ip address 192.168.122.65 is already in use for device nfs  [OK]
Bringing up interface rhevm:     Determining if ip address 192.168.122.170 is already in use for device rhevm  [OK]

real 0m4.384s
user 0m0.224s
sys  0m0.135s


# ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 52:54:00:b5:36:58 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:62:e0:59 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:fc:eb:6f brd ff:ff:ff:ff:ff:ff
9: ;vdsmdummy;: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 6a:56:82:a3:da:d5 brd ff:ff:ff:ff:ff:ff
14: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
15: nfs: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 86:92:f9:61:69:17 brd ff:ff:ff:ff:ff:ff


In addition, I tried executing the same network stop and start with the script below:

service network stop
time service network start &
for ((i=0; i<20; i++)); do date; ip link show rhevm; sleep 1; done

real 0m5.042s (this value varies between roughly 0m3 and 0m5)
user 0m0.122s
sys  0m0.053s

Tue Jan  6 16:14:55 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:14:56 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:14:57 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:14:58 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:14:59 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:15:00 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:15:01 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:15:02 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:15:03 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:15:04 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:15:05 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:15:06 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:15:07 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:15:08 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:15:09 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:15:10 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:15:11 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:15:12 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:15:13 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
Tue Jan  6 16:15:14 UTC 2015
22: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff

Comment 31 Douglas Schilling Landgraf 2015-01-06 16:31:07 UTC
Hi Dan,

Other points that I would like to share from supervdsm.log; see my comments inline, marked with *.

MainThread::DEBUG::2015-01-06 15:00:40,248::vdsm-restore-net-config::55::root::(unified_restoration) Removing all networks ({'rhevm': {'remove': True}, 'nfs': {'remove': True}}) and bonds ({}) in running config.

* Ok, vdsm will remove all networks

=======

MainThread::DEBUG::2015-01-06 15:00:40,286::api::619::setupNetworks::(setupNetworks) Setting up network according to configuration: networks:{'rhevm': {'remove': True}, 'nfs': {'remove': True}}, bondings:{}, options:{'_inRollback': True, 'connectivityCheck': False}


MainThread::DEBUG::2015-01-06 15:00:40,291::api::640::setupNetworks::(setupNetworks) Removing network 'rhevm'


MainThread::INFO::2015-01-06 15:00:40,292::api::426::root::(delNetwork) Removing network rhevm with vlan=None, bonding=None, nics=['eth0'],options={}


MainThread::DEBUG::2015-01-06 15:00:40,293::ifcfg::324::root::(_atomicNetworkBackup) Backed up rhevm


* Backed up ifcfg-rhevm

==========

MainThread::DEBUG::2015-01-06 15:00:40,296::models::172::root::(remove) Removing bridge Bridge(rhevm: Nic(eth0))
MainThread::DEBUG::2015-01-06 15:00:40,296::utils::739::root::(execCmd) /sbin/ifdown rhevm (cwd None)

MainThread::DEBUG::2015-01-06 15:00:40,434::utils::759::root::(execCmd) SUCCESS: <err> = 'RTNETLINK answers: No such file or directory\nRTNETLINK answers: No such file or directory\n'; <rc> = 0
MainThread::DEBUG::2015-01-06 15:00:40,434::__init__::133::root::(_removeSourceRoute) Removing source route for device rhevm

MainThread::DEBUG::2015-01-06 15:00:40,434::ifcfg::371::root::(_atomicBackup) Backed up /etc/sysconfig/network-scripts/rule-rhevm

MainThread::DEBUG::2015-01-06 15:00:40,960::ifcfg::281::root::(_removeFile) Removed file /etc/sysconfig/network-scripts/route-rhevm
MainThread::DEBUG::2015-01-06 15:00:40,962::utils::739::root::(execCmd) 
/usr/sbin/brctl delbr rhevm (cwd None)
MainThread::DEBUG::2015-01-06 15:00:40,986::utils::759::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2015-01-06 15:00:40,986::ifcfg::371::root::(_atomicBackup) Backed up /etc/sysconfig/network-scripts/ifcfg-rhevm

* Backed up

MainThread::DEBUG::2015-01-06 15:00:41,039::ifcfg::281::root::(_removeFile) Removed file /etc/sysconfig/network-scripts/ifcfg-rhevm

* Removed the rhevm


MainThread::DEBUG::2015-01-06 15:00:41,043::utils::739::root::(execCmd) /sbin/ifdown eth0 (cwd None)
MainThread::DEBUG::2015-01-06 15:00:41,112::utils::759::root::(execCmd) SUCCESS: <err> = 'bridge rhevm does not exist!\n'; <rc> = 0


MainThread::DEBUG::2015-01-06 15:00:42,990::ifcfg::281::root::(_removeFile) Removed file /etc/sysconfig/network-scripts/ifcfg-eth1
MainThread::DEBUG::2015-01-06 15:00:43,038::ifcfg::281::root::(_removeFile) Removed file /etc/sysconfig/network-scripts/ifcfg-eth0
MainThread::DEBUG::2015-01-06 15:00:43,038::netconfpersistence::134::root::(_getConfigs) Non-existing config set.
MainThread::DEBUG::2015-01-06 15:00:43,039::netconfpersistence::134::root::(_getConfigs) Non-existing config set.

MainThread::DEBUG::2015-01-06 15:00:43,049::vdsm-restore-net-config::68::root::(unified_restoration) Calling setupNetworks with networks ({'rhevm': {u'nic': u'eth0', u'stp': False, u'bridged': True, u'mtu': 1500}, 'nfs': {u'nic': u'eth1', u'stp': False, u'bridged': True, u'mtu': 1500}}) and bond ({}).


MainThread::DEBUG::2015-01-06 15:00:43,061::api::619::setupNetworks::(setupNetworks) Setting up network according to configuration: networks:{'rhevm': {u'nic': u'eth0', u'stp': False, u'bridged': True, u'mtu': 1500}, 'nfs': {u'nic': u'eth1', u'stp': False, u'bridged': True, u'mtu': 1500}}, bondings:{}, options:{'_inRollback': True, 'connectivityCheck': False}


MainThread::DEBUG::2015-01-06 15:00:43,082::api::678::setupNetworks::(setupNetworks) Adding network 'rhevm'
MainThread::DEBUG::2015-01-06 15:00:43,082::api::278::root::(addNetwork) validating network...
MainThread::INFO::2015-01-06 15:00:43,082::api::300::root::(addNetwork) Adding network rhevm with vlan=None, bonding=None, nics=[u'eth0'], bondingOptions=None, mtu=1500, bridged=True, defaultRoute=True,options={u'stp': False, 'implicitBonding': True}
MainThread::DEBUG::2015-01-06 15:00:43,083::ifcfg::538::root::(writeConfFile) Writing to file /etc/sysconfig/network-scripts/ifcfg-rhevm configuration:
# Generated by VDSM version 4.16.8.1-4.el6ev
DEVICE=rhevm
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=yes
MTU=1500
DEFROUTE=yes
NM_CONTROLLED=no
HOTPLUG=no

* VDSM created the new ifcfg-rhevm, but it doesn't contain the same settings as the previous one. So my question is: why does vdsm not use the backup file to re-create the ifcfg-* interfaces? Could that help with the delay between network start and vdsm start?


MainThread::DEBUG::2015-01-06 15:00:43,275::utils::739::root::(execCmd) /sbin/ifdown eth0 (cwd None)
MainThread::DEBUG::2015-01-06 15:00:43,340::utils::759::root::(execCmd) SUCCESS: <err> = 'bridge rhevm does not exist!\n'; <rc> = 0
MainThread::DEBUG::2015-01-06 15:00:43,340::utils::739::root::(execCmd) /sbin/ifup eth0 (cwd None)
MainThread::DEBUG::2015-01-06 15:00:43,397::utils::759::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2015-01-06 15:00:43,398::__init__::120::root::(_addSourceRoute) Adding source route: name=rhevm, addr=None, netmask=None, gateway=None
MainThread::ERROR::2015-01-06 15:00:43,398::__init__::126::root::(_addSourceRoute) invalid input for source routing: name=rhevm, addr=None, netmask=None, gateway=None
MainThread::DEBUG::2015-01-06 15:00:43,398::utils::739::root::(execCmd) /sbin/ifup rhevm (cwd None)
MainThread::DEBUG::2015-01-06 15:00:43,526::utils::759::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2015-01-06 15:00:43,529::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 43 edom: 19 level: 2 message: Network not found: no network with matching name 'vdsm-rhevm'
MainThread::DEBUG::2015-01-06 15:00:43,529::ifcfg::324::root::(_atomicNetworkBackup) Backed up rhevm
MainThread::INFO::2015-01-06 15:00:43,535::netconfpersistence::68::root::(setNetwork) Adding network rhevm({'nic': u'eth0', u'stp': False, u'bridged': True, u'mtu': 1500})


MainThread::DEBUG::2015-01-06 15:00:43,877::netconfpersistence::166::root::(_clearDisk) No existent config to clear.
MainThread::INFO::2015-01-06 15:00:43,878::netconfpersistence::182::root::(save) Saved new config RunningConfig({'rhevm': {'nic': u'eth0', u'stp': False, u'bridged': True, u'mtu': 1500}, 'nfs': {'nic': u'eth1', u'stp': False, u'bridged': True, u'mtu': 1500}}, {}) to /var/run/vdsm/netconf/nets/ and /var/run/vdsm/netconf/bonds/

Comment 32 Douglas Schilling Landgraf 2015-01-06 19:09:08 UTC
Hi Dan,

Here the new test:

init.d/vdsmd
===================
<snip>
start() {
    test_already_running && return 0

    shutdown_conflicting_srv "${CONFLICTING_SERVICES}" || return 1
    start_needed_srv "${NEEDED_SERVICES}" || return 1

    for ((i=0;i<50;i++)); do date +%s.%N; ip addr show rhevm; sleep 0.1; done &> /tmp/outputfor

    # "service iscsid start" may not start becasue we configure node.startup to
    # manual. See /etc/init.d/iscsid.
    service iscsid status >/dev/null 2>&1 || service iscsid force-start \
        || return 1
</snip>

# cat /tmp/outputfor
1420569681.696045679
1420569681.908322251
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569682.044644463
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569682.152514981
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569682.273276537
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569682.389358567
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569682.506301850
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569682.621381044
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569682.737330715
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569682.845306766
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569682.956342038
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569683.064510983
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569683.176475161
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569683.284390221
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569683.396451267
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569683.505398415
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569683.613363029
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569683.725358396
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569683.833401460
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569683.945331242
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569684.053358796
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569684.165334272
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569684.278670051
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569684.389296358
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420569684.498509314
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569684.621652524
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569684.744309509
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569684.865678688
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569684.972361173
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569685.074551996
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569685.176556046
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569685.278651581
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569685.381620462
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569685.483627729
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569685.585560105
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569685.687530802
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569685.789573786
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569685.891510729
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569685.993537679
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569686.095675291
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569686.197717699
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569686.300684564
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420569686.406224675
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link
       valid_lft forever preferred_lft forever
1420569686.511352745
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link
       valid_lft forever preferred_lft forever
1420569686.623278994
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link
       valid_lft forever preferred_lft forever
1420569686.731362138
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link
       valid_lft forever preferred_lft forever
1420569686.843299156
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link
       valid_lft forever preferred_lft forever
1420569686.951334294
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link
       valid_lft forever preferred_lft forever
1420569687.063368610
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link
       valid_lft forever preferred_lft forever
1420569687.171392565
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link
       valid_lft forever preferred_lft forever

Then, ifconfig executed from the shell:
eth0      Link encap:Ethernet  HWaddr 52:54:00:84:DE:3D
          inet6 addr: fe80::5054:ff:fe84:de3d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:101 errors:0 dropped:0 overruns:0 frame:0
          TX packets:50 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5166 (5.0 KiB)  TX bytes:3508 (3.4 KiB)
          Interrupt:11 Base address:0x4000

eth1      Link encap:Ethernet  HWaddr 52:54:00:B5:36:58
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:98 errors:0 dropped:0 overruns:0 frame:0
          TX packets:42 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5008 (4.8 KiB)  TX bytes:2864 (2.7 KiB)
          Interrupt:10 Base address:0xe000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:444 errors:0 dropped:0 overruns:0 frame:0
          TX packets:444 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:88767 (86.6 KiB)  TX bytes:88767 (86.6 KiB)

nfs       Link encap:Ethernet  HWaddr 52:54:00:B5:36:58
          inet addr:192.168.122.178  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:feb5:3658/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:54 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2584 (2.5 KiB)  TX bytes:636 (636.0 b)

rhevm     Link encap:Ethernet  HWaddr 52:54:00:84:DE:3D
          inet addr:192.168.122.170  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe84:de3d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:59 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2852 (2.7 KiB)  TX bytes:636 (636.0 b)


In the RHEV-M GUI the status changed from Reboot to, fairly quickly, Non Responsive and then UP; all interfaces are UP.

Comment 33 Douglas Schilling Landgraf 2015-01-06 19:14:55 UTC
Data from a second upgrade using the same loop: for ((i=0;i<50;i++)); do date +%s.%N; ip addr show rhevm; sleep 0.1; done &> /tmp/outputfor

# cat /tmp/outputfor
1420570929.358650253
Device "rhevm" does not exist.
1420570929.515283835
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570929.637378901
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570929.749372294
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570929.864340377
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570929.981432380
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570930.099390643
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570930.217372189
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570930.333403447
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570930.451422059
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570930.559433046
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570930.667371209
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570930.775363654
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570930.883406517
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570930.993367449
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570931.101403031
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570931.208364686
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570931.316414224
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570931.424318826
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570931.532419353
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570931.640471999
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570931.750347481
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570931.858363510
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570931.966383979
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570932.076314580
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570932.191580231
6: rhevm: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
1420570932.329932024
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570932.453889551
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570932.568408564
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570932.683055035
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570932.787616819
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570932.889669482
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570932.995472242
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570933.100998930
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570933.204175277
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570933.307511963
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570933.410922610
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570933.514366996
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570933.617703485
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570933.720086193
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570933.822801506
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570933.924717192
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570934.028135602
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link tentative
       valid_lft forever preferred_lft forever
1420570934.132003323
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link
       valid_lft forever preferred_lft forever
1420570934.240223950
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link
       valid_lft forever preferred_lft forever
1420570934.349373528
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link
       valid_lft forever preferred_lft forever
1420570934.457384999
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link
       valid_lft forever preferred_lft forever
1420570934.569413359
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link
       valid_lft forever preferred_lft forever
1420570934.677376788
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link
       valid_lft forever preferred_lft forever
1420570934.785392833
6: rhevm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:84:de:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.170/24 brd 192.168.122.255 scope global rhevm
    inet6 fe80::5054:ff:fe84:de3d/64 scope link
       valid_lft forever preferred_lft forever


# ifconfig 
eth0      Link encap:Ethernet  HWaddr 52:54:00:84:DE:3D  
          inet6 addr: fe80::5054:ff:fe84:de3d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:413 errors:0 dropped:0 overruns:0 frame:0
          TX packets:47 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:19924 (19.4 KiB)  TX bytes:3262 (3.1 KiB)
          Interrupt:11 Base address:0x4000 

eth1      Link encap:Ethernet  HWaddr 52:54:00:B5:36:58  
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1831 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1187 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:168232 (164.2 KiB)  TX bytes:605805 (591.6 KiB)
          Interrupt:10 Base address:0xe000 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:1168 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1168 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:215569 (210.5 KiB)  TX bytes:215569 (210.5 KiB)

nfs       Link encap:Ethernet  HWaddr 52:54:00:B5:36:58  
          inet addr:192.168.122.178  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:feb5:3658/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1782 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1155 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:165570 (161.6 KiB)  TX bytes:603557 (589.4 KiB)

rhevm     Link encap:Ethernet  HWaddr 52:54:00:84:DE:3D  
          inet addr:192.168.122.170  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe84:de3d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:369 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:17488 (17.0 KiB)  TX bytes:636 (636.0 b)

Comment 34 Dan Kenigsberg 2015-01-07 11:03:42 UTC
This log confirms Douglas's original idea: the rhevm bridge is reported as UP only 2.81 seconds AFTER start_needed_srv finishes. It takes the rhevm bridge 1.7 more seconds to obtain its IP address (4.51 seconds in total).

At the moment, I don't know how this is to be solved, short of waiting 5 seconds before starting the upgrade (which is against my never-dream-to-solve-a-race-with-a-sleep religion).

Comment 35 Michal Skrivanek 2015-01-07 13:23:49 UTC
(In reply to Dan Kenigsberg from comment #34)
> At the moment, I don't know how this is to be solved, short of waiting 5
> seconds before starting the upgrade (which is against my
> never-dream-to-solve-a-race-with-a-sleep religion).

Well, an async wait monitoring the rhevm bridge IP every, say, 1s...? If this is on upgrade it shouldn't hurt so much; we have worse... (why does the mass VM recovery on vdsm startup come to my mind all the time? :-)
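
For illustration, such a bounded wait could look roughly like this (a sketch only, not the actual merged fix; the function name, the 60-second bound, and the hard-coded default bridge name are all assumptions):

wait_for_bridge_ip() {
    # Poll until the bridge reports an IPv4 address; give up after
    # 60 seconds so a dead network cannot block startup forever.
    local bridge=${1:-rhevm}
    for ((i = 0; i < 60; i++)); do
        ip -4 addr show "$bridge" 2>/dev/null | grep -q 'inet ' && return 0
        sleep 1
    done
    return 1
}

Called from the init script before the upgrade step (e.g. wait_for_bridge_ip rhevm || echo "rhevm bridge never got an IP" >&2), this bounds the wait instead of sleeping blindly.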

Comment 36 Yaniv Lavi 2015-01-08 13:19:35 UTC
For now, removing from the RC blocker list because this only affects VMs.
Aharon, can you please try to recreate this on real HW?



Yaniv

Comment 37 Scott Herold 2015-01-12 13:47:02 UTC
(In reply to Yaniv Dary from comment #36)
> For now removing for RC blocker list because this only affects VMs.
> Aharon, can you please try to recreate this on real HW?
> 
> 
> 
> Yaniv

This was originally reported against an engine running on a standalone physical host.

Comment 38 Martin Pavlik 2015-01-13 09:33:06 UTC
@Yaniv

As Scott points out in comment 37, this issue occurs on regular physical HW, so no, it does not affect only VMs. It can hit everyone who upgrades RHEV-H from 3.4 to 3.5. Please put the RC blocker back.

Comment 39 Yaniv Lavi 2015-01-14 06:49:08 UTC
I see the patches are merged, should this move to MODIFIED?

Comment 40 Michal Skrivanek 2015-01-14 08:27:52 UTC
missing downstream backport and ack

Comment 42 Michael Burman 2015-02-03 10:30:32 UTC
- Followed Douglas's steps from comment 16.

- Started with:
Red Hat Enterprise Virtualization Hypervisor release 6.5 (20141017.0.el6ev)
vdsm-4.14.17-1.el6ev.x86_64
on vt13.9

- Configured static IPs on rhevm (eth4) and on_qa_rhevh (eth0)

- Upgraded to:
rhev-hypervisor6.noarch 0:6.6-20150128.0.el6ev
vdsm-4.16.8.1-6.el6ev.x86_64

Static IPs are kept after the upgrade and host reboot. The host is Up.

cat /etc/sysconfig/network-scripts/ifcfg-rhevm
# Generated by VDSM version 4.16.8.1-6.el6ev
DEVICE=rhevm
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=yes
IPADDR=10.35.128.9
NETMASK=255.255.255.0
GATEWAY=10.35.128.254
BOOTPROTO=none
MTU=1500
DEFROUTE=yes
NM_CONTROLLED=no
HOTPLUG=no


cat /etc/sysconfig/network-scripts/ifcfg-on_qa_rhevh
# Generated by VDSM version 4.16.8.1-6.el6ev
DEVICE=on_qa_rhevh
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=no
IPADDR=5.5.5.5
NETMASK=255.255.255.0
BOOTPROTO=none
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
HOTPLUG=no

Verified on - 3.5.0-0.31.el6ev

Comment 44 Eyal Edri 2015-02-15 09:14:37 UTC
Bugs were moved by ERRATA to RELEASE PENDING but not closed, probably due to an errata error.
Closing as 3.5.0 is released.