Bug 1058257 - RHEV-H 20140112.0 - "Networking is not configured" warning
Summary: RHEV-H 20140112.0 - "Networking is not configured" warning
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-node-plugin-vdsm
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Target Release: 3.4.0
Assignee: Douglas Schilling Landgraf
QA Contact: Martin Pavlik
URL:
Whiteboard: node
Depends On:
Blocks:
 
Reported: 2014-01-27 11:03 UTC by Evgheni Dereveanchin
Modified: 2019-04-28 10:43 UTC

Fixed In Version: ovirt-node-plugin-vdsm-0.1.1-14.el6ev
Doc Type: Bug Fix
Doc Text:
Previously, for certain network configurations, the Red Hat Enterprise Virtualization Hypervisor administration interface reported that networking was not configured, even if networking was functional. Now, all working interfaces are marked correctly as "Managed".
Clone Of:
Environment:
Last Closed: 2014-06-09 14:26:19 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:




Links
System ID | Private | Priority | Status | Summary | Last Updated
Red Hat Product Errata RHBA-2014:0673 | 0 | normal | SHIPPED_LIVE | ovirt-node-plugin-vdsm bug fix and enhancement update | 2014-06-09 18:24:50 UTC
oVirt gerrit 24417 | 0 | None | None | None | Never

Description Evgheni Dereveanchin 2014-01-27 11:03:11 UTC
Description of problem:
For certain network configurations, the new RHEV-H image reports a "Networking is not configured" warning in the "Logging", "Kdump", "Remote Storage", and "RHN Registration" sections of the admin interface, even though networking is operating properly.

Version-Release number of selected component (if applicable):
rhev-hypervisor6-6.5-20140112.0.iso


Steps to Reproduce:
1) install RHEV-H from rhev-hypervisor6-6.5-20140112.0.iso, assign 4 Ethernet adapters.
2) boot the hypervisor, configure networking on eth0
3) enable SSH from the Security menu
4) connect host to manager
5) configure networks the following way:

eth0 |
      >- bond0 - vlan101 - rhevm (non VM network)
eth2 |


eth1 |
      >- bond1 - vlan102 - VMs
eth3 |

Actual results:
Networking works; however, the interface reports "Networking is not configured".

Expected results:
Networking works and no warning is displayed

Additional info:
This also applies to upgraded configurations; in previous RHEV-H releases, no warning was displayed for the same network configuration (VLAN on top of a bond for rhevm).

Comment 3 Evgheni Dereveanchin 2014-02-06 08:33:28 UTC
The issue is still present in the recently released rhev-hypervisor6-6.5-20140121.0.el6ev.

Do we need any additional info to move this forward?

Comment 4 Fabian Deutsch 2014-02-08 12:09:28 UTC
Hey Evgheni,
I'm moving this to the vdsm plugin, which is involved when RHEV-M is used.

Is this fixed once you enter the RHEV-M page in the TUI and go back to the status page afterwards?

Comment 5 Fabian Deutsch 2014-02-08 12:09:57 UTC
Douglas,

it seems that sync_mgmt() is somehow not called or not working correctly. Can you reproduce this bug?

Comment 8 Evgheni Dereveanchin 2014-02-12 11:21:05 UTC
Here's how I'm able to reproduce this:
1) create a RHEV-M datacenter with the "rhevm" network on a VLAN (not a VM network), and a new VM network on a different VLAN
2) get a new hypervisor machine, assign 4 NICs to it (2 might be enough as well)
3) connect all ports to network trunks
4) install RHEV-H v3.3 (tested on rhev-hypervisor6 20140112.0)
5) configure eth0 on the RHEV-H to receive IP on a VLAN
6) attach it to the RHEV-M
7) on RHEV-M approve the new node
8) press "Setup Host Networks" and combine pairs of adapters into bonds, placing rhevm on top of one bond and the VM network on top of the second bond
9) apply changes, ensure the node is still in "up" state after these actions
10) check RHEV-H portal to see the warnings.

Comment 13 Fabian Deutsch 2014-02-12 13:43:44 UTC
Hey Antoni,

can you tell me whether it is nowadays possible to use any name for the rhevm bridge?

And if so - when was this introduced?
We were still relying on the name being either rhevm or ovirtmgmt.

Comment 17 Antoni Segura Puimedon 2014-02-12 14:46:24 UTC
Hi Fabian,

It is not possible right now. The rhevm network will use the "rhevm" bridge if it is configured as a bridged network. If it is configured as a bridgeless network, then in the configuration from the bug description there wouldn't be a bridge, just the NICs, a bond over them, VLAN 101 over the bond, and finally the libvirt network definition pointing to the VLAN device.

You can't rely on a bridge existing for any network (except the networks for VM traffic). But if a bridge does exist, it is still named after the network (though that may change in the future).

Comment 18 Fabian Deutsch 2014-02-12 15:04:27 UTC
Hey Antoni,

thanks for that information.

The reason why Node wants to know about the "management" interface is that it wants to display some basic information about that NIC, e.g. whether it is "linked" and its IP address.

Can some vdsm command be used to determine the NIC which is actually used for management?

Comment 19 Antoni Segura Puimedon 2014-02-12 15:15:01 UTC
Sure thing!

You have two ways of knowing that:

Python level:
    In [1]: from vdsm import netinfo

    In [2]: netinfo.networks()
    Out[2]: 
    {'foo': {'bridged': False, 'iface': u'p1p2.27'},
     'ovirtmgmt': {'bridge': u'ovirtmgmt', 'bridged': True}}

In this bug's case, rhevm would look like foo: bridged = False and iface = (the name of the VLAN).

cmdline level:
    call vdsClient -s 0 getVdsCaps
    and parse the network information


Needless to say, IMHO the right way to go about it is the Python level.
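
For illustration only, here is a minimal sketch of how the Python-level approach could be applied to this bug's case; the 'rhevm' network name and the management_device() helper are assumptions, not part of any actual patch:

    # Minimal sketch (assumption: vdsm's netinfo module is importable on the host
    # and networks() returns the 'bridged'/'iface'/'bridge' keys shown above).
    from vdsm import netinfo

    def management_device(net_name='rhevm'):  # hypothetical helper name
        """Return the device to show for the management network, or None if absent."""
        nets = netinfo.networks()
        net = nets.get(net_name)
        if net is None:
            return None
        # Bridgeless networks expose 'iface' (e.g. a VLAN device on top of a bond);
        # bridged networks expose a 'bridge' named after the network.
        return net['bridge'] if net['bridged'] else net['iface']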

Comment 20 Fabian Deutsch 2014-02-12 15:24:48 UTC
Hey Antoni,

that looks good :)

But how can we tell which NIC is used for management when there is no bridge, as described in comment 14?

Comment 21 Antoni Segura Puimedon 2014-02-12 16:13:03 UTC
Well, what you should be checking is the device pointed at by the 'iface'
property for bridgeless networks and by 'bridge' for bridged networks (you
can tell which one to check with the 'bridged' key).

If you still want to know which physical device the VLAN or bond maps to,
you could probably do:

from vdsm import netinfo
import os


def phys_dev(device):
    """Walk down from a bridge, VLAN or bond to the underlying physical NIC."""
    bonds = netinfo.bondings()
    bridges = netinfo.bridges()
    vlans = netinfo.vlans()
    nics = netinfo.nics()
    # Physical NICs have a 'device' entry in sysfs; descend until we reach one.
    while not os.path.exists(os.path.join('/sys/class/net', device, 'device')):
        if device in vlans:
            # VLAN -> the device it sits on (bond or NIC).
            device = netinfo.getVlanDevice(device)
        elif device in bonds:
            # Bond -> its currently active slave.
            device = open(os.path.join('/sys/class/net', device,
                                       'bonding/active_slave')).read().strip()
        elif device in bridges:
            # Bridge -> the first port that is a bond, VLAN or NIC.
            device = [port for port in netinfo.ports(device)
                      if any(port in group for group in (bonds, vlans, nics))][0]
    return device


# Bridgeless networks expose 'iface'; bridged networks expose 'bridge'.
nets = netinfo.networks()
device = nets['rhevm'].get('iface', nets['rhevm']['bridge'])
carrier_dev = phys_dev(device)
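
As a further illustration rather than anything from the actual patch, the "linked" state mentioned in comment 18 could then be read from sysfs for the resolved carrier device; the is_linked() helper name is hypothetical:

import os

def is_linked(device):  # hypothetical helper, illustration only
    """Return True if the device reports carrier (link up) in sysfs."""
    try:
        with open(os.path.join('/sys/class/net', device, 'carrier')) as f:
            return f.read().strip() == '1'
    except IOError:
        # Reading 'carrier' fails with EINVAL while the interface is down.
        return False

# e.g. is_linked(carrier_dev) for the device resolved by phys_dev() above.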

Comment 22 Douglas Schilling Landgraf 2014-02-13 06:08:03 UTC
Thanks Antoni and Fabian. I have sent a patch to review based on your input and my tests.

Comment 23 Fabian Deutsch 2014-02-13 11:27:41 UTC
Douglas,

were you able to reproduce the problem? And did you test your patch? If you did, please provide the steps you followed.

Comment 24 Douglas Schilling Landgraf 2014-02-13 19:35:27 UTC
(In reply to Fabian Deutsch from comment #23)
> Douglas,
> 
> were you able to reproduce the problem? 
No, I used Evgheni's machine. 

> And did you test your patch? If you did,
> please provide the steps you followed.

I tested last night, but it looks like I was asleep, since the patch required an improvement.
I have updated the patch, please take a look.

I have updated engine_page.py (based on the patch) on the following machines:
- Evgheni's machine
- Local virtual machine with RHEVM 
- Local virtual machine with ovirt node

The status page now works; the network status is now "Connected" instead of "Unknown".

Comment 26 Evgheni Dereveanchin 2014-02-14 09:22:33 UTC
I can confirm the networking config is detected correctly with the patch - the "status" tab lists the correct IP and no warnings are displayed in "Logging", "Kdump", "RHN" sections of the admin interface.

The "Network" section lists all devices as "Unconfigured" though. Maybe that's expected on a working hypervisor, but when I selected "eth0" in the list and pressed "Enter", the interface crashed with error:

An error appeared in the UI: UnknownNicError("Unknown network interface: 'eth0'",)
Press ENTER to logout ...
or enter 's' to drop to shell

With all other interfaces it works as expected displaying "Networking configuration detected an already configured NIC".

Comment 27 Douglas Schilling Landgraf 2014-02-14 11:03:16 UTC
(In reply to Evgheni Dereveanchin from comment #26)
> I can confirm the networking config is detected correctly with the patch -
> the "status" tab lists the correct IP and no warnings are displayed in
> "Logging", "Kdump", "RHN" sections of the admin interface.
> 
Cool!

> The "Network" section lists all devices as "Unconfigured" though. Maybe
> that's expected on a working hypervisor, but when I selected "eth0" in the
> list and pressed "Enter", the interface crashed with error:
> 
> An error appeared in the UI: UnknownNicError("Unknown network interface:
> 'eth0'",)
> Press ENTER to logout ...
> or enter 's' to drop to shell

I do believe we need a different bugzilla for this one under ovirt-node. Fabian?
> 
> With all other interfaces it works as expected displaying "Networking
> configuration detected an already configured NIC".
Ok.

Comment 28 Evgheni Dereveanchin 2014-02-14 12:19:54 UTC
@Douglas

Agreed about the interface crash: I did not test this before the patch, so I cannot confirm that it broke something. I tried it on a different hypervisor with 4 NICs but without a VLAN/bond configuration and did not reproduce this. I will try on an unpatched RHEV-H 20140121.0, and if I catch it, a new BZ will be logged.

Comment 29 Evgheni Dereveanchin 2014-02-14 14:36:01 UTC
BZ#1065385 created for the UI crash when trying to edit eth0. I reproduced it on a non-patched hypervisor.

Comment 30 Fabian Deutsch 2014-02-14 17:00:35 UTC
Evgheni, thanks for the new bug. We need to see how we handle this, but more on that in the bug itself.

Comment 39 Douglas Schilling Landgraf 2014-02-25 14:51:30 UTC
Hi Martin, 

Please provide more details/steps about "FailedQA". What test did you execute? Is it like Evgheni's environment? Screenshots?

Thanks!

Comment 40 Martin Pavlik 2014-02-25 15:10:44 UTC
(In reply to Douglas Schilling Landgraf from comment #39)
> Hi Martin, 
> 
> Please provide more details/steps about "FailedQA". What test did you
> execute? Is it like Evgheni's environment? Screenshots?
> 
> Thanks!

Oh, my bad, it was a reaction to Fabian's statement (see comment 38) that it is OK to keep it assigned until there is either an ovirt-node or RHEV-H build to test it.

Or is there a rhevh/ovirt-node image which contains the fixed package?

Comment 41 Douglas Schilling Landgraf 2014-02-25 19:25:58 UTC
(In reply to Martin Pavlik from comment #40)
> (In reply to Douglas Schilling Landgraf from comment #39)
> > Hi Martin, 
> > 
> > Please provide more details/steps about "FailedQA". What test did you
> > execute? Is it like Evgheni's environment? Screenshots?
> > 
> > Thanks!
> 
> Oh, my bad, it was a reaction to Fabian's statement (see comment 38) that it
> is OK to keep it assigned until there is either an ovirt-node or RHEV-H build
> to test it.

OK, np.
> 
> Or is there a rhevh/ovirt-node image which contains the fixed package?

Better to ask Fabian about it.

Comment 47 Martin Pavlik 2014-04-02 12:42:18 UTC
Verified on Red Hat Enterprise Virtualization Hypervisor release 6.5 (20140320.0.el6ev).

Comment 50 errata-xmlrpc 2014-06-09 14:26:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0673.html

Comment 51 Robert Scheck 2014-07-18 15:15:17 UTC
Using ovirt-node-plugin-vdsm-0.1.1-24.el6ev this issue still exists: the
networking is marked as configured as long as I do not register to any
oVirt engine. Once I register to an oVirt engine, networking switches to
unconfigured, even though the system has network connectivity, can ping,
can be reached via SSH, etc. There is no bonding in use, however, just em1
for some testing.

Comment 54 Ying Cui 2014-07-21 07:51:37 UTC
(In reply to Robert Scheck from comment #51)
> Using ovirt-node-plugin-vdsm-0.1.1-24.el6ev this issue still exists: the
> networking is marked as configured as long as I do not register to any
> oVirt engine. Once I register to an oVirt engine, networking switches to
> unconfigured, even though the system has network connectivity, can ping,
> can be reached via SSH, etc. There is no bonding in use, however, just em1
> for some testing.

The comment 51 issue is the same as vdsm regression bug 1120049 (rhev-hypervisor6-6.5-20140715.0, ovirt-node-plugin-vdsm-0.1.1-24.el6ev, vdsm-4.14.7-7.el6ev.x86_64):
registration to RHEV-M failed and networking switched to unconfigured, but ping and SSH still work. The vdsm daemon is not running.

Comment 55 Robert Scheck 2014-07-21 09:03:16 UTC
(In reply to Ying Cui from comment #54)
> The comment 51 issue is the same as vdsm regression bug 1120049
> (rhev-hypervisor6-6.5-20140715.0, ovirt-node-plugin-vdsm-0.1.1-24.el6ev,
> vdsm-4.14.7-7.el6ev.x86_64): registration to RHEV-M failed and networking
> switched to unconfigured, but ping and SSH still work. The vdsm daemon is
> not running.

Is there a separate RHBZ for this? Is any workaround known? It feels to me
like I have to reinstall the RHEV-H system to change this situation (really?!).

Comment 56 Ying Cui 2014-07-21 09:20:06 UTC
(In reply to Robert Scheck from comment #55)
> (In reply to Ying Cui from comment #54)
> > The comment 51 issue is the same as vdsm regression bug 1120049
> > (rhev-hypervisor6-6.5-20140715.0, ovirt-node-plugin-vdsm-0.1.1-24.el6ev,
> > vdsm-4.14.7-7.el6ev.x86_64): registration to RHEV-M failed and networking
> > switched to unconfigured, but ping and SSH still work. The vdsm daemon is
> > not running.
> 
> Is there a separate RHBZ for this? Is any workaround known? It feels to me
> like I have to reinstall the RHEV-H system to change this situation
> (really?!).

That is RHBZ 1120049, the vdsm regression bug. RHEV-H registration to RHEV-M fails, but you can add the RHEV-H host from the RHEV-M webadmin portal; that way the RHEV-H is still managed by the RHEV-M side.

Detailed steps:
1. Clean install RHEV-H.
2. Configure RHEV-H networking.
3. In the TUI, configure the oVirt Engine and set the optional password for adding the node.
4. Access the RHEV-M webadmin portal.
5. Create a DC and a cluster.
6. Add a New Host.

After waiting a while, the host can be added and comes up successfully.

Thanks
Ying

Comment 57 Fabian Deutsch 2014-07-21 09:22:33 UTC
Ying, thanks for checking this so quickly.

