Bug 1088875 - "MANAGED_IFNAMES="rhevm"" is missing from "/etc/default/ovirt" after registering to rhevm is35.1 with a 3.3 Data Center + 3.3 Cluster.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-node-plugin-vdsm
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ovirt-3.6.0-rc3
Target Release: 3.6.0
Assignee: Douglas Schilling Landgraf
QA Contact: cshao
URL:
Whiteboard:
Duplicates: 1152446
Depends On:
Blocks: 1219813
 
Reported: 2014-04-17 11:02 UTC by haiyang,dong
Modified: 2016-03-09 14:12 UTC (History)
CC List: 18 users

Fixed In Version: ovirt-node-plugin-vdsm-0.6.1-1.el7ev
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1219813
Environment:
Last Closed: 2016-03-09 14:12:35 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:0378 0 normal SHIPPED_LIVE ovirt-node bug fix and enhancement update for RHEV 3.6 2016-03-09 19:06:36 UTC
oVirt gerrit 32148 0 master MERGED engine_page: use augeas instead of Management() Never
oVirt gerrit 32832 0 ovirt-3.5 MERGED engine_page: use augeas instead of Management() Never

Description haiyang,dong 2014-04-17 11:02:13 UTC
Description of problem:
Clean install rhev-hypervisor6-6.5-20140407.0.el6ev, configure the network, and register to rhevm is35.1 with a 3.3 Data Center + 3.3 Cluster.
After registration succeeds, "MANAGED_IFNAMES="rhevm"" is missing from "/etc/default/ovirt".

This issue does not occur when registering to rhevm is35 (Red Hat Enterprise Virtualization Manager Version: 3.3.2-0.49.el6ev) with a 3.3 Data Center + 3.3 Cluster,
so it is a regression.

Version-Release number of selected component (if applicable):
Red Hat Enterprise Virtualization Manager Version: 3.3.2-0.50.el6ev(is35.1)
rhev-hypervisor6-6.5-20140407.0.el6ev.noarch.rpm
ovirt-node-3.0.1-18.el6_5.8.noarch
vdsm-4.13.2-0.13.el6ev



How reproducible:
100%

Steps to Reproduce:


Actual results:

Expected results:

Additional info:

Comment 1 Dan Kenigsberg 2014-04-23 06:59:55 UTC
What is the functional problem of not having MANAGED_IFNAMES defined?
Vdsm proper is unaware (and should remain unaware) of MANAGED_IFNAMES.

Comment 2 haiyang,dong 2014-04-23 07:23:42 UTC
(In reply to Dan Kenigsberg from comment #1)
> what is the functional problem of not having MANAGED_IFNAMES defined?
> Vdsm proper is unaware (and should keep being unaware) of MANAGED_IFNAMES.

From the following code in ovirt.node.config.defaults:
    def has_managed_ifnames(self):
        return True if self.retrieve()["managed_ifnames"] else False

ovirt-node checks whether "MANAGED_IFNAMES" is defined in "/etc/default/ovirt"
to determine whether the network of RHEV-H is managed by RHEV-M.

If this functionality is missing, bugs similar to Bug 1073046 - RHEV-H: An error appeared in the UI: UnknownNicError("Unknown network interface: 'eth0'",)
will reproduce again.
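
A minimal sketch of that check, assuming a simplified stand-in for the ovirt.node.config logic (the real ovirt-node code parses the file through augeas):

    # Hypothetical, simplified reader for /etc/default/ovirt; not the actual
    # ovirt-node implementation.
    def read_defaults(path="/etc/default/ovirt"):
        values = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                values[key] = value.strip('"')
        return values

    def has_managed_ifnames():
        # Mirrors the check quoted above: the key must exist and be non-empty.
        return bool(read_defaults().get("MANAGED_IFNAMES"))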

Comment 3 Fabian Deutsch 2014-04-30 16:20:58 UTC
Dan,

that variable in the default/ovirt file is used to signal the setup TUI that the Node is managed, and that information about those NICs (ifnames) shall be displayed.

Comment 4 Dan Kenigsberg 2014-05-07 11:46:56 UTC
Douglas, does ovirt-node-plugin-vdsm modify /etc/default/ovirt in any way?

Comment 5 haiyang,dong 2014-05-14 10:27:04 UTC
If "MANAGED_IFNAMES="rhevm"" is missing from "/etc/default/ovirt", the "Logging/Kdump/NFSv4 domain/RHN" pages of the RHEV-H setup menu cannot be configured.

Workaround: after registering to rhevm, enter the "oVirt Engine" page of the RHEV-H setup menu again; then "MANAGED_IFNAMES="rhevm"" will be added to "/etc/default/ovirt".

Comment 6 haiyang,dong 2014-05-14 10:31:16 UTC
(In reply to haiyang,dong from comment #5)
> If we missed "MANAGED_IFNAMES="rhevm"" in "/etc/default/ovirt", the
> functions "Logging/kdump/nfsv4 domain/RHN" pages of rhevh setup menu will be
> unconfigured.
> 
> workaround: after register rhevm, enter into "oVirt Engine" page of rhevh
> setup menu again, then "MANAGED_IFNAMES="rhevm"" will be added into
> "/etc/default/ovirt".

Test version:
rhev-hypervisor6-6.5-20140513.0
ovirt-node-plugin-vdsm-0.1.1-19.el6ev.noarch
ovirt-node-3.0.1-18.el6_5.10.noarch
rhevm av 9.1

Comment 7 Fabian Deutsch 2014-05-14 11:01:42 UTC
Douglas,

IIRC the code was designed to set that key, if necessary, when the TUI is started. This might be going wrong.

Comment 8 Douglas Schilling Landgraf 2014-07-02 21:37:38 UTC
(In reply to haiyang,dong from comment #6)
> (In reply to haiyang,dong from comment #5)
> > If we missed "MANAGED_IFNAMES="rhevm"" in "/etc/default/ovirt", the
> > functions "Logging/kdump/nfsv4 domain/RHN" pages of rhevh setup menu will be
> > unconfigured.
> > 
> > workaround: after register rhevm, enter into "oVirt Engine" page of rhevh
> > setup menu again, then "MANAGED_IFNAMES="rhevm"" will be added into
> > "/etc/default/ovirt".
> 
> Test version:
> rhev-hypervisor6-6.5-20140513.0
> ovirt-node-plugin-vdsm-0.1.1-19.el6ev.noarch
> ovirt-node-3.0.1-18.el6_5.10.noarch
> rhevm av 9.1

Hello haiyang,dong,

Can you please try with the latest 3.4 build that we have: https://brewweb.devel.redhat.com/buildinfo?buildID=364523

What happens if you do:

-> Setup Network
-> Register Node (Host is UP)
-> Move to the tab above the Registration tab
-> Move back to the Registration tab
-> Press F2 and check /etc/default/ovirt

@Fabian, 

I see we have some config wrappers inside ovirt-node to manage .conf files.

In the vdsm plugin we use this to update the interface:

    mgmt = Management()
    mgmt.update(engine_data, mgmtIface, None)

I am wondering if we could add a flush call after the config write on the ovirt-node side, to empty the buffer and update the file as soon as possible.

Comment 9 haiyang,dong 2014-07-03 05:38:24 UTC
(In reply to Douglas Schilling Landgraf from comment #8)

> 
> Hello haiyang,dong,
> 
> Can you please try with the last 3.4 available that we have:
> https://brewweb.devel.redhat.com/buildinfo?buildID=364523
> 
> What happens if you do:
> 
> -> Setup Network
> -> Register Node (Host is UP)
> -> Move to tab above of the registration
> -> Move back to Registration tab
> -> Press F2 and check /etc/default/ovirt
> 
After step 5, I checked that "MANAGED_IFNAMES="rhevm"" is added to
"/etc/default/ovirt". It seems adding a flush call could resolve this issue.

> @Fabian, 
> 
> I see we have some config wrappers inside ovirt-node to manage .conf files.
> 
> In vdsm plugin we use to update the interface:
> 
>     mgmt = Management()
>     mgmt.update(engine_data, mgmtIface, None)
> 
> I am wondering if we could add the flush call after the config write in
> ovirt-node side to empty the buffer and update asap into the file.

Comment 10 Fabian Deutsch 2014-08-01 10:56:23 UTC
(In reply to haiyang,dong from comment #9)
> (In reply to Douglas Schilling Landgraf from comment #8)
> 
> > 
> > Hello haiyang,dong,
> > 
> > Can you please try with the last 3.4 available that we have:
> > https://brewweb.devel.redhat.com/buildinfo?buildID=364523
> > 
> > What happens if you do:
> > 
> > -> Setup Network
> > -> Register Node (Host is UP)
> > -> Move to tab above of the registration
> > -> Move back to Registration tab
> > -> Press F2 and check /etc/default/ovirt
> > 
> After step 5, i check that "MANAGED_IFNAMES="rhevm"" will be added into
> "/etc/default/ovirt". Seems add a flush call could resolve this issue.
> 
> > @Fabian, 
> > 
> > I see we have some config wrappers inside ovirt-node to manage .conf files.
> > 
> > In vdsm plugin we use to update the interface:
> > 
> >     mgmt = Management()
> >     mgmt.update(engine_data, mgmtIface, None)
> > 
> > I am wondering if we could add the flush call after the config write in
> > ovirt-node side to empty the buffer and update asap into the file.

Each update call directly flushes the values to disk.
What could be the cause here is that a parallel augeas instance was "caching" the old result. The value held by that augeas instance then overwrote the new value from the update call when the instance was destructed.
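
A minimal sketch of that scenario (illustrative only, assuming python-augeas and the shellvars lens covering /etc/default/ovirt):

    import augeas

    # Illustrative only: a second handle that loaded the file before the
    # update still holds the old tree in memory.
    stale = augeas.Augeas()
    writer = augeas.Augeas()
    writer.set("/files/etc/default/ovirt/MANAGED_IFNAMES", '"rhevm"')
    writer.save()  # the key is now on disk

    # If the stale handle later changes anything in the same file and saves,
    # it rewrites the file from its outdated in-memory tree and can drop the
    # key that was just written.
    stale.set("/files/etc/default/ovirt/OVIRT_SSH_PORT", '"22"')
    stale.save()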

Comment 12 Douglas Schilling Landgraf 2014-08-29 01:19:15 UTC
(In reply to Fabian Deutsch from comment #10)
> (In reply to haiyang,dong from comment #9)
> > (In reply to Douglas Schilling Landgraf from comment #8)
> > 
> > > 
> > > Hello haiyang,dong,
> > > 
> > > Can you please try with the last 3.4 available that we have:
> > > https://brewweb.devel.redhat.com/buildinfo?buildID=364523
> > > 
> > > What happens if you do:
> > > 
> > > -> Setup Network
> > > -> Register Node (Host is UP)
> > > -> Move to tab above of the registration
> > > -> Move back to Registration tab
> > > -> Press F2 and check /etc/default/ovirt
> > > 
> > After step 5, i check that "MANAGED_IFNAMES="rhevm"" will be added into
> > "/etc/default/ovirt". Seems add a flush call could resolve this issue.
> > 
> > > @Fabian, 
> > > 
> > > I see we have some config wrappers inside ovirt-node to manage .conf files.
> > > 
> > > In vdsm plugin we use to update the interface:
> > > 
> > >     mgmt = Management()
> > >     mgmt.update(engine_data, mgmtIface, None)
> > > 
> > > I am wondering if we could add the flush call after the config write in
> > > ovirt-node side to empty the buffer and update asap into the file.
> 
> Each update call directly flushes the values to disk.
> What could the cause here is that a parallel augeas instance was "caching"
> the old result. The value of the augeas instance then overwrote the new
> value of the update call, when it got destructed.

Howdy Fabian, using augeas directly in the plugin works correctly. Please let me know your thoughts on the gerrit patch.
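
For reference, a minimal sketch of the idea, assuming python-augeas and a hypothetical helper (not the exact engine_page.py change): write the key directly and flush it immediately instead of going through Management():

    import augeas

    def set_managed_ifnames(ifname):
        # Hypothetical helper: persist MANAGED_IFNAMES through a fresh
        # augeas handle and flush it to disk right away.
        aug = augeas.Augeas()
        aug.set("/files/etc/default/ovirt/MANAGED_IFNAMES", '"%s"' % ifname)
        aug.save()
        aug.close()

    set_managed_ifnames("rhevm")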

Thanks!

Comment 14 Ying Cui 2014-09-09 07:49:22 UTC
Fabian,
1. If we do not fix it in 3.5, we will encounter bug 1073046, and some RHEV-H TUI menus cannot be configured because the network is unconfigured.
2. Although there is a workaround, as in comment 5: after registering to rhevm, enter the "oVirt Engine" page of the RHEV-H setup menu again, and then "MANAGED_IFNAMES="rhevm"" will be added to "/etc/default/ovirt".

Here I still suggest fixing it in ovirt-node first; if the augeas side is fixed in 3.5.1, ovirt-node can roll back, and we can open another new bug to track the ovirt-node roll-back until the augeas fix lands.

Comment 18 Martin Pavlik 2014-09-25 09:09:38 UTC
Is there a RHEV-H image on which this can be tested?

Comment 20 Ying Cui 2014-12-03 11:55:24 UTC
For the 3ack+ process, let me help provide the qa_ack+ on the ovirt-node-plugin-vdsm component.

Comment 21 Ryan Barry 2015-01-06 14:54:47 UTC
*** Bug 1152446 has been marked as a duplicate of this bug. ***

Comment 22 haiyang,dong 2015-01-26 11:27:27 UTC
Test version:
rhev-hypervisor7-7.0-20150123.2.iso
ovirt-node-3.2.1-6.el7.noarch
Red Hat Enterprise Virtualization Manager Version: 3.5.0-0.30.el6ev
ovirt-node-plugin-vdsm-0.2.0-18.el7ev.noarch

Test steps:
1. Setup Network
2. Register Node (Host is UP)
3. Move to the tab above the Registration tab
4. Do not move back to the Registration tab
5. Press F2 and check /etc/default/ovirt

Test result:
"MANAGED_IFNAMES="rhevm"" is still missing from "/etc/default/ovirt" after registering RHEV-H to RHEV-M.

workaround:
1. Setup Network
2. Register Node (Host is UP)
3. Move to the tab above the Registration tab
4. Move back to the Registration tab
5. Press F2 and check /etc/default/ovirt

Since a workaround is still needed to add "MANAGED_IFNAMES="rhevm"" to "/etc/default/ovirt", this bug is not fixed. Re-assigning it.

Comment 23 Ying Cui 2015-01-26 12:23:11 UTC
hadong, do we still encounter this bug on RHEV-H 6.6 for 3.4.z?
Based on my understanding of this bug, with this issue we will encounter bug 1073046: the network is unconfigured and some RHEV-H TUI menus cannot be configured.
Is there any other impact from this bug?

Comment 24 Fabian Deutsch 2015-01-26 12:30:15 UTC
Haiyang, what happens if you logout and back in again?

Comment 25 haiyang,dong 2015-01-26 12:38:57 UTC
(In reply to Fabian Deutsch from comment #24)
> Haiyang, what happens if you logout and back in again?

"MANAGED_IFNAMES="rhevm"" is added to "/etc/default/ovirt" after logging out and back in again.

Comment 26 Fabian Deutsch 2015-01-26 13:04:37 UTC
I'd suggest that we add a release note notifying users about this issue. The recommended workaround is to log out and back in again.

Comment 35 Fabian Deutsch 2015-05-21 15:03:28 UTC
*** Bug 1219813 has been marked as a duplicate of this bug. ***

Comment 38 cshao 2015-10-09 09:12:41 UTC
Test version:
rhev-hypervisor7-7.2-20150928.0
ovirt-node-3.3.0-0.10.20150928gite7ee3f1.el7ev.noarch
ovirt-node-plugin-vdsm-0.6.1-1.el7ev.noarch
vdsm-4.17.8-1.el7ev.noarch
RHEV-M 3.6.0-0.18.el6

Test steps:
1. Install RHEV-H via PXE.
2. Register RHEV-H to RHEV-M(DC 3.6).
3. Approve RHEV-H to up status.
4. On the RHEV-H side: move to the tab above the Registration tab
5. Do not move back to the Registration tab
6. Press F2 and check /etc/default/ovirt


Test result:
After step 3, on the RHEV-H side, Networking shows as Unknown.

# cat /etc/default/ovirt 
MANAGED_IFNAMES="ovirtmgmt"
OVIRT_BOOTIF="enp63s0"
OVIRT_BOOTPARAMS="ksdevice=bootif rd.dm=0 rd.md=0 crashkernel=256M lang= max_loop=256 rhgb quiet elevator=deadline rd.live.check rd.luks=0 rd.live.image nomodeset"
OVIRT_BOOTPROTO="dhcp"
OVIRT_CONFIG_VERSION="3.3.0"
OVIRT_FIRSTBOOT="0"
OVIRT_HOSTED_ENGINE_IMAGE_PATH=""
OVIRT_HOSTED_ENGINE_PXE="yes"
OVIRT_INIT=""
OVIRT_INSTALL=""
OVIRT_INSTALL_ROOT="y"
OVIRT_KDUMP_LOCAL="true"
OVIRT_KEYBOARD_LAYOUT="us"
OVIRT_MANAGEMENT_SERVER=""
OVIRT_NODE_REGISTER="True"
OVIRT_ROOT_INSTALL="y"
OVIRT_SSH_PORT="22"
OVIRT_SSH_PWAUTH="yes"
OVIRT_STANDALONE="1"
OVIRT_UPGRADE=""
OVIRT_USE_STRONG_RNG=""
OVIRT_VOL_CONFIG_SIZE="5111"
OVIRT_VOL_DATA_SIZE="281404"
OVIRT_VOL_EFI_SIZE="256"
OVIRT_VOL_LOGGING_SIZE="2048"
OVIRT_VOL_ROOT_SIZE="4300"
OVIRT_VOL_SWAP_SIZE="7826"
OVIRT_MANAGEMENT_PORT="None"

I have to re-assign this bug because MANAGED_IFNAMES shows as "ovirtmgmt".

Comment 40 Sandro Bonazzola 2015-10-26 12:40:46 UTC
This is an automated message. oVirt 3.6.0 RC3 has been released and GA is targeted for next week, Nov 4th 2015.
Please review this bug and, if it is not a blocker, please postpone it to a later release.
All bugs not postponed on the GA release will be automatically re-targeted to

- 3.6.1 if severity >= high
- 4.0 if severity < high

Comment 41 Douglas Schilling Landgraf 2015-11-03 02:00:08 UTC
Hi shaochen,

(In reply to shaochen from comment #38)
> Test version:
> rhev-hypervisor7-7.2-20150928.0
> ovirt-node-3.3.0-0.10.20150928gite7ee3f1.el7ev.noarch
> ovirt-node-plugin-vdsm-0.6.1-1.el7ev.noarch
> vdsm-4.17.8-1.el7ev.noarch
> RHEV-M 3.6.0-0.18.el6
> 
> Test steps:
> 1. Install RHEV-H via PXE.
> 2. Register RHEV-H to RHEV-M(DC 3.6).
> 3. Approve RHEV-H to up status.
> 4. RHEV-H side: Move to tab above of the registration
> 5. Don't move back to Registration tab
> 6. Press F2 and check /etc/default/ovirt
> 
> 
> Test result:
> After step3, In RHEV-H side, Networking show as Unknown.
> 
> # cat /etc/default/ovirt 
> MANAGED_IFNAMES="ovirtmgmt"
> OVIRT_BOOTIF="enp63s0"
> OVIRT_BOOTPARAMS="ksdevice=bootif rd.dm=0 rd.md=0 crashkernel=256M lang=
> max_loop=256 rhgb quiet elevator=deadline rd.live.check rd.luks=0
> rd.live.image nomodeset"
> OVIRT_BOOTPROTO="dhcp"
> OVIRT_CONFIG_VERSION="3.3.0"
> OVIRT_FIRSTBOOT="0"
> OVIRT_HOSTED_ENGINE_IMAGE_PATH=""
> OVIRT_HOSTED_ENGINE_PXE="yes"
> OVIRT_INIT=""
> OVIRT_INSTALL=""
> OVIRT_INSTALL_ROOT="y"
> OVIRT_KDUMP_LOCAL="true"
> OVIRT_KEYBOARD_LAYOUT="us"
> OVIRT_MANAGEMENT_SERVER=""
> OVIRT_NODE_REGISTER="True"
> OVIRT_ROOT_INSTALL="y"
> OVIRT_SSH_PORT="22"
> OVIRT_SSH_PWAUTH="yes"
> OVIRT_STANDALONE="1"
> OVIRT_UPGRADE=""
> OVIRT_USE_STRONG_RNG=""
> OVIRT_VOL_CONFIG_SIZE="5111"
> OVIRT_VOL_DATA_SIZE="281404"
> OVIRT_VOL_EFI_SIZE="256"
> OVIRT_VOL_LOGGING_SIZE="2048"
> OVIRT_VOL_ROOT_SIZE="4300"
> OVIRT_VOL_SWAP_SIZE="7826"
> OVIRT_MANAGEMENT_PORT="None"
> 
> I have to assigned this bug due to MANAGED_IFNAMES=show as "ovirtmgmt".

The component which creates the network bridge name ('ovirtmgmt' or 'rhevm') is VDSM, not ovirt-node-plugin-vdsm. ovirt-node-plugin-vdsm consults vdsm and updates /etc/default/ovirt [1]. If vdsm is using ovirtmgmt, then /etc/default/ovirt will show ovirtmgmt and not rhevm. Based on this, I am moving the bug back to ON_QA. Please also remove the FailedQA flag. You could probably re-test using a newer/latest VDSM version with an updated/latest RHEV-H build. Please let me know if you need any additional information from my side.
 
[1] https://github.com/oVirt/ovirt-node-plugin-vdsm/blob/master/src/engine_page.py#L89
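
For illustration, a minimal sketch with a hypothetical helper (not the actual engine_page.py code): since the bridge itself is created by VDSM, the plugin only has to detect which of the known bridge names exists on the host.

    import os

    def detect_management_bridge(candidates=("ovirtmgmt", "rhevm")):
        # Hypothetical helper: look for whichever known management bridge
        # name VDSM has created in sysfs.
        for name in candidates:
            if os.path.isdir("/sys/class/net/%s/bridge" % name):
                return name
        return None

    bridge = detect_management_bridge()
    if bridge:
        print('MANAGED_IFNAMES="%s"' % bridge)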

Thanks!

Comment 42 cshao 2015-11-04 08:48:59 UTC
Test version:
rhev-hypervisor7-7.2-20151025.0
ovirt-node-3.3.0-0.18.20151022git82dc52c.el7ev.noarch
vdsm-4.17.10-5.el7ev.noarch
RHEV-M 3.6.0.2-0.1.el6
ovirt-node-plugin-vdsm-0.6.1-1.el7ev.noarch

test steps:
1. Install RHEV-H via PXE.
2. Register RHEV-H to RHEV-M 3.6.
3. Approve RHEV-H to up status.
4. Press F2 and check /etc/default/ovirt

Test result:
It shows MANAGED_IFNAMES="ovirtmgmt" in /etc/default/ovirt.

So the bug is fixed; changing bug status to VERIFIED.

Comment 43 Lucy Bopf 2016-02-19 10:18:25 UTC
Moving 'requires_doc_text' to '-', as this bug is verified and no longer needs to be documented as a known issue.

Comment 45 errata-xmlrpc 2016-03-09 14:12:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0378.html

