Description of problem:
Clean install rhev-hypervisor6-6.5-20140407.0.el6ev, configure the network, and register to RHEV-M is35.1 with a 3.3 Data Center + 3.3 Cluster. After registration succeeds, check /etc/default/ovirt: MANAGED_IFNAMES="rhevm" is missing.

This issue does not occur when registering to RHEV-M is35 (Red Hat Enterprise Virtualization Manager Version: 3.3.2-0.49.el6ev) with a 3.3 Data Center + 3.3 Cluster, so it is a regression.

Version-Release number of selected component (if applicable):
Red Hat Enterprise Virtualization Manager Version: 3.3.2-0.50.el6ev (is35.1)
rhev-hypervisor6-6.5-20140407.0.el6ev.noarch.rpm
ovirt-node-3.0.1-18.el6_5.8.noarch
vdsm-4.13.2-0.13.el6ev

How reproducible:
100%

Steps to Reproduce:
1. Clean install rhev-hypervisor6-6.5-20140407.0.el6ev.
2. Configure the network and register to RHEV-M is35.1 (3.3 Data Center + 3.3 Cluster).
3. Check /etc/default/ovirt.

Actual results:
MANAGED_IFNAMES="rhevm" is missing from /etc/default/ovirt.

Expected results:
MANAGED_IFNAMES="rhevm" is present in /etc/default/ovirt.

Additional info:
What is the functional problem of not having MANAGED_IFNAMES defined? Vdsm proper is unaware (and should remain unaware) of MANAGED_IFNAMES.
(In reply to Dan Kenigsberg from comment #1)
> what is the functional problem of not having MANAGED_IFNAMES defined?
> Vdsm proper is unaware (and should keep being unaware) of MANAGED_IFNAMES.

From the following code in ovirt.node.config.defaults:

def has_managed_ifnames(self):
    return True if self.retrieve()["managed_ifnames"] else False

ovirt-node checks whether MANAGED_IFNAMES is defined in /etc/default/ovirt to decide whether the network of RHEV-H is managed by RHEV-M. Without this, bugs like bug 1073046 (RHEV-H: An error appeared in the UI: UnknownNicError("Unknown network interface: 'eth0'",)) will reproduce again.
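The check quoted above can be illustrated with a standalone sketch (a minimal illustration, not the actual ovirt-node implementation; the simple KEY="value" format is assumed from the /etc/default/ovirt excerpts in this bug):

```python
# Minimal sketch of a presence/truthiness check for MANAGED_IFNAMES.
# Not the real ovirt-node code; assumes simple shell-style KEY="value" lines.
import re

def parse_defaults(text):
    """Parse KEY="value" lines into a dict, ignoring anything else."""
    conf = {}
    for line in text.splitlines():
        m = re.match(r'^([A-Z_]+)="?([^"]*)"?$', line.strip())
        if m:
            conf[m.group(1)] = m.group(2)
    return conf

def has_managed_ifnames(text):
    """Mirror of the truthiness check quoted above: the key must be
    present AND non-empty for the network to count as managed."""
    return bool(parse_defaults(text).get("MANAGED_IFNAMES"))
```

Note that both a missing key and an empty value (MANAGED_IFNAMES="") make the node look unmanaged, which matches the symptom reported here.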
Dan, that variable in the default/ovirt file is used to signal to the setup TUI that the Node is managed, and that information about those NICs (ifnames) shall be displayed.
Douglas, does ovirt-node-plugin-vdsm modify /etc/default/ovirt in any way?
If MANAGED_IFNAMES="rhevm" is missing from /etc/default/ovirt, the Logging/Kdump/NFSv4 domain/RHN pages of the RHEV-H setup menu cannot be configured.

Workaround: after registering to RHEV-M, enter the "oVirt Engine" page of the RHEV-H setup menu again; MANAGED_IFNAMES="rhevm" will then be added to /etc/default/ovirt.
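The gating behaviour described here could look roughly like this (a hypothetical sketch; the page names and grouping are illustrative, not the real ovirt-node TUI classes):

```python
# Hypothetical sketch of gating setup-TUI pages on the managed flag.
# Page names below are illustrative, not the actual ovirt-node classes.
ALWAYS_AVAILABLE = {"Status", "Network", "oVirt Engine"}
MANAGED_ONLY = {"Logging", "Kdump", "NFSv4 Domain", "RHN"}

def configurable_pages(defaults):
    """Given the parsed /etc/default/ovirt contents as a dict, return
    the set of setup-menu pages that can currently be configured."""
    pages = set(ALWAYS_AVAILABLE)
    if defaults.get("MANAGED_IFNAMES"):  # key present and non-empty
        pages |= MANAGED_ONLY
    return pages
```

With MANAGED_IFNAMES absent, only the always-available pages remain configurable, which is the symptom described above.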
(In reply to haiyang,dong from comment #5)
> If we missed "MANAGED_IFNAMES="rhevm"" in "/etc/default/ovirt", the
> functions "Logging/kdump/nfsv4 domain/RHN" pages of rhevh setup menu will be
> unconfigured.
>
> workaround: after register rhevm, enter into "oVirt Engine" page of rhevh
> setup menu again, then "MANAGED_IFNAMES="rhevm"" will be added into
> "/etc/default/ovirt".

Test version:
rhev-hypervisor6-6.5-20140513.0
ovirt-node-plugin-vdsm-0.1.1-19.el6ev.noarch
ovirt-node-3.0.1-18.el6_5.10.noarch
rhevm av 9.1
Douglas, IIRC the code was designed to set that key if necessary when the TUI is started. This might be going wrong.
(In reply to haiyang,dong from comment #6)
> (In reply to haiyang,dong from comment #5)
> > If we missed "MANAGED_IFNAMES="rhevm"" in "/etc/default/ovirt", the
> > functions "Logging/kdump/nfsv4 domain/RHN" pages of rhevh setup menu will be
> > unconfigured.
> >
> > workaround: after register rhevm, enter into "oVirt Engine" page of rhevh
> > setup menu again, then "MANAGED_IFNAMES="rhevm"" will be added into
> > "/etc/default/ovirt".
>
> Test version:
> rhev-hypervisor6-6.5-20140513.0
> ovirt-node-plugin-vdsm-0.1.1-19.el6ev.noarch
> ovirt-node-3.0.1-18.el6_5.10.noarch
> rhevm av 9.1

Hello haiyang,dong,

Can you please try the latest 3.4 build we have:
https://brewweb.devel.redhat.com/buildinfo?buildID=364523

What happens if you do:

-> Setup Network
-> Register Node (Host is UP)
-> Move to the tab above the Registration tab
-> Move back to the Registration tab
-> Press F2 and check /etc/default/ovirt

@Fabian,

I see we have some config wrappers inside ovirt-node to manage .conf files. In the vdsm plugin we use the following to update the interface:

mgmt = Management()
mgmt.update(engine_data, mgmtIface, None)

I am wondering if we could add a flush call after the config write on the ovirt-node side to empty the buffer and update the file as soon as possible.
(In reply to Douglas Schilling Landgraf from comment #8)
>
> Hello haiyang,dong,
>
> Can you please try with the last 3.4 available that we have:
> https://brewweb.devel.redhat.com/buildinfo?buildID=364523
>
> What happens if you do:
>
> -> Setup Network
> -> Register Node (Host is UP)
> -> Move to tab above of the registration
> -> Move back to Registration tab
> -> Press F2 and check /etc/default/ovirt

After step 5, I checked that MANAGED_IFNAMES="rhevm" is added into /etc/default/ovirt. It seems adding a flush call could resolve this issue.

> @Fabian,
>
> I see we have some config wrappers inside ovirt-node to manage .conf files.
>
> In vdsm plugin we use to update the interface:
>
> mgmt = Management()
> mgmt.update(engine_data, mgmtIface, None)
>
> I am wondering if we could add the flush call after the config write in
> ovirt-node side to empty the buffer and update asap into the file.
(In reply to haiyang,dong from comment #9)
> (In reply to Douglas Schilling Landgraf from comment #8)
> >
> > Hello haiyang,dong,
> >
> > Can you please try with the last 3.4 available that we have:
> > https://brewweb.devel.redhat.com/buildinfo?buildID=364523
> >
> > What happens if you do:
> >
> > -> Setup Network
> > -> Register Node (Host is UP)
> > -> Move to tab above of the registration
> > -> Move back to Registration tab
> > -> Press F2 and check /etc/default/ovirt
>
> After step 5, i check that "MANAGED_IFNAMES="rhevm"" will be added into
> "/etc/default/ovirt". Seems add a flush call could resolve this issue.
>
> > @Fabian,
> >
> > I see we have some config wrappers inside ovirt-node to manage .conf files.
> >
> > In vdsm plugin we use to update the interface:
> >
> > mgmt = Management()
> > mgmt.update(engine_data, mgmtIface, None)
> >
> > I am wondering if we could add the flush call after the config write in
> > ovirt-node side to empty the buffer and update asap into the file.

Each update call directly flushes the values to disk. What could be the cause here is that a parallel Augeas instance was caching the old result; the value from that stale Augeas instance then overwrote the new value of the update call when the instance was destructed.
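Fabian's suspected failure mode can be demonstrated with a toy file-backed handle in place of Augeas (a sketch of the race only — this does not use python-augeas itself):

```python
# Sketch of the "stale parallel instance" overwrite: a second handle
# that cached the old file contents clobbers a newer write when it is
# closed. ConfigHandle is a toy stand-in for an Augeas instance.
import os
import tempfile

class ConfigHandle:
    """Loads the file into memory on creation and writes its (possibly
    stale) in-memory copy back on close, like an Augeas tree saved when
    the instance is destructed."""
    def __init__(self, path):
        self.path = path
        with open(path) as f:
            self.data = f.read()

    def set(self, value):
        self.data = value

    def close(self):
        # Analogous to the Augeas tree being flushed on destruction.
        with open(self.path, "w") as f:
            f.write(self.data)

def demonstrate_stale_overwrite():
    """Return the final file contents after a stale parallel handle
    clobbers a newer, correctly flushed write."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        with open(path, "w") as f:
            f.write("MANAGED_IFNAMES=\n")        # old on-disk state
        stale = ConfigHandle(path)               # parallel instance caches it
        writer = ConfigHandle(path)
        writer.set('MANAGED_IFNAMES="rhevm"\n')  # the update call
        writer.close()                           # correctly flushed to disk
        stale.close()                            # stale cache overwrites it
        with open(path) as f:
            return f.read()
    finally:
        os.unlink(path)
```

The update itself flushes correctly; the loss happens afterwards, when the stale handle writes back its cached copy — which matches why an extra flush in the update path would not fully fix the problem.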
(In reply to Fabian Deutsch from comment #10)
> (In reply to haiyang,dong from comment #9)
> > (In reply to Douglas Schilling Landgraf from comment #8)
> > >
> > > Hello haiyang,dong,
> > >
> > > Can you please try with the last 3.4 available that we have:
> > > https://brewweb.devel.redhat.com/buildinfo?buildID=364523
> > >
> > > What happens if you do:
> > >
> > > -> Setup Network
> > > -> Register Node (Host is UP)
> > > -> Move to tab above of the registration
> > > -> Move back to Registration tab
> > > -> Press F2 and check /etc/default/ovirt
> >
> > After step 5, i check that "MANAGED_IFNAMES="rhevm"" will be added into
> > "/etc/default/ovirt". Seems add a flush call could resolve this issue.
> >
> > > @Fabian,
> > >
> > > I see we have some config wrappers inside ovirt-node to manage .conf files.
> > >
> > > In vdsm plugin we use to update the interface:
> > >
> > > mgmt = Management()
> > > mgmt.update(engine_data, mgmtIface, None)
> > >
> > > I am wondering if we could add the flush call after the config write in
> > > ovirt-node side to empty the buffer and update asap into the file.
>
> Each update call directly flushes the values to disk.
> What could the cause here is that a parallel augeas instance was "caching"
> the old result. The value of the augeas instance then overwrote the new
> value of the update call, when it got destructed.

Howdy Fabian, using augeas directly in the plugin worked correctly. Please let me know your thoughts in Gerrit. Thanks!
Fabian,

1. If this is not fixed in 3.5, we will encounter bug 1073046: the network is unconfigured and some RHEV-H TUI menu pages cannot be configured.

2. Although there is a workaround (comment 5) — after registering to RHEV-M, enter the "oVirt Engine" page of the RHEV-H setup menu again, and MANAGED_IFNAMES="rhevm" will be added into /etc/default/ovirt — I still suggest fixing this in ovirt-node first. If the augeas side is fixed in 3.5.1, ovirt-node can roll back; open another bug to track the ovirt-node roll-back until the augeas fix lands.
Is there a RHEV-H image on which this can be tested?
For the 3-ack process, let me help by providing the qa_ack+ on the ovirt-node-plugin-vdsm component.
*** Bug 1152446 has been marked as a duplicate of this bug. ***
Test version:
rhev-hypervisor7-7.0-20150123.2.iso
ovirt-node-3.2.1-6.el7.noarch
Red Hat Enterprise Virtualization Manager Version: 3.5.0-0.30.el6ev
ovirt-node-plugin-vdsm-0.2.0-18.el7ev.noarch

Test steps:
1. Setup Network
2. Register Node (Host is UP)
3. Move to the tab above the Registration tab
4. Don't move back to the Registration tab
5. Press F2 and check /etc/default/ovirt

Test result:
MANAGED_IFNAMES="rhevm" is still missing from /etc/default/ovirt after registering RHEV-H to RHEV-M.

Workaround:
1. Setup Network
2. Register Node (Host is UP)
3. Move to the tab above the Registration tab
4. Move back to the Registration tab
5. Press F2 and check /etc/default/ovirt

Because the workaround is still needed to get MANAGED_IFNAMES="rhevm" into /etc/default/ovirt, this bug is not fixed. Re-assigning it.
hadong, do we still encounter this bug on RHEV-H 6.6 for 3.4.z? Based on my understanding of this bug, if it is present we will hit bug 1073046: the network is unconfigured and some RHEV-H TUI menu pages cannot be configured. Is there any other impact from this bug?
Haiyang, what happens if you logout and back in again?
(In reply to Fabian Deutsch from comment #24)
> Haiyang, what happens if you logout and back in again?

MANAGED_IFNAMES="rhevm" is added to /etc/default/ovirt after logging out and back in again.
I'd suggest adding a release note about this issue. The recommended workaround is to log out and back in again.
*** Bug 1219813 has been marked as a duplicate of this bug. ***
Test version:
rhev-hypervisor7-7.2-20150928.0
ovirt-node-3.3.0-0.10.20150928gite7ee3f1.el7ev.noarch
ovirt-node-plugin-vdsm-0.6.1-1.el7ev.noarch
vdsm-4.17.8-1.el7ev.noarch
RHEV-M 3.6.0-0.18.el6

Test steps:
1. Install RHEV-H via PXE.
2. Register RHEV-H to RHEV-M (DC 3.6).
3. Approve RHEV-H to up status.
4. RHEV-H side: Move to the tab above the Registration tab
5. Don't move back to the Registration tab
6. Press F2 and check /etc/default/ovirt

Test result:
After step 3, on the RHEV-H side, Networking shows as Unknown.

# cat /etc/default/ovirt
MANAGED_IFNAMES="ovirtmgmt"
OVIRT_BOOTIF="enp63s0"
OVIRT_BOOTPARAMS="ksdevice=bootif rd.dm=0 rd.md=0 crashkernel=256M lang= max_loop=256 rhgb quiet elevator=deadline rd.live.check rd.luks=0 rd.live.image nomodeset"
OVIRT_BOOTPROTO="dhcp"
OVIRT_CONFIG_VERSION="3.3.0"
OVIRT_FIRSTBOOT="0"
OVIRT_HOSTED_ENGINE_IMAGE_PATH=""
OVIRT_HOSTED_ENGINE_PXE="yes"
OVIRT_INIT=""
OVIRT_INSTALL=""
OVIRT_INSTALL_ROOT="y"
OVIRT_KDUMP_LOCAL="true"
OVIRT_KEYBOARD_LAYOUT="us"
OVIRT_MANAGEMENT_SERVER=""
OVIRT_NODE_REGISTER="True"
OVIRT_ROOT_INSTALL="y"
OVIRT_SSH_PORT="22"
OVIRT_SSH_PWAUTH="yes"
OVIRT_STANDALONE="1"
OVIRT_UPGRADE=""
OVIRT_USE_STRONG_RNG=""
OVIRT_VOL_CONFIG_SIZE="5111"
OVIRT_VOL_DATA_SIZE="281404"
OVIRT_VOL_EFI_SIZE="256"
OVIRT_VOL_LOGGING_SIZE="2048"
OVIRT_VOL_ROOT_SIZE="4300"
OVIRT_VOL_SWAP_SIZE="7826"
OVIRT_MANAGEMENT_PORT="None"

I have to re-assign this bug because MANAGED_IFNAMES shows as "ovirtmgmt".
This is an automated message. oVirt 3.6.0 RC3 has been released and GA is targeted for next week, Nov 4th 2015. Please review this bug and, if it is not a blocker, please postpone to a later release.

All bugs not postponed on GA release will be automatically re-targeted to
- 3.6.1 if severity >= high
- 4.0 if severity < high
Hi shaochen,

(In reply to shaochen from comment #38)
> Test version:
> rhev-hypervisor7-7.2-20150928.0
> ovirt-node-3.3.0-0.10.20150928gite7ee3f1.el7ev.noarch
> ovirt-node-plugin-vdsm-0.6.1-1.el7ev.noarch
> vdsm-4.17.8-1.el7ev.noarch
> RHEV-M 3.6.0-0.18.el6
>
> Test steps:
> 1. Install RHEV-H via PXE.
> 2. Register RHEV-H to RHEV-M(DC 3.6).
> 3. Approve RHEV-H to up status.
> 4. RHEV-H side: Move to tab above of the registration
> 5. Don't move back to Registration tab
> 6. Press F2 and check /etc/default/ovirt
>
> Test result:
> After step3, In RHEV-H side, Networking show as Unknown.
>
> # cat /etc/default/ovirt
> MANAGED_IFNAMES="ovirtmgmt"
> OVIRT_BOOTIF="enp63s0"
> OVIRT_BOOTPARAMS="ksdevice=bootif rd.dm=0 rd.md=0 crashkernel=256M lang=
> max_loop=256 rhgb quiet elevator=deadline rd.live.check rd.luks=0
> rd.live.image nomodeset"
> OVIRT_BOOTPROTO="dhcp"
> OVIRT_CONFIG_VERSION="3.3.0"
> OVIRT_FIRSTBOOT="0"
> OVIRT_HOSTED_ENGINE_IMAGE_PATH=""
> OVIRT_HOSTED_ENGINE_PXE="yes"
> OVIRT_INIT=""
> OVIRT_INSTALL=""
> OVIRT_INSTALL_ROOT="y"
> OVIRT_KDUMP_LOCAL="true"
> OVIRT_KEYBOARD_LAYOUT="us"
> OVIRT_MANAGEMENT_SERVER=""
> OVIRT_NODE_REGISTER="True"
> OVIRT_ROOT_INSTALL="y"
> OVIRT_SSH_PORT="22"
> OVIRT_SSH_PWAUTH="yes"
> OVIRT_STANDALONE="1"
> OVIRT_UPGRADE=""
> OVIRT_USE_STRONG_RNG=""
> OVIRT_VOL_CONFIG_SIZE="5111"
> OVIRT_VOL_DATA_SIZE="281404"
> OVIRT_VOL_EFI_SIZE="256"
> OVIRT_VOL_LOGGING_SIZE="2048"
> OVIRT_VOL_ROOT_SIZE="4300"
> OVIRT_VOL_SWAP_SIZE="7826"
> OVIRT_MANAGEMENT_PORT="None"
>
> I have to assigned this bug due to MANAGED_IFNAMES=show as "ovirtmgmt".

The component which creates the network bridge name ('ovirtmgmt' or 'rhevm') is VDSM, not ovirt-node-plugin-vdsm. ovirt-node-plugin-vdsm consults VDSM and updates /etc/default/ovirt [1]. If VDSM is using ovirtmgmt, then /etc/default/ovirt will show ovirtmgmt and not rhevm. Based on this, I am moving the bug back to ON_QA. Please also remove the FailedQA flag.
You could probably re-test it using a newer/latest VDSM version with an updated/latest RHEV-H build. Please let me know if you need any additional information from my side.

[1] https://github.com/oVirt/ovirt-node-plugin-vdsm/blob/master/src/engine_page.py#L89

Thanks!
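The division of responsibility Douglas describes — VDSM owns the bridge name, and the plugin only copies it into /etc/default/ovirt — can be sketched like this (`get_vdsm_bridge_name` is an illustrative placeholder, not a real VDSM API):

```python
# Sketch of the flow: the management bridge name comes from VDSM
# ("ovirtmgmt" on current versions, "rhevm" historically), and the
# plugin only reproduces it in /etc/default/ovirt.
def get_vdsm_bridge_name():
    """Placeholder for querying VDSM for the management bridge name."""
    return "ovirtmgmt"

def managed_ifnames_line(bridge=None):
    """Build the /etc/default/ovirt line from whatever VDSM reports;
    the plugin never decides the name itself."""
    bridge = bridge or get_vdsm_bridge_name()
    return 'MANAGED_IFNAMES="%s"' % bridge
```

Under this model, MANAGED_IFNAMES="ovirtmgmt" in the comment-38 dump is expected output, not a plugin bug.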
Test version:
rhev-hypervisor7-7.2-20151025.0
ovirt-node-3.3.0-0.18.20151022git82dc52c.el7ev.noarch
vdsm-4.17.10-5.el7ev.noarch
RHEV-M 3.6.0.2-0.1.el6
ovirt-node-plugin-vdsm-0.6.1-1.el7ev.noarch

Test steps:
1. Install RHEV-H via PXE.
2. Register RHEV-H to RHEV-M 3.6.
3. Approve RHEV-H to up status.
4. Press F2 and check /etc/default/ovirt

Test result:
/etc/default/ovirt shows ovirtmgmt. The bug is fixed; changing the bug status to VERIFIED.
Moving 'requires_doc_text' to '-', as this bug is verified and no longer needs to be documented as a known issue.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0378.html