Bug 1152916 - [3.5-7.0] Can not register to rhevm3.5 when auto install rhevh6.6/rhevh7.0 with "management_server=$RHEV-M_IP" parameter
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-node-plugin-vdsm
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: 3.5.0
Assignee: Douglas Schilling Landgraf
QA Contact: Pavol Brilla
URL:
Whiteboard: node
Depends On:
Blocks: rhevh-7.0
 
Reported: 2014-10-15 08:07 UTC by wanghui
Modified: 2016-02-10 20:09 UTC
CC List: 16 users

Fixed In Version: ovirt-node-plugin-vdsm-0.2.0-14
Doc Type: Known Issue
Doc Text:
On Red Hat Enterprise Virtualization Hypervisor 7.0, vdsm-reg is not started automatically, which causes the Hypervisor registration to fail when automatic installation is used. The current workaround is to manually register and attach the Red Hat Enterprise Virtualization Hypervisor 7.0 host to the Red Hat Enterprise Virtualization Manager.
Clone Of:
Environment:
Last Closed: 2015-02-12 14:02:15 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:


Attachments
log files (118.49 KB, application/x-gzip), 2014-10-15 08:07 UTC, wanghui
screen shot of status page (42.89 KB, image/png), 2014-10-15 08:07 UTC, wanghui
screen shot of rhevm page (38.59 KB, image/png), 2014-10-15 08:08 UTC, wanghui
vdsm_reg_log (9.70 KB, text/plain), 2014-11-19 17:50 UTC, Douglas Schilling Landgraf


Links
oVirt gerrit 35356: spec: enable vdsm-reg to autostart (master, MERGED)
oVirt gerrit 35435: spec: enable vdsm-reg to autostart (ovirt-3.5, MERGED)

Description wanghui 2014-10-15 08:07:06 UTC
Created attachment 947130 [details]
log files

Description of problem:
After auto-installing RHEV-H 7.0 with the "management_server=$RHEV-M_IP" parameter, the host cannot register to RHEV-M 3.5.

Version-Release number of selected component (if applicable):
rhev-hypervisor7-7.0-20141006.0.el7ev.noarch
ovirt-node-3.1.0-0.20.20141006gitc421e04.el7.noarch
ovirt-node-plugin-vdsm-0.2.0-9.el7.noarch
vdsm-4.16.6-1.el7.x86_64
Red Hat Enterprise Virtualization Manager Version: 3.5.0-0.14.beta.el6ev

How reproducible:
100%

Steps to Reproduce:
1. Auto-install RHEV-H with the "management_server=$RHEV-M_IP" parameter
2. Check the RHEV-H host on the RHEV-M 3.5 side
3. Check on the RHEV-H side

Actual results:
1. After step 2, RHEV-H 7.0 is not registered on the RHEV-M 3.5 side.
2. After step 3, the RHEV-H host displays managed by: oVirt Engine http://10.66.110.5, but the network is still em1:
   # cat /etc/default/ovirt
     MANAGED_BY="oVirt Engine http://10.66.110.5"
     OVIRT_MANAGEMENT_SERVER="10.66.110.5"

Expected results:
1. After step 2, the RHEV-H host should be listed on the RHEV-M 3.5 side.

Additional info:

Comment 1 wanghui 2014-10-15 08:07:56 UTC
Created attachment 947131 [details]
screen shot of status page

Comment 2 wanghui 2014-10-15 08:08:39 UTC
Created attachment 947133 [details]
screen shot of rhevm page

Comment 3 Douglas Schilling Landgraf 2014-10-16 13:26:36 UTC
Hi wanghui,

vdsm-reg.log is empty (related to BZ#1150238) and I cannot see any data from vdsm-reg. Could you please share /etc/vdsm-reg/vdsm-reg.conf?

Comment 6 cshao 2014-11-10 08:32:02 UTC
This bug can be reproduced on 
RHEV-H 6.6 for RHEV 3.5 (rhev-hypervisor6-6.6-20141107.0.iso)
ovirt-node-3.1.0-0.25.20141107gitf6dc7b9.el6.noarch
vdsm-4.16.7.3-1.el6ev.x86_64
ovirt-node-plugin-vdsm-0.2.0-11.el6ev.noarch
rhevm-3.5.0-0.19.beta.el6ev.noarch

Comment 7 cshao 2014-11-10 15:44:59 UTC
This bug can be reproduced on RHEV-H 7.0 for RHEV 3.5 
(rhev-hypervisor7-7.0-20141107.0)
ovirt-node-3.1.0-0.25.20141107gitf6dc7b9.el7.noarch
vdsm-4.16.7.3-1.el7ev.x86_64
ovirt-node-plugin-vdsm-0.2.0-10.el7ev.noarch
rhevm-3.5.0-0.19.beta.el6ev.noarch

Comment 8 Fabian Deutsch 2014-11-18 15:45:04 UTC
It might have been a single time. I can not reproduce it anymore.

Comment 9 Douglas Schilling Landgraf 2014-11-19 04:44:21 UTC
(In reply to Fabian Deutsch from comment #8)
> It might have been a single time. I can not reproduce it anymore.

From my findings, vdsm-reg is disabled as an auto-start service.

ovirt-node autoinstall flags used:
==========================================
firstboot storage_init=/dev/sda adminpw=RHhwCLrQXB8zE management_server=192.168.100.185 BOOTIF=ens3

# systemctl list-unit-files --type=service | grep -i vdsm
supervdsmd.service                     static  
vdsm-reg.service                       disabled
vdsmd.service                          enabled 

In vdsm.spec I see:

%post reg
%if ! 0%{?with_systemd}
if [ "$1" -eq 1 ] ; then
    /sbin/chkconfig --add vdsm-reg
fi
%else
%if 0%{?with_systemd}
%systemd_post vdsm-reg.service

When the %systemd_post macro is used, it does not enable the service for the next reboot. However, if I use explicit commands instead of the macro, vdsm-reg is enabled:

    /bin/systemctl enable vdsm-reg.service >/dev/null 2>&1 || :
    /bin/systemctl daemon-reload >/dev/null 2>&1 || :

@Fabian, I am using the current ovirt-node tree (master) and vdsm (master) on a local rhevh7 build and reproduced the issue with the above autoinstall flags. Please let me know if your environment is similar or if you have any suggestions.
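
For illustration only (a sketch, not the actual merged patch), the explicit-enable approach above could be expressed as a %post scriptlet along these lines; the scriptlet placement and conditionals are assumptions:

%post
%if 0%{?with_systemd}
# Explicitly enable the unit instead of relying on %systemd_post, which
# only applies vendor presets and leaves vdsm-reg disabled by default.
/bin/systemctl enable vdsm-reg.service >/dev/null 2>&1 || :
/bin/systemctl daemon-reload >/dev/null 2>&1 || :
%else
/sbin/chkconfig --add vdsm-reg
/sbin/chkconfig vdsm-reg on
%endif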

Comment 10 Fabian Deutsch 2014-11-19 08:41:33 UTC
(In reply to Fabian Deutsch from comment #8)
> It might have been a single time. I can not reproduce it anymore.

Ignore this comment. I wrote this in the wrong bug.

Comment 11 Fabian Deutsch 2014-11-19 13:26:40 UTC
(In reply to Douglas Schilling Landgraf from comment #9)
> (In reply to Fabian Deutsch from comment #8)
> > It might have been a single time. I can not reproduce it anymore.
> 
> From my findings, vdsm-reg is disabled as an auto-start service.

…

> %systemd_post vdsm-reg.service
>
> When the %systemd_post macro is used, it does not enable the service for the
> next reboot. However, if I use explicit commands instead of the macro,
> vdsm-reg is enabled:

This probably requires that vdsm-reg is named in some preset file.

> 
>     /bin/systemctl enable vdsm-reg.service >/dev/null 2>&1 || :
>     /bin/systemctl daemon-reload >/dev/null 2>&1 || :
>
> @Fabian, I am using the current ovirt-node tree (master) and vdsm (master)
> on a local rhevh7 build and reproduced the issue with the above autoinstall
> flags. Please let me know if your environment is similar or if you have any
> suggestions.

We can go with the way above on Node.
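
For reference, the preset-based route mentioned above would look roughly like the sketch below; the file name and location are illustrative. %systemd_post runs systemctl preset, which only enables a unit if a preset file lists it:

# Hypothetical preset file, e.g. /usr/lib/systemd/system-preset/90-vdsm-reg.preset
enable vdsm-reg.service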

Comment 12 Douglas Schilling Landgraf 2014-11-19 17:48:53 UTC
(In reply to Fabian Deutsch from comment #11)
> (In reply to Douglas Schilling Landgraf from comment #9)
> > (In reply to Fabian Deutsch from comment #8)
> > > It might have been a single time. I can not reproduce it anymore.
> > 
> > From my findings, vdsm-reg is disabled as an auto-start service.
> 
> …
> 
> > %systemd_post vdsm-reg.service
> >
> > When the %systemd_post macro is used, it does not enable the service for
> > the next reboot. However, if I use explicit commands instead of the macro,
> > vdsm-reg is enabled:
> 
> This probably requires that vdsm-reg is named in some preset file.
> 
> > 
> >     /bin/systemctl enable vdsm-reg.service >/dev/null 2>&1 || :
> >     /bin/systemctl daemon-reload >/dev/null 2>&1 || :
> >
> > @Fabian, I am using the current ovirt-node tree (master) and vdsm (master)
> > on a local rhevh7 build and reproduced the issue with the above autoinstall
> > flags. Please let me know if your environment is similar or if you have any
> > suggestions.
> 
> We can go with the way above on Node.

I have added the systemctl enable and chkconfig --add calls via ovirt-node-plugin-vdsm and that made it work. However, vdsm-reg is still not registering via auto-registration (although it works manually).

/bin/systemctl status  -l vdsm-reg.service
vdsm-reg.service - Virtual Desktop Server Registration
   Loaded: loaded (/usr/lib/systemd/system/vdsm-reg.service; enabled)
   Active: failed (Result: exit-code) since Wed 2014-11-19 17:25:18 UTC; 17min ago
  Process: 1866 ExecStop=/lib/systemd/systemd-vdsm-reg stop (code=exited, status=0/SUCCESS)
  Process: 1080 ExecStart=/lib/systemd/systemd-vdsm-reg start (code=exited, status=0/SUCCESS)
 Main PID: 1863 (code=exited, status=1/FAILURE)
   CGroup: /system.slice/vdsm-reg.service

Nov 19 17:25:02 localhost systemd[1]: Starting Virtual Desktop Server Registration...
Nov 19 17:25:02 localhost systemd-vdsm-reg[1080]: vdsm-reg: starting
Nov 19 17:25:02 localhost systemd-vdsm-reg[1080]: Starting up vdsm-reg daemon:
Nov 19 17:25:17 localhost systemd-vdsm-reg[1080]: vdsm-reg start[  OK  ]
Nov 19 17:25:17 localhost systemd-vdsm-reg[1080]: vdsm-reg: ended.
Nov 19 17:25:17 localhost systemd[1]: Started Virtual Desktop Server Registration.
Nov 19 17:25:18 localhost systemd[1]: vdsm-reg.service: main process exited, code=exited, status=1/FAILURE
Nov 19 17:25:18 localhost systemd[1]: Unit vdsm-reg.service entered failed state.

From vdsm-reg log
============================
<snip>
MainThread::DEBUG::2014-11-19 17:11:00,848::vdsm-reg-setup::83::root::validate start
MainThread::DEBUG::2014-11-19 17:11:00,849::vdsm-reg-setup::93::root::validate end. return: False
MainThread::INFO::2014-11-19 17:25:17,242::vdsm-reg-setup::390::vdsRegistrator::After daemonize - My pid is 1863
MainThread::DEBUG::2014-11-19 17:25:17,244::vdsm-reg-setup::44::root::__init__ begin.
MainThread::DEBUG::2014-11-19 17:25:17,947::deployUtil::444::root::_getMGTIface: read host name: 192.168.100.185
MainThread::DEBUG::2014-11-19 17:25:17,948::deployUtil::452::root::_getMGTIface: using host name 192.168.100.185 strIP= 192.168.100.185
MainThread::DEBUG::2014-11-19 17:25:17,950::deployUtil::459::root::_getMGTIface IP=192.168.100.185 strIface=None
</snip>

vdsm, ovirt-node, and ovirt-node-plugin-vdsm were built from the master tree.
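
The strIface=None result above suggests the management-interface lookup came back empty. A quick, hypothetical sanity check (independent of deployUtil) is to ask which interface currently routes to the management server and whether a management bridge exists yet:

# Which interface does traffic to the RHEV-M address (192.168.100.185) leave from?
ip route get 192.168.100.185
# List links to check whether a management bridge has been created.
ip link show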

Comment 13 Douglas Schilling Landgraf 2014-11-19 17:50:33 UTC
Created attachment 959103 [details]
vdsm_reg_log

Comment 14 Douglas Schilling Landgraf 2014-11-20 13:45:30 UTC
Hi, 

I am moving this to POST, as I have re-tested under the 3.5 branch and vdsm-reg is working as expected. So the missing patch at the moment is the one I have attached to the bug for ovirt-node-plugin-vdsm. The master branch has diverged considerably from 3.5, and any issues found along the way will be handled in time for the next release.

Comment 15 Douglas Schilling Landgraf 2014-11-24 14:57:00 UTC
I cannot reproduce it with rhev-hypervisor6-6.6-20141119.0. Based on my findings, I believe this report is only valid on EL7. If that is not the case, please let me know.

Flags used:

firstboot storage_init=/dev/sda adminpw=RHhwCLrQXB8zE management_server=192.168.122.70 BOOTIF=link

RHEVM: 3.5.0-0.21.el6ev

Comment 16 Douglas Schilling Landgraf 2014-11-24 19:53:32 UTC
Hi Fabian, 

Could you please review the devel flag on this bug?

Thanks

Comment 17 Ying Cui 2014-11-25 02:05:25 UTC
Let me help to qa_ack+ this bug on the Virt QE side.

Comment 18 Ying Cui 2014-11-25 02:07:34 UTC
Hui, could you please check comment 7 and comment 15 with the latest 6.6_3.5 build (rhev-hypervisor6-6.6-20141119.0) to confirm whether this bug still exists? Thanks.

Comment 19 Ying Cui 2014-11-25 02:08:47 UTC
(In reply to Ying Cui from comment #18)
> Hui, could you please check comment 7 and comment 15 with the latest 6.6_3.5
> build (rhev-hypervisor6-6.6-20141119.0) to confirm whether this bug still
> exists? Thanks.

That should be comment 6 and comment 15.

Comment 20 wanghui 2014-11-25 04:51:15 UTC
Test version:
rhev-hypervisor6-6.6-20141119.0
ovirt-node-3.1.0-0.27.20141119git24e087e.el6.noarch
ovirt-node-plugin-vdsm-0.2.0-12.el6ev.noarch
Red Hat Enterprise Virtualization Manager Version: 3.5.0-0.21.el6ev

Test step:
1. Auto-install RHEV-H with the "management_server=$RHEV-M_IP" parameter
2. Check on the RHEV-M side

Test result:
1. After step 2, the RHEV-H host registered to RHEV-M 3.5 and came up correctly.

So this issue is fixed in rhev-hypervisor6-6.6-20141119.0 now.

Comment 22 Douglas Schilling Landgraf 2014-11-25 19:46:26 UTC
Hello Bronce,

Could you please review the pm flag? 

Thanks!

Comment 24 Julie 2014-11-26 08:28:43 UTC
Doc text has been added per engineering request. Please update the doc text for GA, or simply set the 'requires_release_note' flag to '-' and the tool we use will exclude it from the GA release notes.

Cheers,
Julie

Comment 25 Fabian Deutsch 2014-12-04 08:30:40 UTC
Clearing the doctext flags, because the bug is going to be fixed for GA.

Comment 27 wanghui 2014-12-11 08:13:42 UTC
Test version:
rhevh-7.0-20141119.0.el7ev.iso
ovirt-node-3.1.0-0.27.20141119git24e087e.el7.noarch
ovirt-node-plugin-vdsm-0.2.0-12.el7ev.noarch
vdsm-4.16.7.4-1.el7ev.x86_64
Red Hat Enterprise Virtualization Manager Version: 3.5.0-0.23.beta.el6ev

Test step:
1. Auto-install rhevh-7.0-20141119.0.el7ev.iso with the following parameters:
   BOOTIF=em1 storage_init=/dev/sda adminpw=4DHc2Jl0D05xk firstboot management_server=10.66.110.5

Test result:
1. After installation, RHEV-H 7.0 is not registered on the RHEV-M 3.5 side.
2. On the RHEV-H side, the host displays managed by: oVirt Engine http://10.66.110.5, but the network is still em1:
   # cat /etc/default/ovirt
     MANAGED_BY="oVirt Engine http://10.66.110.5"
     OVIRT_MANAGEMENT_SERVER="10.66.110.5"

So this issue is not fixed in rhevh-7.0-20141119.0.el7ev.iso. Changing the status from ON_QA to ASSIGNED.

Comment 28 Fabian Deutsch 2014-12-11 08:16:44 UTC
This bug needs to be verified with a RHEV-H build later than 1204.

Comment 29 wanghui 2014-12-17 06:03:20 UTC
Test version:
rhev-hypervisor7-7.0-20141212.0.iso
ovirt-node-3.1.0-0.34.20141210git0c9c493.el7.noarch
ovirt-node-plugin-vdsm-0.2.0-14.el7ev.noarch
vdsm-4.16.8.1-3.el7ev.x86_64
Red Hat Enterprise Virtualization Manager Version: 3.5.0-0.25.beta.el6ev

Test step:
1. Auto-install rhev-hypervisor7-7.0-20141212.0.iso with the following parameters:
   BOOTIF=em1 storage_init=/dev/sda adminpw=4DHc2Jl0D05xk firstboot management_server=10.66.110.5

Test result:
1. After step 1, the RHEV-H host registered to RHEV-M 3.5 and came up correctly.

So this issue is fixed in rhev-hypervisor7-7.0-20141212.0.iso now. Changing the status from ON_QA to VERIFIED.

Comment 31 Fabian Deutsch 2015-02-12 14:02:15 UTC
RHEV 3.5.0 has been released. I am closing this bug, because it has been VERIFIED.

