Created attachment 947130 [details]
log files

Description of problem:
After auto-installing RHEV-H 7.0 with the "management_server=$RHEV-M_IP" parameter, the host does not register to RHEV-M 3.5.

Version-Release number of selected component (if applicable):
rhev-hypervisor7-7.0-20141006.0.el7ev.noarch
ovirt-node-3.1.0-0.20.20141006gitc421e04.el7.noarch
ovirt-node-plugin-vdsm-0.2.0-9.el7.noarch
vdsm-4.16.6-1.el7.x86_64
Red Hat Enterprise Virtualization Manager Version: 3.5.0-0.14.beta.el6ev

How reproducible:
100%

Steps to Reproduce:
1. Auto-install RHEV-H with the "management_server=$RHEV-M_IP" parameter.
2. Check for the host on the RHEV-M 3.5 side.
3. Check on the RHEV-H side.

Actual results:
1. After step 2, the RHEV-H 7.0 host is not registered on the RHEV-M 3.5 side.
2. After step 3, the RHEV-H status page shows "managed by: oVirt Engine http://10.66.110.5", but the network is still em1.
# cat /etc/default/ovirt
MANAGED_BY="oVirt Engine http://10.66.110.5"
OVIRT_MANAGEMENT_SERVER="10.66.110.5"

Expected results:
1. After step 2, the host should be listed on the RHEV-M 3.5 side.

Additional info:
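For reference, a typical autoinstall boot line for this scenario looks like the following. This is a sketch assembled from the flags quoted elsewhere in this bug; the device name, password hash and IP are illustrative, and adminpw appears to take an already-crypted password rather than plain text:

  BOOTIF=em1 storage_init=/dev/sda adminpw=<crypted-password> firstboot management_server=10.66.110.5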
Created attachment 947131 [details]
screenshot of the status page

Created attachment 947133 [details]
screenshot of the RHEV-M page
Hi wanghui,

vdsm-reg.log is empty (related to BZ#1150238), so I cannot see any data from vdsm-reg. Could you please share /etc/vdsm-reg/vdsm-reg.conf?
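For anyone collecting this, /etc/vdsm-reg/vdsm-reg.conf is a plain key=value file, and the entries of interest here are the ones naming the manager. Roughly (the exact key names and values below are assumptions, shown only to indicate what to look for):

  # cat /etc/vdsm-reg/vdsm-reg.conf
  vdc_host_name=10.66.110.5
  vdc_host_port=443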
This bug can be reproduced on RHEV-H 6.6 for RHEV 3.5 (rhev-hypervisor6-6.6-20141107.0.iso):
ovirt-node-3.1.0-0.25.20141107gitf6dc7b9.el6.noarch
vdsm-4.16.7.3-1.el6ev.x86_64
ovirt-node-plugin-vdsm-0.2.0-11.el6ev.noarch
rhevm-3.5.0-0.19.beta.el6ev.noarch
This bug can be reproduced on RHEV-H 7.0 for RHEV 3.5 (rhev-hypervisor7-7.0-20141107.0):
ovirt-node-3.1.0-0.25.20141107gitf6dc7b9.el7.noarch
vdsm-4.16.7.3-1.el7ev.x86_64
ovirt-node-plugin-vdsm-0.2.0-10.el7ev.noarch
rhevm-3.5.0-0.19.beta.el6ev.noarch
It might have been a one-off. I cannot reproduce it anymore.
(In reply to Fabian Deutsch from comment #8)
> It might have been a one-off. I cannot reproduce it anymore.

From my findings, vdsm-reg is not enabled for auto-start.

ovirt-node autoinstall flags used:
==========================================
firstboot storage_init=/dev/sda adminpw=RHhwCLrQXB8zE management_server=192.168.100.185 BOOTIF=ens3

# systemctl list-unit-files --type=service | grep -i vdsm
supervdsmd.service    static
vdsm-reg.service      disabled
vdsmd.service         enabled

In vdsm.spec I see:

%post reg
%if ! 0%{?with_systemd}
if [ "$1" -eq 1 ] ; then
    /sbin/chkconfig --add vdsm-reg
fi
%else
%if 0%{?with_systemd}
%systemd_post vdsm-reg.service

When the %systemd_post macro is used, the service is not enabled for the next reboot. However, if I replace the macro with direct systemctl calls, vdsm-reg is enabled:

/bin/systemctl enable vdsm-reg.service >/dev/null 2>&1 || :
/bin/systemctl daemon-reload >/dev/null 2>&1 || :

@Fabian, I am using the current ovirt-node tree (master) and vdsm (master) on a local rhevh7 build and reproduced this with the above autoinstall flags. Please let me know if this is a similar environment or if you have any suggestions.
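For context on why the macro behaves this way: %systemd_post does not unconditionally enable a unit. On initial installation it runs (roughly) "systemctl preset" for the unit, so the unit only ends up enabled if a preset file requests it. A quick way to inspect the effective state on the installed image (a debugging sketch, not commands from the original report):

  # systemctl is-enabled vdsm-reg.service
  # systemctl preset vdsm-reg.service
  (the second command is approximately what %systemd_post runs at install time)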
(In reply to Fabian Deutsch from comment #8)
> It might have been a one-off. I cannot reproduce it anymore.

Ignore this comment. I wrote this in the wrong bug.
(In reply to Douglas Schilling Landgraf from comment #9)
> (In reply to Fabian Deutsch from comment #8)
> > It might have been a one-off. I cannot reproduce it anymore.
>
> From my findings, vdsm-reg is not enabled for auto-start.

…

> %systemd_post vdsm-reg.service
>
> When the %systemd_post macro is used, the service is not enabled for the
> next reboot. However, if I replace the macro with direct systemctl calls,
> vdsm-reg is enabled:

This probably requires that vdsm-reg is named in some preset file.

>
> /bin/systemctl enable vdsm-reg.service >/dev/null 2>&1 || :
> /bin/systemctl daemon-reload >/dev/null 2>&1 || :
>
> @Fabian, I am using the current ovirt-node tree (master) and vdsm (master)
> on a local rhevh7 build and reproduced this with the above autoinstall
> flags. Please let me know if this is a similar environment or if you have
> any suggestions.

We can go with the way above on Node.
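As a sketch of the preset approach Fabian mentions (the file name and priority below are assumptions, not something vdsm actually ships): a preset drop-in would let %systemd_post enable the unit, since "systemctl preset" consults these files.

  # cat /usr/lib/systemd/system-preset/85-vdsm.preset   (hypothetical path)
  enable vdsm-reg.service

Without such an entry, the distribution's default preset (typically "disable *") applies and the unit stays disabled after %systemd_post, which matches the "disabled" state seen above.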
(In reply to Fabian Deutsch from comment #11)
> (In reply to Douglas Schilling Landgraf from comment #9)
> > (In reply to Fabian Deutsch from comment #8)
> > > It might have been a one-off. I cannot reproduce it anymore.
> >
> > From my findings, vdsm-reg is not enabled for auto-start.
>
> …
>
> > %systemd_post vdsm-reg.service
> >
> > When the %systemd_post macro is used, the service is not enabled for the
> > next reboot. However, if I replace the macro with direct systemctl calls,
> > vdsm-reg is enabled:
>
> This probably requires that vdsm-reg is named in some preset file.
>
> >
> > /bin/systemctl enable vdsm-reg.service >/dev/null 2>&1 || :
> > /bin/systemctl daemon-reload >/dev/null 2>&1 || :
> >
> > @Fabian, I am using the current ovirt-node tree (master) and vdsm (master)
> > on a local rhevh7 build and reproduced this with the above autoinstall
> > flags. Please let me know if this is a similar environment or if you have
> > any suggestions.
>
> We can go with the way above on Node.

I have added the systemctl enable and chkconfig add via ovirt-node-plugin-vdsm, and that did the job. However, vdsm-reg still does not register via auto-registration (although it works when run manually).

/bin/systemctl status -l vdsm-reg.service
vdsm-reg.service - Virtual Desktop Server Registration
   Loaded: loaded (/usr/lib/systemd/system/vdsm-reg.service; enabled)
   Active: failed (Result: exit-code) since Wed 2014-11-19 17:25:18 UTC; 17min ago
  Process: 1866 ExecStop=/lib/systemd/systemd-vdsm-reg stop (code=exited, status=0/SUCCESS)
  Process: 1080 ExecStart=/lib/systemd/systemd-vdsm-reg start (code=exited, status=0/SUCCESS)
 Main PID: 1863 (code=exited, status=1/FAILURE)
   CGroup: /system.slice/vdsm-reg.service

Nov 19 17:25:02 localhost systemd[1]: Starting Virtual Desktop Server Registration...
Nov 19 17:25:02 localhost systemd-vdsm-reg[1080]: vdsm-reg: starting
Nov 19 17:25:02 localhost systemd-vdsm-reg[1080]: Starting up vdsm-reg daemon:
Nov 19 17:25:17 localhost systemd-vdsm-reg[1080]: vdsm-reg start[  OK  ]
Nov 19 17:25:17 localhost systemd-vdsm-reg[1080]: vdsm-reg: ended.
Nov 19 17:25:17 localhost systemd[1]: Started Virtual Desktop Server Registration.
Nov 19 17:25:18 localhost systemd[1]: vdsm-reg.service: main process exited, code=exited, status=1/FAILURE
Nov 19 17:25:18 localhost systemd[1]: Unit vdsm-reg.service entered failed state.

From the vdsm-reg log:
============================
<snip>
MainThread::DEBUG::2014-11-19 17:11:00,848::vdsm-reg-setup::83::root::validate start
MainThread::DEBUG::2014-11-19 17:11:00,849::vdsm-reg-setup::93::root::validate end. return: False
MainThread::INFO::2014-11-19 17:25:17,242::vdsm-reg-setup::390::vdsRegistrator::After daemonize - My pid is 1863
MainThread::DEBUG::2014-11-19 17:25:17,244::vdsm-reg-setup::44::root::__init__ begin.
MainThread::DEBUG::2014-11-19 17:25:17,947::deployUtil::444::root::_getMGTIface: read host name: 192.168.100.185
MainThread::DEBUG::2014-11-19 17:25:17,948::deployUtil::452::root::_getMGTIface: using host name 192.168.100.185 strIP= 192.168.100.185
MainThread::DEBUG::2014-11-19 17:25:17,950::deployUtil::459::root::_getMGTIface IP=192.168.100.185 strIface=None
</snip>

vdsm, ovirt-node and ovirt-node-plugin-vdsm are built from the master tree.
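The last _getMGTIface line above ends with strIface=None, i.e. vdsm-reg could not map the manager's IP to a local interface. A quick sanity check on the host for that step (a debugging sketch, not from the original comment; the IP is taken from the log above):

  # ip route get 192.168.100.185
  (shows which local interface and source address route to the manager)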
Created attachment 959103 [details]
vdsm_reg_log
Hi,

I am moving this to POST, as I have re-tested under the 3.5 branch and vdsm-reg works as expected. So the only patch missing at the moment is the one I have attached to this bug for ovirt-node-plugin-vdsm. The master branch is probably quite far from 3.5 at the moment, and any issues found along the way will be handled in time for the next release.
I cannot reproduce it with rhev-hypervisor6-6.6-20141119.0. From my findings, I believe this report is only valid on EL7. If that is not the case, please let me know.

Flags used:
firstboot storage_init=/dev/sda adminpw=RHhwCLrQXB8zE management_server=192.168.122.70 BOOTIF=link

RHEVM: 3.5.0-0.21.el6ev
Hi Fabian,

Could you please review the devel flag on this bug? Thanks.
Let me help to qa_ack+ this bug on the Virt QE side.
Hui, could you please check comment 7 and comment 15 with the latest 6.6_3.5 build (rhev-hypervisor6-6.6-20141119.0) to see whether this bug still exists? Thanks.
(In reply to Ying Cui from comment #18)
> Hui, could you please check comment 7 and comment 15 with the latest
> 6.6_3.5 build (rhev-hypervisor6-6.6-20141119.0) to see whether this bug
> still exists? Thanks.

This should read comment 6 and comment 15.
Test version:
rhev-hypervisor6-6.6-20141119.0
ovirt-node-3.1.0-0.27.20141119git24e087e.el6.noarch
ovirt-node-plugin-vdsm-0.2.0-12.el6ev.noarch
Red Hat Enterprise Virtualization Manager Version: 3.5.0-0.21.el6ev

Test steps:
1. Auto-install RHEV-H with the "management_server=$RHEV-M_IP" parameter.
2. Check on the RHEV-M side.

Test result:
1. After step 2, the RHEV-H host registers to RHEV-M 3.5 and comes up correctly.

So this issue is fixed in rhev-hypervisor6-6.6-20141119.0.
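For completeness, registration can also be confirmed from the RHEV-M side via the REST API. The following is a sketch; the URL, user and password are placeholders, not values from this report:

  # curl -k -u admin@internal:PASSWORD https://rhevm.example.com/api/hosts
  (the newly registered host should appear in the returned host collection)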
Hello Bronce,

Could you please review the pm flag? Thanks!
Doc text is added per engineering request. Please update the doc text for GA, or simply set the 'requires_release_note' flag to "-" and the tool we use will exclude it from the GA release notes.

Cheers,
Julie
Clearing the doctext flags, because the bug is going to be fixed for GA.
Test version:
rhevh-7.0-20141119.0.el7ev.iso
ovirt-node-3.1.0-0.27.20141119git24e087e.el7.noarch
ovirt-node-plugin-vdsm-0.2.0-12.el7ev.noarch
vdsm-4.16.7.4-1.el7ev.x86_64
Red Hat Enterprise Virtualization Manager Version: 3.5.0-0.23.beta.el6ev

Test steps:
1. Auto-install rhevh-7.0-20141119.0.el7ev.iso with the following parameters:
BOOTIF=em1 storage_init=/dev/sda adminpw=4DHc2Jl0D05xk firstboot management_server=10.66.110.5
2. Check on the RHEV-M side.
3. Check on the RHEV-H side.

Test result:
1. After step 2, the RHEV-H 7.0 host is not registered on the RHEV-M 3.5 side.
2. After step 3, the RHEV-H status page shows "managed by: oVirt Engine http://10.66.110.5", but the network is still em1.
# cat /etc/default/ovirt
MANAGED_BY="oVirt Engine http://10.66.110.5"
OVIRT_MANAGEMENT_SERVER="10.66.110.5"

So this issue is not fixed in rhevh-7.0-20141119.0.el7ev.iso. Changing the status from ON_QA to ASSIGNED.
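When a verification like this fails, the quickest checks on the host are the unit state and the vdsm-reg log. The commands below are a debugging sketch, not steps from the original comment, and the log path is an assumption based on the attachment earlier in this bug:

  # systemctl is-enabled vdsm-reg.service
  # journalctl -u vdsm-reg.service -b
  # cat /var/log/vdsm-reg/vdsm-reg.log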
This bug needs to be verified with a RHEV-H build later than 1204.
Test version:
rhev-hypervisor7-7.0-20141212.0.iso
ovirt-node-3.1.0-0.34.20141210git0c9c493.el7.noarch
ovirt-node-plugin-vdsm-0.2.0-14.el7ev.noarch
vdsm-4.16.8.1-3.el7ev.x86_64
Red Hat Enterprise Virtualization Manager Version: 3.5.0-0.25.beta.el6ev

Test steps:
1. Auto-install rhev-hypervisor7-7.0-20141212.0.iso with the following parameters:
BOOTIF=em1 storage_init=/dev/sda adminpw=4DHc2Jl0D05xk firstboot management_server=10.66.110.5

Test result:
1. After step 1, the RHEV-H host registers to RHEV-M 3.5 and comes up correctly.

So this issue is fixed in rhev-hypervisor7-7.0-20141212.0.iso. Changing the status from ON_QA to VERIFIED.
RHEV 3.5.0 has been released. I am closing this bug because it has been VERIFIED.