Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1052841

Summary: Host can't move to maintenance, since it "thinks" it has a VM, after vdsm downgrade.
Product: Red Hat Enterprise Virtualization Manager
Reporter: Ilanit Stein <istein>
Component: vdsm
Assignee: Vinzenz Feenstra [evilissimo] <vfeenstr>
Status: CLOSED CURRENTRELEASE
QA Contact: Ilanit Stein <istein>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 3.3.0
CC: acathrow, bazulay, fromani, gklein, iheim, jbelka, lpeer, mavital, ofrenkel, Rhev-m-bugs, sherold, yeylon
Target Milestone: ---
Keywords: Reopened
Target Release: 3.4.0
Hardware: Unspecified
OS: Unspecified
Whiteboard: virt
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-03-31 07:05:32 UTC
Type: Bug
Attachments:
engine.log (flags: none)
source host logs (called cyan-vdse...) (flags: none)
destination host (silver-vdsc...) (flags: none)

Description Ilanit Stein 2014-01-14 07:56:07 UTC
Description of problem:
Flow:
====
I performed a vdsm downgrade (removed vdsm 3.3 and installed vdsm 3.2) while one VM was running on the host (downgrade in general is not a supported flow).
During the downgrade, the VM moved to an Unknown state (expected).
When the downgrade finished, I started the vdsm service -> the VM was migrated to another host (the engine's decision, which seems correct).

The problem:
===========
The host whose vdsm was downgraded cannot be moved to maintenance,
even though the VM was migrated to another host!
It still "thinks" it has the VM in Down state, in a migration process.

Workaround:
===========
Restarting vdsm resolves the problem; after that, the host can move to maintenance (see the sketch below).
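
A minimal sketch of the workaround, assuming the host runs vdsm as the vdsmd service and that the 3.x-era ovirtsdk Python bindings are used to talk to the engine; the engine URL, credentials, and host name below are placeholders, not values taken from this bug.

```python
# Sketch of the workaround described above (assumptions noted in the lead-in).
import subprocess

# Step 1: restart vdsm on the affected host (run locally on that host).
subprocess.check_call(['service', 'vdsmd', 'restart'])

# Step 2: move the host to maintenance from the engine side, e.g. with the
# 3.x-era ovirtsdk Python bindings. URL, credentials and host name are placeholders.
from ovirtsdk.api import API

api = API(url='https://engine.example.com', username='admin@internal',
          password='secret', insecure=True)
host = api.hosts.get(name='cyan-vdse.qa.lab.tlv.redhat.com')
host.deactivate()  # the SDK action that requests maintenance mode for a host
```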

Error on engine.log:
=================== 
2014-01-12 16:11:03,988 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-70) [48720200] vm mig_ver running in db and not running in vds - add to rerun treatment. vds cyan-vdse.qa.lab.tlv.redhat.com

Version-Release number of selected component (if applicable):
is30

Additional info:
Please refer to the logs up to 2014-01-12 16:20, when vdsm was restarted and the problem was resolved.

Comment 1 Ilanit Stein 2014-01-14 07:59:44 UTC
VM name: mig_ver

Comment 2 Ilanit Stein 2014-01-14 08:00:38 UTC
Created attachment 849784 [details]
engine.log

Comment 3 Ilanit Stein 2014-01-14 08:02:18 UTC
Created attachment 849785 [details]
source host logs (called cyan-vdse...)

Comment 4 Ilanit Stein 2014-01-14 08:10:38 UTC
Created attachment 849786 [details]
destination host (silver-vdsc...)

Comment 5 Vinzenz Feenstra [evilissimo] 2014-02-04 11:36:32 UTC
Caused by the same issue as BZ#1052097

When a VM is created on vdsm for RHEV 3.3, it cannot be used properly by previous versions (e.g. RHEV 3.2).

*** This bug has been marked as a duplicate of bug 1052097 ***

Comment 6 Vinzenz Feenstra [evilissimo] 2014-02-04 12:09:53 UTC
After thinking about it once more and comparing the results in BZ#1052097, I realize it is not a duplicate.

Comment 7 Francesco Romani 2014-03-27 09:04:58 UTC
*** Bug 1080536 has been marked as a duplicate of this bug. ***

Comment 9 Vinzenz Feenstra [evilissimo] 2014-03-31 07:05:32 UTC
This issue should NOT exist in RHEV 3.3 and higher.
Previous versions failed while preparing the vmChannels, and the resulting exception caused this behaviour.

I am closing this bug as current release.

Fixing this bug would require a RHEV 3.2 z-stream fix; since there are currently no plans for a new z-stream and we have no customer reports regarding this issue at the moment, the fix is not considered required.
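
To illustrate the failure mode described in comment 9, here is a purely hypothetical Python sketch (not vdsm's actual code; all class, method, and configuration names are invented): an exception that escapes while a VM's channels are being prepared leaves a stale VM entry on the host, so the host keeps reporting the VM and refuses maintenance until the service is restarted.

```python
# Hypothetical illustration of the failure mode (not vdsm code): an exception
# raised while preparing a VM's channels leaves a stale entry in the host's
# VM table, so the host still "thinks" it owns the VM.
class Host:
    def __init__(self):
        self.vms = {}  # vm_id -> state

    def create_vm(self, vm_id, conf):
        self.vms[vm_id] = 'WaitForLaunch'  # VM is registered before setup finishes
        self._prepare_channels(conf)       # raises for configs written by a newer vdsm
        self.vms[vm_id] = 'Up'

    def _prepare_channels(self, conf):
        # Placeholder for the real 3.3-vs-3.2 channel incompatibility.
        if conf.get('channels') == 'new-format':
            raise ValueError('unsupported channel configuration')

    def can_enter_maintenance(self):
        # Maintenance is refused while any VM entry remains.
        return not self.vms


host = Host()
try:
    host.create_vm('mig_ver', {'channels': 'new-format'})
except ValueError:
    pass  # the exception escapes, but the stale entry stays behind
print(host.can_enter_maintenance())  # False until the service is restarted
```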