Bug 1154088 - [rhel7] Failed to power off a VM after it was migrated from RHEL6.6 host to RHEL7.0 host
Summary: [rhel7] Failed to power off a VM after it was migrated from RHEL6.6 host to R...
Keywords:
Status: CLOSED DUPLICATE of bug 1152973
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Francesco Romani
QA Contact: meital avital
URL:
Whiteboard: virt
Depends On:
Blocks:
 
Reported: 2014-10-17 13:37 UTC by Jiri Belka
Modified: 2014-11-03 12:42 UTC (History)
8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-10-20 11:42:32 UTC
oVirt Team: ---
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
sosreport-LogCollector-20141017153902.tar.xz (12.71 MB, application/x-xz)
2014-10-17 13:59 UTC, Jiri Belka
no flags
sosreport logs from rhel70 host (8.99 MB, application/x-xz)
2014-10-17 14:03 UTC, Jiri Belka
no flags
rhel70 vdsm.log (2.69 MB, text/plain)
2014-10-17 14:04 UTC, Jiri Belka
no flags

Description Jiri Belka 2014-10-17 13:37:29 UTC
Description of problem:
A VM migrated from RHEL6.6 host to RHEL7.0 host can't be powered off.

1. rhel66 host: poweroff ok
2. rhel70 host: poweroff ok
3. VM migrated rhel66 -> rhel70: poweroff fails

~~~
Thread-314::ERROR::2014-10-17 15:16:11,047::__init__::491::jsonrpc.JsonRpcServer::(_serveRequest) Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 486, in _serveRequest
    res = method(**params)
  File "/usr/share/vdsm/rpc/Bridge.py", line 266, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 338, in destroy
    res = v.destroy()
  File "/usr/share/vdsm/virt/vm.py", line 4921, in destroy
    response = self.doDestroy()
  File "/usr/share/vdsm/virt/vm.py", line 4938, in doDestroy
    return self.releaseVm()
  File "/usr/share/vdsm/virt/vm.py", line 4839, in releaseVm
    supervdsm.getProxy().removeFs(drive.path)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in removeFs
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
Exception: Cannot remove Fs that does not exists in: /var/run/vdsm/payload
~~~
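
The failing call is supervdsm's removeFs on the payload path. As a hedged illustration only (not the actual vdsm code; remove_payload and its payload_dir parameter are hypothetical names), a cleanup helper of this shape would still refuse paths outside the payload directory but tolerate a file that never existed on this host, which is the situation after a migration whose payload stayed on the source:

```python
import errno
import os

def remove_payload(path, payload_dir="/var/run/vdsm/payload"):
    # Hypothetical sketch: only paths inside the payload directory
    # may be removed, mirroring the guard the traceback suggests.
    if not os.path.abspath(path).startswith(payload_dir + os.sep):
        raise Exception("Cannot remove Fs that does not exists in: %s"
                        % payload_dir)
    try:
        os.unlink(path)
    except OSError as e:
        # A payload that was never created on this host (e.g. after an
        # incoming migration) is not an error; anything else still is.
        if e.errno != errno.ENOENT:
            raise
```

A second call on an already-removed payload then becomes a no-op instead of killing the destroy flow.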

plus journalctl -u libvirtd --no-pager

~~~
Oct 17 15:15:30 dell-r210ii-04.rhev.lab.eng.brq.redhat.com libvirtd[18347]: Failed to open file '/var/run/vdsm/payload/fbec56ea-3f03-4f0b-a33e-598e24725350.b77126d9f4...irectory
Oct 17 15:15:31 dell-r210ii-04.rhev.lab.eng.brq.redhat.com libvirtd[18347]: This thread seems to be the async job owner; entering monitor without asking for a nested ...angerous
~~~

4. rhel70 -> rhel66: migration blocked, migration does not work

Version-Release number of selected component (if applicable):
vdsm-4.16.7-1.el7.x86_64
libvirt-daemon-1.1.1-29.el7_0.3.x86_64
qemu-kvm-rhev-1.5.3-60.el7_0.10.x86_64

How reproducible:
100%

Steps to Reproduce:
1. have rhel66 and rhel70 hosts
2. create __NEW__ vm and start it on rhel66 host
3. migrate this VM to rhel70 host
4. poweroff the VM

Actual results:
failed to power off the VM

Expected results:
should work

Additional info:
as noted above, the "static" case works (i.e. powering off VMs that were not migrated works fine on the rhel70 host)

important: when the VM is first started on rhel70, then powered off, then started on rhel66, successfully migrated to the rhel70 host and powered off again, it works! Thus step 2 of the reproduction requires a __NEW__ VM.

Comment 1 Jiri Belka 2014-10-17 13:59:24 UTC
Created attachment 947897 [details]
sosreport-LogCollector-20141017153902.tar.xz

INFO: Gathering information from selected hypervisors...
INFO: collecting information from 10.34.63.222
INFO: collecting information from 10.34.63.223
ERROR: Failed to collect logs from: 10.34.63.223; Could not parse sosreport output to determine filenam
^^^ ooops :)

Comment 2 Jiri Belka 2014-10-17 14:03:40 UTC
Created attachment 947899 [details]
sosreport logs from rhel70 host

sosreport logs from rhel70 host

Comment 3 Jiri Belka 2014-10-17 14:04:41 UTC
Created attachment 947900 [details]
rhel70 vdsm.log

rhel70 vdsm.log as not all is in journalctl

Comment 4 Omer Frenkel 2014-10-19 08:47:35 UTC
might be duplicate of bug 1152973

Comment 5 Francesco Romani 2014-10-20 11:42:32 UTC
(In reply to Omer Frenkel from comment #4)
> might be duplicate of bug 1152973

It is.
The key point is

2. create __NEW__ vm and start it on rhel66 host

because this implies the use of cloud-init, which causes all the pain in migrations and all the failures reported.
If you do not use cloud-init, migration works like a charm.

If you use cloud-init with existing VMs as well, they will fail as reported.

*** This bug has been marked as a duplicate of bug 1152973 ***

