Bug 855922 - vdsm: vm's fail to migrate from host with vdsm-4.9-113.3 to host with vdsm-4.9.6-32 due to "can't set attribute" error (cluster level still 3.0)
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Hardware/OS: x86_64 Linux
Severity: urgent
Target Milestone: rc
Assigned To: Saveliev Peter
Keywords: Regression, ZStream
Depends On:
Reported: 2012-09-10 11:36 EDT by Dafna Ron
Modified: 2014-01-12 19:54 EST (History)
CC: 14 users

See Also:
Fixed In Version: vdsm-4.9.6-37.0
Doc Type: Bug Fix
Doc Text:
Previously, when attempting to upgrade the VDSM from 3.0 to 3.1, migrating virtual machines from 3.0 hosts failed with an AttributeError. This has been corrected so that virtual machines can be migrated from hosts running VDSM 3.0 to hosts running VDSM 3.1.
Story Points: ---
Clone Of:
Last Closed: 2012-12-04 14:11:24 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
logs (1.41 MB, application/x-gzip), 2012-09-10 11:36 EDT, Dafna Ron
vdsm-37 migration failure (631.57 KB, application/x-xz), 2012-10-15 12:31 EDT, Yaniv Kaul

Description Dafna Ron 2012-09-10 11:36:43 EDT
Created attachment 611474 [details]

Description of problem:

I'm testing an upgrade from 3.0 to 3.1.
After upgrading vdsm on the host, I tried migrating the VMs from the host running the 3.0 vdsm to the host with the 3.1 vdsm installed, and the migration fails with an AttributeError.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a 3.0 DC + cluster with running VMs.
2. Migrate or run all VMs on the SPM.
3. Put the HSM host in maintenance.
4. yum update vdsm to vdsm-4.9.6-32 and restart vdsm.
5. Activate the host and try to migrate the VMs.
Actual results:

Migration fails with an AttributeError.

Expected results:

We should be able to migrate the VMs.

Additional info: vdsm log
Comment 2 Haim 2012-09-10 12:02:04 EDT
We added some debug prints to the code to understand the root cause, and it appears that the drives list sent by the older vdsm during migration doesn't contain _blockDev (it is set to None instead of False).

Thread-14::DEBUG::2012-09-10 18:03:20,523::vm::60::vm.Vm::(__init__) vmId=`9d1d5017-72ca-4c7c-9f66-bf9fdfce9419`::### args = {'poolID': '854f5121-8550-4582-bb73-fce7d57a569c', 'needExtend': False, 'if': 'ide', 'index': '0', 'iface': 'ide', 'apparentsize': '1073741824', 'imageID': 'c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005', 'readonly': 'False', 'shared': False, 'truesize': '1073741824', 'type': 'disk', 'domainID': '8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76', 'reqsize': '0', 'format': 'cow', 'address': {u'bus': u'0', u'controller': u'0', u'type': u'drive', u'target': u'0', u'unit': u'0'}, 'device': 'disk', 'path': '/rhev/data-center/854f5121-8550-4582-bb73-fce7d57a569c/8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76/images/c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005/cf70ebdc-bd1c-4738-afaf-2908768ddeec', 'propagateErrors': 'off', 'name': u'hda', 'volumeID': 'cf70ebdc-bd1c-4738-afaf-2908768ddeec', 'alias': u'ide0-0-0', 'volumeChain': [{'path': '/rhev/data-center/854f5121-8550-4582-bb73-fce7d57a569c/8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76/images/c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005/e886fb6e-f91f-46ae-a488-0f24e47f5731', 'domainID': '8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76', 'volumeID': 'e886fb6e-f91f-46ae-a488-0f24e47f5731', 'imageID': 'c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005'}, {'path': '/rhev/data-center/854f5121-8550-4582-bb73-fce7d57a569c/8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76/images/c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005/cf70ebdc-bd1c-4738-afaf-2908768ddeec', 'domainID': '8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76', 'volumeID': 'cf70ebdc-bd1c-4738-afaf-2908768ddeec', 'imageID': 'c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005'}]}
Thread-14::INFO::2012-09-10 18:03:20,524::vm::584::vm.Vm::(_startUnderlyingVm) vmId=`9d1d5017-72ca-4c7c-9f66-bf9fdfce9419`::Skipping errors on recovery
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 577, in _startUnderlyingVm
  File "/usr/share/vdsm/libvirtvm.py", line 1358, in _run
    self._devices[devType].append(devClass(self.conf, self.log, **dev))
  File "/usr/share/vdsm/libvirtvm.py", line 996, in __init__
    LibvirtVmDevice.__init__(self, conf, log, **kwargs)
  File "/usr/share/vdsm/vm.py", line 62, in __init__
    log.debug('### obj =  %s , attr = %s, value = %s', self,attr,value)
  File "/usr/lib64/python2.6/logging/__init__.py", line 1305, in debug
    self.logger.debug(msg, *args, **kwargs)
  File "/usr/lib64/python2.6/logging/__init__.py", line 1044, in debug
    self._log(DEBUG, msg, args, **kwargs)
  File "/usr/lib64/python2.6/logging/__init__.py", line 1173, in _log
  File "/usr/lib64/python2.6/logging/__init__.py", line 1183, in handle
  File "/usr/lib64/python2.6/logging/__init__.py", line 1220, in callHandlers
  File "/usr/lib64/python2.6/logging/__init__.py", line 679, in handle
  File "/usr/lib64/python2.6/logging/handlers.py", line 776, in emit
    msg = self.format(record)
  File "/usr/lib64/python2.6/logging/__init__.py", line 654, in format
    return fmt.format(record)
  File "/usr/lib64/python2.6/logging/__init__.py", line 436, in format
    record.message = record.getMessage()
  File "/usr/lib64/python2.6/logging/__init__.py", line 306, in getMessage
    msg = msg % self.args
  File "/usr/share/vdsm/vm.py", line 69, in __str__
    if not a.startswith('__')]
  File "/usr/share/vdsm/libvirtvm.py", line 1012, in blockDev
    if self._blockDev is None:
AttributeError: 'Drive' object has no attribute '_blockDev'
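[Editor's note] The failure mode in the traceback can be reproduced with a minimal, self-contained sketch (hypothetical class and attribute names modeled on the traceback, not the actual vdsm code): the conf dict sent by the 3.0 host never causes `_blockDev` to be assigned, and a `__str__` that walks `dir()` then trips the `blockDev` property getter while the log message is being formatted.

```python
class Drive(object):
    def __init__(self, **kwargs):
        # The 3.0 peer's conf dict never leads to _blockDev being assigned.
        for attr, value in kwargs.items():
            setattr(self, attr, value)

    @property
    def blockDev(self):
        # Raises AttributeError when _blockDev was never set in __init__.
        if self._blockDev is None:
            self._blockDev = False  # stand-in for utils.isBlockDevice(self.path)
        return self._blockDev

    def __str__(self):
        # Logging the object walks every public attribute, including the
        # blockDev property, which is how the error surfaces inside log.debug.
        attrs = [a for a in dir(self) if not a.startswith('__')]
        return ', '.join('%s=%s' % (a, getattr(self, a)) for a in attrs)

drive = Drive(path='/rhev/data-center/example-path', device='disk')
try:
    str(drive)
except AttributeError as e:
    print(e)  # 'Drive' object has no attribute '_blockDev'
```

Note that the error only appears once something formats the object, which is why it surfaced inside the logging call rather than at construction time.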
Comment 4 Gal Hammer 2012-09-12 10:55:31 EDT
Dafna, can you please see if this patch fixes the problem (replacing None with an empty string)?

$ git diff
diff --git a/vdsm/libvirtvm.py b/vdsm/libvirtvm.py
index 3306cc2..3e793b6 100644
--- a/vdsm/libvirtvm.py
+++ b/vdsm/libvirtvm.py
@@ -1014,13 +1014,13 @@ class Drive(LibvirtVmDevice):
         if self.device in ("cdrom", "floppy"):
             self._blockDev = False
-            self._blockDev = None
+            self._blockDev = ""


     def blockDev(self):
-        if self._blockDev is None:
+        if self._blockDev is "":
                 self._blockDev = utils.isBlockDevice(self.path)
Comment 7 Dafna Ron 2012-09-19 04:14:11 EDT
Created attachment 614243 [details]
log and libvirtvm.py with change

Tested the change shown in the diff in comment 4, and the VMs still fail to migrate with the AttributeError.

Attaching the logs and the modified libvirtvm.py.
Comment 8 Saveliev Peter 2012-09-25 03:20:41 EDT
In this case VDSM is trying to set the read-only property blockDev of the Drive class. We have two ways to fix it:

http://gerrit.ovirt.org/#/c/8185/ — make the property read-write
http://gerrit.ovirt.org/#/c/8186/ — skip all read-only properties
Comment 9 Saveliev Peter 2012-09-26 08:06:43 EDT
The second way has been chosen: http://gerrit.ovirt.org/#/c/8186/
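[Editor's note] The chosen approach can be illustrated with a small sketch (assumed names; the actual change is in the gerrit link above): when a device object is populated from the conf dict sent by the peer, any key that resolves to a read-only property on the class is skipped instead of being passed to setattr, which would otherwise raise "can't set attribute".

```python
class Device(object):
    def __init__(self, **kwargs):
        for attr, value in kwargs.items():
            # A class-level property without a setter is read-only; blindly
            # calling setattr() on it raises AttributeError: can't set attribute.
            prop = getattr(type(self), attr, None)
            if isinstance(prop, property) and prop.fset is None:
                continue
            setattr(self, attr, value)

class Drive(Device):
    @property
    def blockDev(self):
        # Computed lazily; defaults to False if never determined.
        return getattr(self, '_blockDev', False)

# 'blockDev' arrives in the conf dict from the peer but is silently skipped,
# so construction no longer fails.
d = Drive(path='/dev/vda', blockDev=True)
print(d.path)      # /dev/vda
print(d.blockDev)  # False (the property, not the incoming conf value)
```

The advantage over the read-write alternative is that it handles every read-only property uniformly, not just blockDev.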
Comment 11 Michal Skrivanek 2012-10-01 05:15:07 EDT
Merged upstream as 40a9289e92ebc05ec3b3a42b4ac650f12d5ba5b2.
Comment 14 Yaniv Kaul 2012-10-15 12:27:44 EDT
vdsm -37: almost (perhaps a separate bug?)
Thread-202164::INFO::2012-09-27 17:05:54,058::libvirtvm::367::vm.Vm::(run) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::Migration Progress: 60 seconds elapsed, 88% of data processed, 88% of mem processed
Thread-202163::DEBUG::2012-09-27 17:05:54,063::libvirtvm::316::vm.Vm::(run) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::setting migration downtime to
libvirtEventLoop::DEBUG::2012-09-27 17:05:56,294::libvirtvm::1401::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::event Suspended detail 0 opaque None
Thread-202160::DEBUG::2012-09-27 17:05:59,352::libvirtvm::322::vm.Vm::(cancel) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::canceling migration downtime thread
Thread-298::ERROR::2012-09-27 17:05:59,353::utils::377::vm.Vm::(collect) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::Stats function failed: <AdvancedStatsFunction _sampleDiskLatency at 0x1509070>
Traceback (most recent call last):
  File "/usr/share/vdsm/utils.py", line 373, in collect
  File "/usr/share/vdsm/utils.py", line 250, in __call__
    retValue = self._function(*args, **kwargs)
  File "/usr/share/vdsm/libvirtvm.py", line 142, in _sampleDiskLatency
  File "/usr/lib64/python2.6/site-packages/libvirt_qemu.py", line 37, in qemuMonitorCommand
    if ret is None: raise libvirt.libvirtError('virDomainQemuMonitorCommand() failed')
libvirtError: Requested operation is not valid: domain is not running
libvirtEventLoop::DEBUG::2012-09-27 17:05:59,353::libvirtvm::1401::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::event Stopped detail 3 opaque None
Thread-202160::DEBUG::2012-09-27 17:05:59,353::libvirtvm::372::vm.Vm::(stop) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::stopping migration monitor thread
Thread-202163::DEBUG::2012-09-27 17:05:59,354::libvirtvm::319::vm.Vm::(run) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::migration downtime thread exiting
Thread-202160::DEBUG::2012-09-27 17:05:59,384::vm::742::vm.Vm::(setDownStatus) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::Changed state to Down: Migration succeeded
Thread-202160::DEBUG::2012-09-27 17:05:59,384::utils::314::vm.Vm::(stop) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::Stop statistics collection
Thread-298::DEBUG::2012-09-27 17:05:59,385::utils::345::vm.Vm::(run) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::Stats thread finished
Comment 15 Yaniv Kaul 2012-10-15 12:30:26 EDT
For above:
[root@master-vds1 ~]# rpm -qa |grep -E "vdsm|libvirt" |sort
Comment 16 Yaniv Kaul 2012-10-15 12:31:53 EDT
Created attachment 627554 [details]
vdsm-37 migration failure
Comment 17 Yaniv Kaul 2012-10-16 03:57:23 EDT
Comment on attachment 627554 [details]
vdsm-37 migration failure

Disregard - it looks like the VDSM that caused the failure was still -36, not -37.
Will verify again.
Comment 18 Yaniv Kaul 2012-10-16 05:26:24 EDT
OK, re-verified - works OK with vdsm-37 (migrated from source running -112).
Comment 21 errata-xmlrpc 2012-12-04 14:11:24 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

