Bug 855922
| Summary: | vdsm: vm's fail to migrate from host with vdsm-4.9-113.3 to host with vdsm-4.9.6-32 due to "can't set attribute" error (cluster level still 3.0) | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Dafna Ron <dron> |
| Component: | vdsm | Assignee: | Saveliev Peter <peet> |
| Status: | CLOSED ERRATA | QA Contact: | Haim <hateya> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 6.3 | CC: | abaron, amureini, bazulay, danken, ewarszaw, fsimonce, iheim, ilvovsky, jbiddle, lpeer, michal.skrivanek, peet, yeylon, ykaul |
| Target Milestone: | rc | Keywords: | Regression, ZStream |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | virt | | |
| Fixed In Version: | vdsm-4.9.6-37.0 | Doc Type: | Bug Fix |
| Doc Text: | Previously, when upgrading VDSM from 3.0 to 3.1, migrating virtual machines from 3.0 hosts failed with an AttributeError. This has been corrected so that virtual machines can be migrated from hosts running VDSM 3.0 to hosts running VDSM 3.1. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2012-12-04 19:11:24 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
We added some debug prints to the code to understand the root cause, and it appears that the drive list sent during migration by the older VDSM does not contain _blockDev (it is set to None instead of False).
Thread-14::DEBUG::2012-09-10 18:03:20,523::vm::60::vm.Vm::(__init__) vmId=`9d1d5017-72ca-4c7c-9f66-bf9fdfce9419`::### args = {'poolID': '854f5121-8550-4582-bb73-fce7d57a569c', 'needExtend': False, 'if': 'ide', 'index': '0', 'iface': 'ide', 'apparentsize': '1073741824', 'imageID': 'c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005', 'readonly': 'False', 'shared': False, 'truesize': '1073741824', 'type': 'disk', 'domainID': '8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76', 'reqsize': '0', 'format': 'cow', 'address': {u'bus': u'0', u'controller': u'0', u'type': u'drive', u'target': u'0', u'unit': u'0'}, 'device': 'disk', 'path': '/rhev/data-center/854f5121-8550-4582-bb73-fce7d57a569c/8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76/images/c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005/cf70ebdc-bd1c-4738-afaf-2908768ddeec', 'propagateErrors': 'off', 'name': u'hda', 'volumeID': 'cf70ebdc-bd1c-4738-afaf-2908768ddeec', 'alias': u'ide0-0-0', 'volumeChain': [{'path': '/rhev/data-center/854f5121-8550-4582-bb73-fce7d57a569c/8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76/images/c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005/e886fb6e-f91f-46ae-a488-0f24e47f5731', 'domainID': '8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76', 'volumeID': 'e886fb6e-f91f-46ae-a488-0f24e47f5731', 'imageID': 'c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005'}, {'path': '/rhev/data-center/854f5121-8550-4582-bb73-fce7d57a569c/8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76/images/c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005/cf70ebdc-bd1c-4738-afaf-2908768ddeec', 'domainID': '8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76', 'volumeID': 'cf70ebdc-bd1c-4738-afaf-2908768ddeec', 'imageID': 'c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005'}]}
Thread-14::INFO::2012-09-10 18:03:20,524::vm::584::vm.Vm::(_startUnderlyingVm) vmId=`9d1d5017-72ca-4c7c-9f66-bf9fdfce9419`::Skipping errors on recovery
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 577, in _startUnderlyingVm
self._run()
File "/usr/share/vdsm/libvirtvm.py", line 1358, in _run
self._devices[devType].append(devClass(self.conf, self.log, **dev))
File "/usr/share/vdsm/libvirtvm.py", line 996, in __init__
LibvirtVmDevice.__init__(self, conf, log, **kwargs)
File "/usr/share/vdsm/vm.py", line 62, in __init__
log.debug('### obj = %s , attr = %s, value = %s', self,attr,value)
File "/usr/lib64/python2.6/logging/__init__.py", line 1305, in debug
self.logger.debug(msg, *args, **kwargs)
File "/usr/lib64/python2.6/logging/__init__.py", line 1044, in debug
self._log(DEBUG, msg, args, **kwargs)
File "/usr/lib64/python2.6/logging/__init__.py", line 1173, in _log
self.handle(record)
File "/usr/lib64/python2.6/logging/__init__.py", line 1183, in handle
self.callHandlers(record)
File "/usr/lib64/python2.6/logging/__init__.py", line 1220, in callHandlers
hdlr.handle(record)
File "/usr/lib64/python2.6/logging/__init__.py", line 679, in handle
self.emit(record)
File "/usr/lib64/python2.6/logging/handlers.py", line 776, in emit
msg = self.format(record)
File "/usr/lib64/python2.6/logging/__init__.py", line 654, in format
return fmt.format(record)
File "/usr/lib64/python2.6/logging/__init__.py", line 436, in format
record.message = record.getMessage()
File "/usr/lib64/python2.6/logging/__init__.py", line 306, in getMessage
msg = msg % self.args
File "/usr/share/vdsm/vm.py", line 69, in __str__
if not a.startswith('__')]
File "/usr/share/vdsm/libvirtvm.py", line 1012, in blockDev
if self._blockDev is None:
AttributeError: 'Drive' object has no attribute '_blockDev'
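The traceback can be reproduced outside VDSM. Below is a minimal sketch (the class is an illustrative stand-in, not the real Drive code): a read-only property whose getter reads a backing attribute that the old host's migration payload never supplied, evaluated by a `__str__` that dumps all attributes, exactly as the debug print in vm.py does:

```python
class Drive(object):
    """Illustrative stand-in for vdsm's Drive device class."""

    def __init__(self, **kwargs):
        # Mimics how device classes copy the migration payload onto self;
        # a payload from an old cluster may simply omit '_blockDev'.
        for attr, value in kwargs.items():
            setattr(self, attr, value)

    @property
    def blockDev(self):
        # The getter assumes the backing attribute always exists.
        if self._blockDev is None:
            self._blockDev = False  # placeholder for utils.isBlockDevice(path)
        return self._blockDev

    def __str__(self):
        # Like vm.py's __str__: dumps every non-dunder attribute, which
        # evaluates the property and trips over the missing backing field.
        attrs = [a for a in dir(self) if not a.startswith('__')]
        return ', '.join('%s=%s' % (a, getattr(self, a)) for a in attrs)


drive = Drive(device='disk', path='/dev/null')  # no '_blockDev' in payload
try:
    str(drive)
except AttributeError as e:
    print('reproduced:', e)
```

Logging the object lazily (`log.debug('### obj = %s ...', self, ...)`) is what pulls `__str__` into the formatting path, which is why the error surfaces deep inside the logging module.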
Dafna, can you please see if this patch fixes the problem (replacing None with an empty string)?
$ git diff
diff --git a/vdsm/libvirtvm.py b/vdsm/libvirtvm.py
index 3306cc2..3e793b6 100644
--- a/vdsm/libvirtvm.py
+++ b/vdsm/libvirtvm.py
@@ -1014,13 +1014,13 @@ class Drive(LibvirtVmDevice):
if self.device in ("cdrom", "floppy"):
self._blockDev = False
else:
- self._blockDev = None
+ self._blockDev = ""
self._customize()
@property
def blockDev(self):
- if self._blockDev is None:
+ if self._blockDev is "":
try:
self._blockDev = utils.isBlockDevice(self.path)
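One note on the patch above: `self._blockDev is ""` compares object identity rather than value. Whether two equal strings are the same object is a CPython interning detail, so `is` against a string literal is unreliable (and raises a SyntaxWarning on modern Python); `==`, or better a dedicated sentinel object, is the robust test. A quick illustration:

```python
# 'is' tests identity, '==' tests equality. Equal strings are not
# guaranteed to be the same object, so sentinel checks against string
# literals should use '==' (or better, a unique sentinel object).
a = "block"
b = "".join(["blo", "ck"])   # equal value, built at runtime
print(a == b)                # -> True
# 'a is b' depends on interning and may be False; never rely on it.

# A dedicated sentinel avoids the ambiguity entirely:
UNKNOWN = object()
value = UNKNOWN
if value is UNKNOWN:         # identity is exact for a unique object
    value = "resolved"
print(value)                 # -> resolved
```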
Created attachment 614243 [details]
log and libvirtvm.py with change

Tested the change as shown in the diff of comment 4, and the VMs still failed to migrate on the attribute error. Attaching logs and the libvirtvm.py that was changed.

In this case VDSM is trying to set the read-only property blockDev of the Drive class. We have two ways to fix it:

http://gerrit.ovirt.org/#/c/8185/ — make the property read-write
http://gerrit.ovirt.org/#/c/8186/ — skip all read-only properties

The second way has been chosen: http://gerrit.ovirt.org/#/c/8186/ merged u/s as 40a9289e92ebc05ec3b3a42b4ac650f12d5ba5b2

vdsm -37: almost (perhaps a separate bug?)
Thread-202164::INFO::2012-09-27 17:05:54,058::libvirtvm::367::vm.Vm::(run) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::Migration Progress: 60 seconds elapsed, 88% of data processed, 88% of mem processed
Thread-202163::DEBUG::2012-09-27 17:05:54,063::libvirtvm::316::vm.Vm::(run) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::setting migration downtime to 200
libvirtEventLoop::DEBUG::2012-09-27 17:05:56,294::libvirtvm::1401::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::event Suspended detail 0 opaque None
Thread-202160::DEBUG::2012-09-27 17:05:59,352::libvirtvm::322::vm.Vm::(cancel) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::canceling migration downtime thread
Thread-298::ERROR::2012-09-27 17:05:59,353::utils::377::vm.Vm::(collect) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::Stats function failed: <AdvancedStatsFunction _sampleDiskLatency at 0x1509070>
Traceback (most recent call last):
File "/usr/share/vdsm/utils.py", line 373, in collect
statsFunction()
File "/usr/share/vdsm/utils.py", line 250, in __call__
retValue = self._function(*args, **kwargs)
File "/usr/share/vdsm/libvirtvm.py", line 142, in _sampleDiskLatency
libvirt_qemu.VIR_DOMAIN_QEMU_MONITOR_COMMAND_DEFAULT)
File "/usr/lib64/python2.6/site-packages/libvirt_qemu.py", line 37, in qemuMonitorCommand
if ret is None: raise libvirt.libvirtError('virDomainQemuMonitorCommand() failed')
libvirtError: Requested operation is not valid: domain is not running
libvirtEventLoop::DEBUG::2012-09-27 17:05:59,353::libvirtvm::1401::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::event Stopped detail 3 opaque None
Thread-202160::DEBUG::2012-09-27 17:05:59,353::libvirtvm::372::vm.Vm::(stop) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::stopping migration monitor thread
Thread-202163::DEBUG::2012-09-27 17:05:59,354::libvirtvm::319::vm.Vm::(run) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::migration downtime thread exiting
Thread-202160::DEBUG::2012-09-27 17:05:59,384::vm::742::vm.Vm::(setDownStatus) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::Changed state to Down: Migration succeeded
Thread-202160::DEBUG::2012-09-27 17:05:59,384::utils::314::vm.Vm::(stop) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::Stop statistics collection
Thread-298::DEBUG::2012-09-27 17:05:59,385::utils::345::vm.Vm::(run) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::Stats thread finished
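For reference, the approach merged upstream (gerrit change 8186: skip all read-only properties when applying the migration payload) can be sketched roughly as follows; the class and attribute names are illustrative stand-ins, not the exact merged code:

```python
class Device(object):
    """Illustrative stand-in for vdsm's device base class."""

    def __init__(self, **kwargs):
        for attr, value in kwargs.items():
            # Skip attributes that are read-only properties on the class:
            # setattr() on them would raise AttributeError ("can't set
            # attribute"), e.g. when an old host's payload carries a
            # computed field such as 'blockDev'.
            class_attr = getattr(type(self), attr, None)
            if isinstance(class_attr, property) and class_attr.fset is None:
                continue
            setattr(self, attr, value)


class Drive(Device):
    def __init__(self, **kwargs):
        self._blockDev = None
        super(Drive, self).__init__(**kwargs)

    @property
    def blockDev(self):  # read-only: no setter defined
        return self._blockDev


# A payload that includes the computed 'blockDev' field no longer breaks
# construction; the read-only property is simply skipped.
drive = Drive(device='disk', path='/dev/null', blockDev=False)
print(drive.blockDev)  # -> None (backing field untouched by the payload)
```

Making the property read-write (the alternative in gerrit 8185) would also have worked, but skipping read-only properties keeps the class's invariants intact for all such fields at once.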
For above:

[root@master-vds1 ~]# rpm -qa | grep -E "vdsm|libvirt" | sort
libvirt-0.9.10-21.el6_3.5.x86_64
libvirt-client-0.9.10-21.el6_3.5.x86_64
libvirt-python-0.9.10-21.el6_3.5.x86_64
vdsm-4.9.6-37.0.el6_3.x86_64
vdsm-cli-4.9.6-37.0.el6_3.noarch
vdsm-python-4.9.6-37.0.el6_3.x86_64

Created attachment 627554 [details]
vdsm-37 migration failure
Comment on attachment 627554 [details]
vdsm-37 migration failure
Disregard - it looks like the VDSM that caused the failure was still -36, not -37.
Will verify again.
OK, re-verified - works OK with vdsm-37 (migrated from a source running -112).

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-1508.html
Created attachment 611474 [details]
logs

Description of problem:
I'm testing the upgrade from 3.0 to 3.1. After upgrading VDSM on the host, I tried migrating the VMs from the host with the 3.0 VDSM to the host installed with the 3.1 VDSM, and the migration fails with an AttributeError.

Version-Release number of selected component (if applicable):
si17
vdsm-4.9.6-32.0.el6_3.x86_64
vdsm-4.9-113.3.el6_3.x86_64

How reproducible:
100%

Steps to Reproduce:
1. create a 3.0 DC + Cluster with running VMs
2. migrate or run all VMs on the SPM
3. put the HSM host in maintenance
4. yum update vdsm to vdsm-4.9.6-32 + vdsm restart
5. activate the host and try to migrate the VMs

Actual results:
we are getting an AttributeError

Expected results:
we should be able to migrate the VMs

Additional info:
vdsm log