Bug 855922 - vdsm: vm's fail to migrate from host with vdsm-4.9-113.3 to host with vdsm-4.9.6-32 due to "can't set attribute" error (cluster level still 3.0)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.3
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Saveliev Peter
QA Contact: Haim
URL:
Whiteboard: virt
Depends On:
Blocks:
 
Reported: 2012-09-10 15:36 UTC by Dafna Ron
Modified: 2022-07-09 05:39 UTC
CC List: 14 users

Fixed In Version: vdsm-4.9.6-37.0
Doc Type: Bug Fix
Doc Text:
Previously, when attempting to upgrade the VDSM from 3.0 to 3.1, migrating virtual machines from 3.0 hosts failed with an AttributeError. This has been corrected so that virtual machines can be migrated from hosts running VDSM 3.0 to hosts running VDSM 3.1.
Clone Of:
Environment:
Last Closed: 2012-12-04 19:11:24 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
logs (1.41 MB, application/x-gzip), attached 2012-09-10 15:36 UTC by Dafna Ron
vdsm-37 migration failure (631.57 KB, application/x-xz), attached 2012-10-15 16:31 UTC by Yaniv Kaul


Links
Red Hat Product Errata RHSA-2012:1508 (normal, SHIPPED_LIVE): Important: rhev-3.1.0 vdsm security, bug fix, and enhancement update (last updated 2012-12-04 23:48:05 UTC)

Description Dafna Ron 2012-09-10 15:36:43 UTC
Created attachment 611474 [details]
logs

Description of problem:

I'm testing the upgrade from 3.0 to 3.1.
After upgrading vdsm on one host, I tried migrating the VMs from the host running the 3.0 vdsm to the host running the 3.1 vdsm, and the migrations fail with an AttributeError.

Version-Release number of selected component (if applicable):

si17
vdsm-4.9.6-32.0.el6_3.x86_64
vdsm-4.9-113.3.el6_3.x86_64

How reproducible:

100%

Steps to Reproduce:
1. Create a 3.0 DC + cluster with running VMs.
2. Migrate or run all VMs on the SPM.
3. Put the HSM host into maintenance.
4. yum update vdsm to vdsm-4.9.6-32 and restart vdsm.
5. Activate the host and try to migrate the VMs.
  
Actual results:

We get an AttributeError and the migration fails.

Expected results:

We should be able to migrate the VMs.

Additional info: vdsm log

Comment 2 Haim 2012-09-10 16:02:04 UTC
We added some debug prints to the code to find the root cause. It appears that the drives list sent by the older vdsm during migration doesn't contain _blockDev (it is set to None instead of False).

Thread-14::DEBUG::2012-09-10 18:03:20,523::vm::60::vm.Vm::(__init__) vmId=`9d1d5017-72ca-4c7c-9f66-bf9fdfce9419`::### args = {'poolID': '854f5121-8550-4582-bb73-fce7d57a569c', 'needExtend': False, 'if': 'ide', 'index': '0', 'iface': 'ide', 'apparentsize': '1073741824', 'imageID': 'c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005', 'readonly': 'False', 'shared': False, 'truesize': '1073741824', 'type': 'disk', 'domainID': '8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76', 'reqsize': '0', 'format': 'cow', 'address': {u'bus': u'0', u'controller': u'0', u'type': u'drive', u'target': u'0', u'unit': u'0'}, 'device': 'disk', 'path': '/rhev/data-center/854f5121-8550-4582-bb73-fce7d57a569c/8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76/images/c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005/cf70ebdc-bd1c-4738-afaf-2908768ddeec', 'propagateErrors': 'off', 'name': u'hda', 'volumeID': 'cf70ebdc-bd1c-4738-afaf-2908768ddeec', 'alias': u'ide0-0-0', 'volumeChain': [{'path': '/rhev/data-center/854f5121-8550-4582-bb73-fce7d57a569c/8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76/images/c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005/e886fb6e-f91f-46ae-a488-0f24e47f5731', 'domainID': '8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76', 'volumeID': 'e886fb6e-f91f-46ae-a488-0f24e47f5731', 'imageID': 'c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005'}, {'path': '/rhev/data-center/854f5121-8550-4582-bb73-fce7d57a569c/8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76/images/c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005/cf70ebdc-bd1c-4738-afaf-2908768ddeec', 'domainID': '8d366beb-41fe-4a57-b0dd-4b5e1c5cfd76', 'volumeID': 'cf70ebdc-bd1c-4738-afaf-2908768ddeec', 'imageID': 'c84fa1d3-1f95-40b9-bebb-7a6ee4f4e005'}]}
Thread-14::INFO::2012-09-10 18:03:20,524::vm::584::vm.Vm::(_startUnderlyingVm) vmId=`9d1d5017-72ca-4c7c-9f66-bf9fdfce9419`::Skipping errors on recovery
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 577, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1358, in _run
    self._devices[devType].append(devClass(self.conf, self.log, **dev))
  File "/usr/share/vdsm/libvirtvm.py", line 996, in __init__
    LibvirtVmDevice.__init__(self, conf, log, **kwargs)
  File "/usr/share/vdsm/vm.py", line 62, in __init__
    log.debug('### obj =  %s , attr = %s, value = %s', self,attr,value)
  File "/usr/lib64/python2.6/logging/__init__.py", line 1305, in debug
    self.logger.debug(msg, *args, **kwargs)
  File "/usr/lib64/python2.6/logging/__init__.py", line 1044, in debug
    self._log(DEBUG, msg, args, **kwargs)
  File "/usr/lib64/python2.6/logging/__init__.py", line 1173, in _log
    self.handle(record)
  File "/usr/lib64/python2.6/logging/__init__.py", line 1183, in handle
    self.callHandlers(record)
  File "/usr/lib64/python2.6/logging/__init__.py", line 1220, in callHandlers
    hdlr.handle(record)
  File "/usr/lib64/python2.6/logging/__init__.py", line 679, in handle
    self.emit(record)
  File "/usr/lib64/python2.6/logging/handlers.py", line 776, in emit
    msg = self.format(record)
  File "/usr/lib64/python2.6/logging/__init__.py", line 654, in format
    return fmt.format(record)
  File "/usr/lib64/python2.6/logging/__init__.py", line 436, in format
    record.message = record.getMessage()
  File "/usr/lib64/python2.6/logging/__init__.py", line 306, in getMessage
    msg = msg % self.args
  File "/usr/share/vdsm/vm.py", line 69, in __str__
    if not a.startswith('__')]
  File "/usr/share/vdsm/libvirtvm.py", line 1012, in blockDev
    if self._blockDev is None:
AttributeError: 'Drive' object has no attribute '_blockDev'
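
The traceback pins down the mechanism: vm.py logs every attribute it sets during __init__, the lazy log.debug formatting calls the device's __str__, __str__ walks all public attributes, and that evaluates the blockDev property before _blockDev has been assigned. A minimal sketch of this failure mode (simplified stand-ins, not the actual vdsm code):

class Drive(object):
    def __init__(self, **kwargs):
        # Drive dicts sent by a 3.0 host do not include '_blockDev', so
        # the attribute only exists if the caller happens to pass it.
        for attr, value in kwargs.items():
            setattr(self, attr, value)

    @property
    def blockDev(self):
        # Reading the property before _blockDev is assigned raises
        # AttributeError, exactly as in the traceback above.
        if self._blockDev is None:
            self._blockDev = False  # stand-in for utils.isBlockDevice(self.path)
        return self._blockDev

    def __str__(self):
        # Walking every non-dunder attribute evaluates the blockDev
        # property as a side effect; this is what the lazy
        # log.debug('### obj = %s ...', self) call triggers.
        return ', '.join('%s=%s' % (a, getattr(self, a))
                         for a in dir(self) if not a.startswith('__'))

print(Drive(device='disk', path='/dev/hda'))
# AttributeError: 'Drive' object has no attribute '_blockDev'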

Comment 4 Gal Hammer 2012-09-12 14:55:31 UTC
Dafna, can you please see if this patch fixes the problem (replacing None with an empty string)?

$ git diff
diff --git a/vdsm/libvirtvm.py b/vdsm/libvirtvm.py
index 3306cc2..3e793b6 100644
--- a/vdsm/libvirtvm.py
+++ b/vdsm/libvirtvm.py
@@ -1014,13 +1014,13 @@ class Drive(LibvirtVmDevice):
         if self.device in ("cdrom", "floppy"):
             self._blockDev = False
         else:
-            self._blockDev = None
+            self._blockDev = ""

         self._customize()

     @property
     def blockDev(self):
-        if self._blockDev is None:
+        if self._blockDev is "":
             try:
                 self._blockDev = utils.isBlockDevice(self.path)
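
As an aside, the second hunk compares with 'is' rather than '=='. 'is' tests object identity, so self._blockDev is "" relies on the interpreter interning the empty string and is not a reliable emptiness check; a value comparison ('== ""') or a dedicated sentinel object is the robust pattern. A sketch, illustrative only and not the actual vdsm fix:

# A unique sentinel cannot collide with real values such as None,
# False, or "", and identity comparison against it is always reliable.
_UNSET = object()

class Drive(object):
    def __init__(self, device, path=None):
        self.path = path
        if device in ("cdrom", "floppy"):
            self._blockDev = False
        else:
            self._blockDev = _UNSET  # "not determined yet"

    @property
    def blockDev(self):
        if self._blockDev is _UNSET:   # identity against the sentinel
            self._blockDev = False     # stand-in for utils.isBlockDevice(self.path)
        return self._blockDev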

Comment 7 Dafna Ron 2012-09-19 08:14:11 UTC
Created attachment 614243 [details]
log and libvirtvm.py with change

I tested the change shown in the diff in comment 4, and the VMs still fail to migrate with the AttributeError.

Attaching the logs and the modified libvirtvm.py.

Comment 8 Saveliev Peter 2012-09-25 07:20:41 UTC
In this case, VDSM is trying to set the read-only property blockDev of the Drive class. We have two ways to fix it:

http://gerrit.ovirt.org/#/c/8185/ — make the property read-write
http://gerrit.ovirt.org/#/c/8186/ — skip all read-only properties

Comment 9 Saveliev Peter 2012-09-26 12:06:43 UTC
The second way has been chosen: http://gerrit.ovirt.org/#/c/8186/
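
A minimal sketch of that approach, skipping read-only properties while applying an incoming attribute dict (illustrative names, not the actual gerrit change):

def set_device_attrs(dev, attrs):
    # Attributes arriving from the source host are applied with
    # setattr(); a name that resolves to a read-only property on the
    # class (such as blockDev) would raise "can't set attribute".
    for name, value in attrs.items():
        cls_attr = getattr(type(dev), name, None)
        if isinstance(cls_attr, property) and cls_attr.fset is None:
            continue  # read-only property: leave it to its getter
        setattr(dev, name, value)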

Comment 11 Michal Skrivanek 2012-10-01 09:15:07 UTC
merged u/s as 40a9289e92ebc05ec3b3a42b4ac650f12d5ba5b2

Comment 14 Yaniv Kaul 2012-10-15 16:27:44 UTC
vdsm -37: almost works (perhaps a separate bug?)
Thread-202164::INFO::2012-09-27 17:05:54,058::libvirtvm::367::vm.Vm::(run) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::Migration Progress: 60 seconds elapsed, 88% of data processed, 88% of mem processed
Thread-202163::DEBUG::2012-09-27 17:05:54,063::libvirtvm::316::vm.Vm::(run) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::setting migration downtime to 200
libvirtEventLoop::DEBUG::2012-09-27 17:05:56,294::libvirtvm::1401::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::event Suspended detail 0 opaque None
Thread-202160::DEBUG::2012-09-27 17:05:59,352::libvirtvm::322::vm.Vm::(cancel) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::canceling migration downtime thread
Thread-298::ERROR::2012-09-27 17:05:59,353::utils::377::vm.Vm::(collect) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::Stats function failed: <AdvancedStatsFunction _sampleDiskLatency at 0x1509070>
Traceback (most recent call last):
  File "/usr/share/vdsm/utils.py", line 373, in collect
    statsFunction()
  File "/usr/share/vdsm/utils.py", line 250, in __call__
    retValue = self._function(*args, **kwargs)
  File "/usr/share/vdsm/libvirtvm.py", line 142, in _sampleDiskLatency
    libvirt_qemu.VIR_DOMAIN_QEMU_MONITOR_COMMAND_DEFAULT)
  File "/usr/lib64/python2.6/site-packages/libvirt_qemu.py", line 37, in qemuMonitorCommand
    if ret is None: raise libvirt.libvirtError('virDomainQemuMonitorCommand() failed')
libvirtError: Requested operation is not valid: domain is not running
libvirtEventLoop::DEBUG::2012-09-27 17:05:59,353::libvirtvm::1401::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::event Stopped detail 3 opaque None
Thread-202160::DEBUG::2012-09-27 17:05:59,353::libvirtvm::372::vm.Vm::(stop) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::stopping migration monitor thread
Thread-202163::DEBUG::2012-09-27 17:05:59,354::libvirtvm::319::vm.Vm::(run) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::migration downtime thread exiting
Thread-202160::DEBUG::2012-09-27 17:05:59,384::vm::742::vm.Vm::(setDownStatus) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::Changed state to Down: Migration succeeded
Thread-202160::DEBUG::2012-09-27 17:05:59,384::utils::314::vm.Vm::(stop) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::Stop statistics collection
Thread-298::DEBUG::2012-09-27 17:05:59,385::utils::345::vm.Vm::(run) vmId=`d2688c7a-d8e8-4463-b5f8-e4c3714d6622`::Stats thread finished
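
The traceback in this log looks like the periodic disk-latency sampler racing with the end of migration: the domain stopped between the stats timer firing and the monitor command being issued. A sketch of how such a race could be tolerated (illustrative only, not the actual vdsm code; the QMP command is a stand-in):

import libvirt
import libvirt_qemu

def sample_disk_latency(dom):
    # The VM may go away (migration completed, shutdown) between the
    # stats timer firing and the monitor command running; treat
    # "domain is not running" as a benign race and skip the sample.
    try:
        return libvirt_qemu.qemuMonitorCommand(
            dom, '{"execute": "query-blockstats"}',
            libvirt_qemu.VIR_DOMAIN_QEMU_MONITOR_COMMAND_DEFAULT)
    except libvirt.libvirtError as e:
        if e.get_error_code() == libvirt.VIR_ERR_OPERATION_INVALID:
            return None  # domain already gone
        raise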

Comment 15 Yaniv Kaul 2012-10-15 16:30:26 UTC
For the above:
[root@master-vds1 ~]# rpm -qa |grep -E "vdsm|libvirt" |sort
libvirt-0.9.10-21.el6_3.5.x86_64
libvirt-client-0.9.10-21.el6_3.5.x86_64
libvirt-python-0.9.10-21.el6_3.5.x86_64
vdsm-4.9.6-37.0.el6_3.x86_64
vdsm-cli-4.9.6-37.0.el6_3.noarch
vdsm-python-4.9.6-37.0.el6_3.x86_64

Comment 16 Yaniv Kaul 2012-10-15 16:31:53 UTC
Created attachment 627554 [details]
vdsm-37 migration failure

Comment 17 Yaniv Kaul 2012-10-16 07:57:23 UTC
Comment on attachment 627554 [details]
vdsm-37 migration failure

Disregard - it looks like the VDSM that caused the failure was still -36, not -37.
Will verify again.

Comment 18 Yaniv Kaul 2012-10-16 09:26:24 UTC
OK, re-verified - works OK with vdsm-37 (migrated from source running -112).

Comment 21 errata-xmlrpc 2012-12-04 19:11:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-1508.html

