Bug 980054 - [LOG][vdsm] KeyError: 'domainID' during teardownImage in power-off to VM
Summary: [LOG][vdsm] KeyError: 'domainID' during teardownImage in power-off to VM
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.3.0
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 3.4.0
Assignee: Sergey Gotliv
QA Contact: Elad
URL:
Whiteboard: storage
Duplicates: 967602 (view as bug list)
Depends On:
Blocks: rhev3.4beta 1142926
 
Reported: 2013-07-01 10:19 UTC by Elad
Modified: 2018-12-03 19:16 UTC (History)
19 users

Fixed In Version: vdsm-4.10.2-22.0.el6ev sf17.2 ovirt-3.4.0-beta3
Doc Type: Bug Fix
Doc Text:
With this update, VDSM now correctly logs errors encountered during the teardownImage action when powering off virtual machines instead of raising an exception.
Clone Of:
Clones: 1009826 (view as bug list)
Environment:
Last Closed: 2014-06-09 13:24:57 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:
sgotliv: needinfo-


Attachments (Terms of Use)
logs (1.16 MB, application/x-gzip)
2013-07-01 10:19 UTC, Elad
no flags Details
logs (second reproduction) (716.15 KB, application/x-gzip)
2013-07-15 11:08 UTC, Elad
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:0504 0 normal SHIPPED_LIVE vdsm 3.4.0 bug fix and enhancement update 2014-06-09 17:21:35 UTC
oVirt gerrit 21973 0 None None None Never
oVirt gerrit 24467 0 None None None Never

Description Elad 2013-07-01 10:19:09 UTC
Created attachment 767315 [details]
logs

Description of problem:
Powering off a VM throws an error:

Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 341, in teardownVolumePath
    res = self.irs.teardownImage(drive['domainID'],
  File "/usr/share/vdsm/vm.py", line 1317, in __getitem__
    raise KeyError(key)
KeyError: 'domainID'


This happened in two different setups.

Version-Release number of selected component (if applicable):
vdsm-4.11.0-69.gitd70e3d5.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Power off a VM

Actual results:
We get a KeyError every time we power off a VM.


Additional info:
logs
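
The failure mode above can be reproduced with a minimal sketch (the Drive class here is an illustrative stand-in, not the actual vdsm code):

```python
class Drive:
    """Minimal stand-in for vdsm's Drive device object (illustrative only)."""

    def __init__(self, **conf):
        self._conf = conf

    def __getitem__(self, key):
        # Mirrors the __getitem__ in the traceback above: a missing key
        # raises KeyError instead of returning a default.
        if key not in self._conf:
            raise KeyError(key)
        return self._conf[key]


# A cdrom drive carries no storage-domain keys, so accessing
# drive['domainID'] during teardown raises KeyError('domainID').
cdrom = Drive(device="cdrom", path="")
try:
    cdrom["domainID"]
except KeyError as err:
    print("KeyError:", err)  # prints: KeyError: 'domainID'
```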

Comment 1 Federico Simoncelli 2013-07-09 09:40:59 UTC
This has been fixed in I8ba965 and I68bc49 (bug 962549):

cfb37d4 Deactivate libvirtVM.Drives()
d0af44b Support teardownVolumePath(None)

Current code can handle a missing domainID key.

In any case, you were using a non-official build: d70e3d5 is not a valid hash in the code base (probably a temporary scratch build with some custom patches).

Comment 2 Elad 2013-07-15 11:05:58 UTC
We still get the error after powering off a VM:

Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 332, in teardownVolumePath
    res = self.irs.teardownImage(drive['domainID'],
  File "/usr/share/vdsm/vm.py", line 1317, in __getitem__
    raise KeyError(key)
KeyError: 'domainID'
Thread-59::DEBUG::2013-07-15 14:00:28,054::clientIF::330::vds::(teardownVolumePath) ### drive VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:True _checkIoTuneCategories:<bound method 

full log will be attached

Comment 3 Elad 2013-07-15 11:08:35 UTC
Created attachment 773688 [details]
logs (second reproduction)

Comment 4 Ayal Baron 2013-07-16 14:16:49 UTC
*** Bug 967602 has been marked as a duplicate of this bug. ***

Comment 6 Sean Cohen 2013-07-25 14:59:40 UTC
Fede, can you please take a look at the logs provided in comment 3?

Comment 7 Federico Simoncelli 2013-07-25 15:37:26 UTC
Edu, you already fixed this, right? Are we still missing something for some special case?

Comment 8 Sergey Gotliv 2013-07-31 19:28:09 UTC
According to the log, this is a cdrom.

Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 331, in teardownVolumePath
    res = self.irs.teardownImage(drive['domainID'],
  File "/usr/share/vdsm/vm.py", line 1344, in __getitem__
    raise KeyError(key)
KeyError: 'domainID'
Thread-2266::WARNING::2013-07-31 22:05:00,067::clientIF::337::vds::(teardownVolumePath) Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:False _checkIoTuneCategories:<bound method Drive._checkIoTuneCategories of <vm.Drive object at 0x7fdd90502c90>> _customize:<bound method Drive._customize of <vm.Drive object at 0x7fdd90502c90>> _deviceXML:<disk device="cdrom" snapshot="no" type="file"><source file="" startupPolicy="optional"/><target bus="ide" dev="hdc"/><readonly/><serial></serial></disk> _makeName:<bound method Drive._makeName of <vm.Drive object at 0x7fdd90502c90>> _validateIoTuneParams:<bound method Drive._validateIoTuneParams of <vm.Drive object at 0x7fdd90502c90>> address:{u'bus': u'1', u'controller': u'0', u'type': u'drive', u'target': u'0', u'unit': u'0'} alias:ide0-1-0 apparentsize:0 blockDev:False cache:none conf:{'status': 'Up', 'acpiEnable': 'true', 'emulatedMachine': 'rhel6.4.0', 'vmId': '7f7f13c2-d9d7-4fe1-8a3f-426c2ad60feb', 'pid': '14911', 'memGuaranteedSize': 1024, 'timeOffset': '0', 'keyboardLayout': 'en-us', 'displayPort': u'5900', 'displaySecurePort': u'5

Comment 10 Michal Skrivanek 2013-09-19 08:12:54 UTC
IIUC, cleanupDrives() runs through the whole _devices[DISK_DEVICES] list, which also contains a cdrom (and floppy). Is it missing something like a drive['device'] == 'disk' check?

Comment 11 Sergey Gotliv 2013-09-19 13:01:25 UTC
(In reply to Michal Skrivanek from comment #10)
> IIUC, cleanupDrives() runs through the whole _devices[DISK_DEVICES] list,
> which also contains a cdrom (and floppy). Is it missing something like a
> drive['device'] == 'disk' check?
It's not good enough: a direct LUN is a disk but not a VDSM image, and teardownVolumePath should be called for VDSM images only.

Comment 12 Michal Skrivanek 2013-09-19 13:03:08 UTC
Alright, then that's why there is the isVdsmImage() check, no?
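
The distinction drawn in the two comments above can be sketched as a key-presence check rather than a device-type check (a hypothetical rendering; the real isVdsmImage helper in vdsm may differ):

```python
def is_vdsm_image(drive_conf):
    """Return True only for drives backed by a vdsm-managed image.

    A direct LUN reports device == 'disk' yet has no storage-domain
    UUIDs, so testing the device type alone is not sufficient; all of
    the image UUID keys must be present.
    """
    required = ("domainID", "poolID", "imageID", "volumeID")
    return all(key in drive_conf for key in required)


# Managed image: all four UUID keys present (values are placeholders).
image = {"device": "disk", "domainID": "d1", "poolID": "p1",
         "imageID": "i1", "volumeID": "v1"}
# Direct LUN: a 'disk' identified by a GUID, with no image UUIDs.
direct_lun = {"device": "disk", "GUID": "36006016..."}
cdrom = {"device": "cdrom"}

print(is_vdsm_image(image))       # True
print(is_vdsm_image(direct_lun))  # False
print(is_vdsm_image(cdrom))       # False
```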

Comment 13 Sergey Gotliv 2013-09-19 13:06:06 UTC
Protik,

Why do you think this bug is blocking another one?

The real issue was resolved already, see comment 1. Now we are dealing with the log issue only.

Comment 14 Sergey Gotliv 2013-09-19 13:08:48 UTC
Pratik,

My apologies, I misspelled your name, sorry!

Comment 17 Pavel Zhukov 2013-09-24 07:46:34 UTC
vdsm-4.10.2-25.0.el6ev.x86_64 is affected. 

Thread-6203::WARNING::2013-09-15 08:33:37,709::clientIF::340::vds::(teardownVolumePath) Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:False _customize:<bound method Drive._customize of <libvirtvm.Drive object at 0x7f570820ab50>> _makeName:<bound method Drive._makeName of <libvirtvm.Drive object at 0x7f570820ab50>....
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 334, in teardownVolumePath
    res = self.irs.teardownImage(drive['domainID'],
  File "/usr/share/vdsm/libvirtvm.py", line 1071, in __getitem__
    raise KeyError(key)
KeyError: 'domainID'

Comment 18 Nir Soffer 2013-09-24 11:57:10 UTC
(In reply to Federico Simoncelli from comment #1)
> This is has been fixed in I8ba965 and I68bc49 (bug 962549):
> 
> cfb37d4 Deactivate libvirtVM.Drives()
> d0af44b Support teardownVolumePath(None)

Those fixes do not fix the KeyError mentioned in this bug.

Comment 19 Nir Soffer 2013-09-24 12:01:41 UTC
This is just a log issue during shutdown, caused by trying to tear down a non-vdsm image (a CD-ROM in the attached log). It has no effect on the shutdown process, and it does not indicate a real error.
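
The resolution described here, and in the doc text above, amounts to guarding the teardown call and downgrading the failure to a log message. A hedged sketch of that shape (names and the FakeIRS helper are illustrative, not the actual vdsm code):

```python
import logging

log = logging.getLogger("vds")

_REQUIRED = ("domainID", "poolID", "imageID", "volumeID")


def teardown_volume_path(irs, drive):
    """Illustrative sketch of the guarded teardown; not the vdsm source."""
    if drive is None or not all(key in drive for key in _REQUIRED):
        # Non-vdsm drives (cdrom, floppy, direct LUN) are skipped with a
        # warning instead of letting a KeyError escape during power-off.
        log.warning("Drive is not a vdsm image: %s", drive)
        return None
    return irs.teardownImage(drive["domainID"], drive["poolID"],
                             drive["imageID"], drive["volumeID"])


class FakeIRS:
    """Records teardownImage calls for the demonstration below."""

    def __init__(self):
        self.calls = []

    def teardownImage(self, *args):
        self.calls.append(args)


irs = FakeIRS()
teardown_volume_path(irs, {"device": "cdrom"})   # warned and skipped
teardown_volume_path(irs, {"domainID": "d", "poolID": "p",
                           "imageID": "i", "volumeID": "v"})
print(len(irs.calls))  # 1
```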

Comment 20 Elad 2014-02-19 16:33:43 UTC
teardownImage no longer triggers a KeyError traceback in vdsm.log when powering off a VM.

Verified using ovirt-3.4-beta3:
vdsm-4.14.3-0.el6.x86_64
ovirt-engine-3.4.0-0.11.beta3.el6.noarch

Comment 23 errata-xmlrpc 2014-06-09 13:24:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0504.html

