Bug 717847 - There are virDomainGetBlockInfo errors when vdsmd restarts
Summary: There are virDomainGetBlockInfo errors when vdsmd restarts
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Assignee: Igor Lvovsky
QA Contact: yeylon@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-06-30 07:31 UTC by Rami Vaknin
Modified: 2016-04-18 06:40 UTC
10 users

Fixed In Version: vdsm-4.9-82
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-12-06 07:25:24 UTC
Target Upstream Version:


Attachments (Terms of Use)
VDSM Logs (1.62 MB, application/x-gzip)
2011-06-30 07:31 UTC, Rami Vaknin


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2011:1782 0 normal SHIPPED_LIVE new packages: vdsm 2011-12-06 11:55:51 UTC

Description Rami Vaknin 2011-06-30 07:31:42 UTC
Created attachment 510591 [details]
VDSM Logs

Environment:
RHEVM 3.0 on dev env, backend on RHEL6, last commit 47b2a33cbd4fa2baacace7eab126bac850a793d6
vdsm-4.9-79.el6.x86_64, libvirt-0.9.2-1.el6.x86_64 (note that it also reproduced with vdsm-4.9-75.el6.x86_64.rpm)

When vdsmd starts, the links to the VMs' disks are removed and restored after a few seconds. While the links do not exist, all calls to virDomainGetBlockInfo fail because the paths to the VMs' disks do not exist.
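A minimal sketch of how a stats sampler could tolerate the window in which the links are missing (illustrative only, not vdsm's actual fix; `safe_high_write_check` and `FakeDom` are hypothetical names):

```python
import os


def safe_high_write_check(dom, drive_path):
    """Query block allocation only while the drive path exists.

    During a vdsmd restart the /rhev/data-center links are briefly
    missing, so dom.blockInfo() (virDomainGetBlockInfo) would raise
    libvirtError. Skipping the sample for that window avoids the
    spurious traceback; the next polling cycle retries normally.
    """
    if not os.path.exists(drive_path):
        return None  # link not restored yet; sample again later
    return dom.blockInfo(drive_path, 0)
```

The guard only suppresses the predictable failure mode (missing path); any other libvirt error would still propagate and be logged.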

From VDSM Logs:

Thread-11::DEBUG::2011-06-30 10:17:51,479::hsm::204::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center'
Thread-11::DEBUG::2011-06-30 10:17:51,480::hsm::233::Storage.HSM::(__cleanStorageRepository) White list is ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt', '/rhev/data-center/mnt/filer01.qa.lab.tlv.redhat.com:/export/rami/export/rest', '/rhev/data-center/mnt/filer01.qa.lab.tlv.redhat.com:/export/rami/export/rest/*'].
Thread-11::DEBUG::2011-06-30 10:17:51,480::hsm::235::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers.
Thread-11::WARNING::2011-06-30 10:17:51,515::hsm::266::Storage.HSM::(__cleanStorageRepository) Could not delete dir '/rhev/data-center/mnt/filer01.qa.lab.tlv.redhat.com:/export/rami/export/rest' (OSError: [Errno 13] Permission denied: '/rhev/data-center/mnt/filer01.qa.lab.tlv.redhat.com:/export/rami/export').
Thread-11::WARNING::2011-06-30 10:17:51,516::hsm::266::Storage.HSM::(__cleanStorageRepository) Could not delete dir '/rhev/data-center/mnt/filer01.qa.lab.tlv.redhat.com:/export/rami/export/rest' (OSError: [Errno 13] Permission denied: '/rhev/data-center/mnt/filer01.qa.lab.tlv.redhat.com:/export/rami').
Thread-11::WARNING::2011-06-30 10:17:51,516::hsm::266::Storage.HSM::(__cleanStorageRepository) Could not delete dir '/rhev/data-center/mnt/filer01.qa.lab.tlv.redhat.com:/export/rami/export/rest' (OSError: [Errno 13] Permission denied: '/rhev/data-center/mnt/filer01.qa.lab.tlv.redhat.com:/export').
Thread-11::WARNING::2011-06-30 10:17:51,518::hsm::266::Storage.HSM::(__cleanStorageRepository) Could not delete dir '/rhev/data-center/mnt/filer01.qa.lab.tlv.redhat.com:/export/rami/export/rest' (OSError: [Errno 39] Directory not empty: '/rhev/data-center/mnt/filer01.qa.lab.tlv.redhat.com:').
Thread-11::WARNING::2011-06-30 10:17:51,518::hsm::266::Storage.HSM::(__cleanStorageRepository) Could not delete dir '/rhev/data-center/mnt/filer01.qa.lab.tlv.redhat.com:/export/rami/export/rest' (OSError: [Errno 20] Not a directory: '/rhev/data-center/763b481d-cd78-4cd8-931d-510cbd48f278/6f55553f-b208-4cf4-8ddc-0d59bc78b133').
Thread-11::WARNING::2011-06-30 10:17:51,519::hsm::266::Storage.HSM::(__cleanStorageRepository) Could not delete dir '/rhev/data-center/mnt/filer01.qa.lab.tlv.redhat.com:/export/rami/export/rest' (OSError: [Errno 20] Not a directory: '/rhev/data-center/763b481d-cd78-4cd8-931d-510cbd48f278/2fae691e-6219-4087-a575-0e340660a338').
Thread-11::WARNING::2011-06-30 10:17:51,519::hsm::266::Storage.HSM::(__cleanStorageRepository) Could not delete dir '/rhev/data-center/mnt/filer01.qa.lab.tlv.redhat.com:/export/rami/export/rest' (OSError: [Errno 20] Not a directory: '/rhev/data-center/763b481d-cd78-4cd8-931d-510cbd48f278/mastersd').
Thread-11::WARNING::2011-06-30 10:17:51,520::hsm::266::Storage.HSM::(__cleanStorageRepository) Could not delete dir '/rhev/data-center/mnt/filer01.qa.lab.tlv.redhat.com:/export/rami/export/rest' (OSError: [Errno 20] Not a directory: '/rhev/data-center/763b481d-cd78-4cd8-931d-510cbd48f278/495b1cf6-ea0f-4f73-8482-a953887ecab0').
Thread-11::WARNING::2011-06-30 10:17:51,520::hsm::266::Storage.HSM::(__cleanStorageRepository) Could not delete dir '/rhev/data-center/mnt/filer01.qa.lab.tlv.redhat.com:/export/rami/export/rest' (OSError: [Errno 39] Directory not empty: '/rhev/data-center/763b481d-cd78-4cd8-931d-510cbd48f278').
Thread-11::DEBUG::2011-06-30 10:17:51,520::hsm::268::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'
Thread-11::INFO::2011-06-30 10:17:51,521::hsm::394::Storage.HSM::(_restorePool) RESTOREPOOL: 763b481d-cd78-4cd8-931d-510cbd48f278
Thread-11::INFO::2011-06-30 10:17:51,521::sp::404::Storage.StoragePool::(reconnect) Trying to reconnect to pool: 763b481d-cd78-4cd8-931d-510cbd48f278
Thread-11::INFO::2011-06-30 10:17:51,522::sp::337::Storage.StoragePool::(connect) Connect host #1 to the storage pool 763b481d-cd78-4cd8-931d-510cbd48f278 with master domain: 2fae691e-6219-4087-a575-0e340660a338 (ver = 1)
Thread-11::DEBUG::2011-06-30 10:17:51,522::misc::1000::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-11::DEBUG::2011-06-30 10:17:51,523::misc::1002::SamplingMethod::(__call__) Got in to sampling method
Thread-11::DEBUG::2011-06-30 10:17:51,523::misc::1000::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-11::DEBUG::2011-06-30 10:17:51,523::misc::1002::SamplingMethod::(__call__) Got in to sampling method
Thread-11::DEBUG::2011-06-30 10:17:51,524::iscsi::700::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
Thread-43::ERROR::2011-06-30 10:17:51,538::utils::371::vm.Vm::(collect) vmId=`f4a75002-56e5-4338-af59-5daa861a13c9`::Stats function failed: <AdvancedStatsFunction _highWrite at 0x27a3378>
Traceback (most recent call last):
  File "/usr/share/vdsm/utils.py", line 368, in collect
    statsFunction()
  File "/usr/share/vdsm/utils.py", line 252, in __call__
    retValue = self._function(*args, **kwargs)
  File "/usr/share/vdsm/libvirtvm.py", line 61, in _highWrite
    self._vm._dom.blockInfo(vmDrive.path, 0)
  File "/usr/share/vdsm/libvirtvm.py", line 302, in f
    ret = attr(*args, **kwargs)
  File "/usr/share/vdsm/libvirtconnection.py", line 63, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1047, in blockInfo
    if ret is None: raise libvirtError ('virDomainGetBlockInfo() failed', dom=self)

Comment 3 Dan Kenigsberg 2011-06-30 08:20:41 UTC
I'm dropping the Regression noise-word, since this bug has no functional effect. It just dirties the log.

P.S. according to Ayal, vdsm should not be waiting for connectStoragePool - it should reconnect automatically after restart.

Comment 4 Igor Lvovsky 2011-06-30 14:32:13 UTC
No reason for 'blocker' flag here, dropping it.

Comment 5 Igor Lvovsky 2011-07-13 14:34:21 UTC
http://gerrit.usersys.redhat.com/#change,697

Comment 7 Tomas Dosek 2011-07-19 10:57:04 UTC
Verified on vdsm-4.9-82.el6 - the scenario described above no longer reproduces.

Comment 8 errata-xmlrpc 2011-12-06 07:25:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2011-1782.html
