Bug 601741 - [vdsm] [libvirt intg] restart libvirtd service during live migration results in a corrupted system state
Summary: [vdsm] [libvirt intg] restart libvirtd service during live migration results ...
Status: CLOSED DUPLICATE of bug 622446
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm   
Version: 6.1
Hardware: All
OS: Linux
Target Milestone: rc
Assignee: Dan Kenigsberg
QA Contact: Haim
Whiteboard: lvdsm & libvirt integration
Depends On:
Blocks: 581275
Reported: 2010-06-08 14:43 UTC by Haim
Modified: 2014-01-13 00:46 UTC
10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2011-01-02 10:32:58 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Description Haim 2010-06-08 14:43:10 UTC
Description of problem:

restarting the libvirtd service during live migration on the source host results in a corrupted system state, characterized by 3 different behaviours:

note: this looks like a vdsm bug rather than a libvirt bug (at least in part), and it requires further investigation.

1) vdsm doesn't respond for several minutes (via vdsClient) - managed to reproduce 3 times, not consistent.

2) unable to run new vms on that host for the first several minutes after the event occurs:

Thread-70::ERROR::2010-06-08 09:43:03,781::dispatcher::106::irs::Traceback (most recent call last):
  File "/usr/share/vdsm/storage/dispatcher.py", line 97, in run
    result = ctask.prepare(self.func, *args, **kwargs)
  File "/usr/share/vdsm/storage/task.py", line 1282, in prepare
    raise self.error
StoragePoolUnknown: Unknown pool id, pool not connected: ('606d043c-ef9c-4c6f-848b-5bd89325c78d',)

Thread-70::INFO::2010-06-08 09:43:03,782::vm::600::vds.vmlog.d9d4ddad-c8a0-4177-88c9-a38e17771a15::Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 574, in _execqemu
  File "/usr/share/vdsm/libvirtvm.py", line 536, in _run
    self._initDriveList(self.conf.get('drives', []))
  File "/usr/share/vdsm/vm.py", line 626, in _initDriveList
    drive['path'] = self._prepareVolumePath(drive)
  File "/usr/share/vdsm/vm.py", line 718, in _prepareVolumePath
    raise VolumeError(drive)
VolumeError: {'index': '0', 'domainID': 'd9124e52-d42a-4b0c-8657-523bc5b6733b', 'format': 'cow', 'volumeID': '9205859a-bc75-400d-b9c2-7a15d5188c81', 'imageID': 'a6431da5-09b5-42b0-8c53-a0f454bc8925', 'poolID': '606d043c-ef9c-4c6f-848b-5bd89325c78d', 'propagateErrors': 'off', 'if': 

3) migration to that host fails with 'unknown failure' on the source host:

Thread-5051::ERROR::2010-06-08 10:11:09,977::vm::336::vds.vmlog.d9d4ddad-c8a0-4177-88c9-a38e17771a15::Unknown failure
Thread-5051::ERROR::2010-06-08 10:11:10,701::vm::457::vds.vmlog.d9d4ddad-c8a0-4177-88c9-a38e17771a15::Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 448, in run
  File "/usr/share/vdsm/libvirtvm.py", line 98, in _startUnderlyingMigration
    libvirt.VIR_MIGRATE_PEER2PEER, None, 0)
  File "/usr/share/vdsm/libvirtvm.py", line 126, in f
    raise e
libvirtError: Unknown failure

Version-Release number of selected component (if applicable): 


Steps to Reproduce:
1. create a new vm and make sure it runs over host 1
2. migrate that vm from host 1 to host 2 and restart the libvirtd service - the restart needs to happen very close to the migrate command; suggested sequence:

host1# sleep 2 && service libvirtd restart 
start migrate command
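
The timing race above can be scripted in one shot; a minimal sketch, run on host 1, which backgrounds the delayed restart so the migration can be started immediately afterwards (the vmId/src/dst values below are placeholders, and the exact vdsClient migrate arguments are assumptions, not confirmed from this report):

```shell
# Hypothetical reproduction helper, run on host 1.
# The delayed libvirtd restart runs in a background subshell so the
# migrate command can be issued right away; the restart then lands
# roughly 2 seconds into the live migration.
(sleep 2 && service libvirtd restart) &

# Start the migration immediately (placeholder vmId and hosts).
vdsClient -s 0 migrate vmId=d9d4ddad-c8a0-4177-88c9-a38e17771a15 \
    method=online src=host1 dst=host2

wait  # reap the background restart before the script exits
```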

Comment 2 RHEL Product and Program Management 2010-06-08 15:13:25 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for

Comment 4 Dan Kenigsberg 2011-01-02 10:32:58 UTC
this smells very much like a dup of bug 622446.

*** This bug has been marked as a duplicate of bug 622446 ***
