Bug 903305 - Live migration fails : SSL error
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Assigned To: Nobody's working on this, feel free to take it
Whiteboard: infra virt
Keywords: Regression
Reported: 2013-01-23 11:58 EST by Ohad Basan
Modified: 2014-10-30 18:34 EDT
CC List: 11 users

Doc Type: Bug Fix
Last Closed: 2013-02-03 08:39:21 EST
Type: Bug


Attachments: None
Description Ohad Basan 2013-01-23 11:58:14 EST
Description of problem:
Live VM migration fails with an SSL error.

Dummy-423::DEBUG::2013-01-23 18:51:34,870::misc::83::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.132579 s, 7.7 MB/s\n'; <rc> = 0
Thread-2108::ERROR::2013-01-23 18:51:36,214::SecureXMLRPCServer::77::root::(handle_error) client ('10.35.16.40', 37821)
Traceback (most recent call last):
  File "/usr/lib64/python2.6/SocketServer.py", line 560, in process_request_thread
    self.finish_request(request, client_address)
  File "/usr/lib64/python2.6/site-packages/vdsm/SecureXMLRPCServer.py", line 68, in finish_request
    request.do_handshake()
  File "/usr/lib64/python2.6/ssl.py", line 279, in do_handshake
    self._sslobj.do_handshake()
SSLError: [Errno 1] _ssl.c:490: error:14094416:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate unknown
Thread-2107::DEBUG::2013-01-23 18:51:36,225::task::568::TaskManager.Task::(_updateState) Task=`8f0ba144-29c3-403b-8df2-d1fd07511037`::moving from state init -> state preparing
Thread-2107::INFO::2013-01-23 18:51:36,225::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-2107::INFO::2013-01-23 18:51:36,226::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'b87d69b3-98a1-4371-8e9c-62db5b27a1aa': {'delay': '0.00906610488892', 'lastCheck': '9.2', 'code': 0, 'valid': True}, 'a96d90b3-e44e-4600-b607-5947320f62a3': {'delay': '0.133712053299', 'lastCheck': '2.8', 'code': 0, 'valid': True}}
Thread-2107::DEBUG::2013-01-23 18:51:36,226::task::1151::TaskManager.Task::(prepare) Task=`8f0ba144-29c3-403b-8df2-d1fd07511037`::finished: {'b87d69b3-98a1-4371-8e9c-62db5b27a1aa': {'delay': '0.00906610488892', 'lastCheck': '9.2', 'code': 0, 'valid': True}, 'a96d90b3-e44e-4600-b607-5947320f62a3': {'delay': '0.133712053299', 'lastCheck': '2.8', 'code': 0, 'valid': True}}
Thread-2107::DEBUG::2013-01-23 18:51:36,226::task::568::TaskManager.Task::(_updateState) Task=`8f0ba144-29c3-403b-8df2-d1fd07511037`::moving from state preparing -> state finished
Thread-2107::DEBUG::2013-01-23 18:51:36,226::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-2107::DEBUG::2013-01-23 18:51:36,226::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-2107::DEBUG::2013-01-23 18:51:36,227::task::957::TaskManager.Task::(_decref) Task=`8f0ba144-29c3-403b-8df2-d1fd07511037`::ref 0 aborting False
Thread-2104::DEBUG::2013-01-23 18:51:36,314::libvirtvm::380::vm.Vm::(cancel) vmId=`1ed1c4ab-4592-488c-8194-49c7c825e739`::canceling migration downtime thread
Thread-2104::DEBUG::2013-01-23 18:51:36,314::libvirtvm::439::vm.Vm::(stop) vmId=`1ed1c4ab-4592-488c-8194-49c7c825e739`::stopping migration monitor thread
Thread-2105::DEBUG::2013-01-23 18:51:36,315::libvirtvm::377::vm.Vm::(run) vmId=`1ed1c4ab-4592-488c-8194-49c7c825e739`::migration downtime thread exiting
Thread-2104::ERROR::2013-01-23 18:51:36,315::vm::197::vm.Vm::(_recover) vmId=`1ed1c4ab-4592-488c-8194-49c7c825e739`::internal error process exited while connecting to monitor: qemu-kvm: -drive file=/rhev/data-center/665be3c3-8283-43aa-9a60-7092e78bb388/a96d90b3-e44e-4600-b607-5947320f62a3/images/223cd5a8-722d-4066-820e-1f9b79f69eb5/d775cf3e-9fc7-4361-8009-61d93b876304,if=none,id=drive-virtio-disk0,format=qcow2,serial=223cd5a8-722d-4066-820e-1f9b79f69eb5,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /rhev/data-center/665be3c3-8283-43aa-9a60-7092e78bb388/a96d90b3-e44e-4600-b607-5947320f62a3/images/223cd5a8-722d-4066-820e-1f9b79f69eb5/d775cf3e-9fc7-4361-8009-61d93b876304: Operation not permitted

Thread-2104::ERROR::2013-01-23 18:51:36,440::vm::285::vm.Vm::(run) vmId=`1ed1c4ab-4592-488c-8194-49c7c825e739`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 270, in run
    self._startUnderlyingMigration()
  File "/usr/share/vdsm/libvirtvm.py", line 504, in _startUnderlyingMigration
    None, maxBandwidth)
  File "/usr/share/vdsm/libvirtvm.py", line 540, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 104, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1178, in migrateToURI2
    if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self)
libvirtError: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/rhev/data-center/665be3c3-8283-43aa-9a60-7092e78bb388/a96d90b3-e44e-4600-b607-5947320f62a3/images/223cd5a8-722d-4066-820e-1f9b79f69eb5/d775cf3e-9fc7-4361-8009-61d93b876304,if=none,id=drive-virtio-disk0,format=qcow2,serial=223cd5a8-722d-4066-820e-1f9b79f69eb5,cache=none,werror=stop,re
:
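
For the SSL side of the failure, the handshake against a host's vdsm port can be probed directly. A minimal sketch, assuming the default vdsm port 54321, a CA bundle at /etc/pki/vdsm/certs/cacert.pem, and a hypothetical host name; it is written for a current Python 3 rather than the Python 2.6 stack shown in the traceback:

import socket
import ssl

# Hypothetical values -- substitute the real destination host; 54321 is the
# usual vdsm port and the CA path is an assumption about this setup.
HOST = "destination-host.example.com"
PORT = 54321
CA_FILE = "/etc/pki/vdsm/certs/cacert.pem"

context = ssl.create_default_context(cafile=CA_FILE)
context.check_hostname = False  # trust is what is being tested, not the host name

sock = socket.create_connection((HOST, PORT), timeout=10)
try:
    tls = context.wrap_socket(sock, server_hostname=HOST)
    print("handshake OK, peer subject:", tls.getpeercert().get("subject"))
    tls.close()
except ssl.SSLError as exc:
    # An alert like "certificate unknown" here points at a certificate the
    # peer does not trust, i.e. the condition logged by SecureXMLRPCServer above.
    print("handshake failed:", exc)
finally:
    sock.close()

If this probe fails in either direction with the same alert, the certificate exchange between the engine and the hosts is the likely place to look.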



How reproducible:
always

Steps to Reproduce:
1. Create a data center with two hosts and one VM.
2. Start the VM.
3. Click "Migrate" to perform a live migration.
  
Actual results:
Fatal error during migration

Expected results:
Migration finishes successfully.

Additional info:
Comment 3 Barak 2013-01-27 06:13:26 EST
Does it reproduce 100% of the time?
Do the hosts have proper DNS resolution of each other, in both directions?
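For reference, a minimal way to script that check (hypothetical host names; run it from both hypervisors so each direction is covered):

import socket

HOSTS = ["source-host.example.com", "destination-host.example.com"]  # hypothetical names

for name in HOSTS:
    try:
        addr = socket.gethostbyname(name)             # forward lookup
        reverse_name = socket.gethostbyaddr(addr)[0]  # reverse lookup
        print("%s -> %s -> %s" % (name, addr, reverse_name))
    except socket.error as exc:
        print("lookup failed for %s: %s" % (name, exc))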
Comment 6 Yair Zaslavsky 2013-02-03 03:44:37 EST
Can this be checked on hosts that don't have a DHCP lease issue?
Comment 7 Ohad Basan 2013-02-03 08:39:21 EST
Checked again.
It was an environment problem.
Closing.
