Bug 684586

Summary: VDSM: "destroy Called" command fails to clean VMs from the source host
Product: Red Hat Enterprise Linux 5
Component: vdsm22
Version: 5.6
Hardware: Unspecified
OS: Unspecified
Status: CLOSED DUPLICATE
Severity: unspecified
Priority: unspecified
Target Milestone: rc
Reporter: Dafna Ron <dron>
Assignee: Dan Kenigsberg <dkenigsb>
QA Contact: yeylon <yeylon>
CC: abaron, bazulay, danken, iheim, srevivo, ykaul
Doc Type: Bug Fix
Last Closed: 2011-03-14 09:07:38 UTC
Attachments: logs

Description Dafna Ron 2011-03-13 16:33:21 UTC
Created attachment 484015: logs

Description of problem:

After storage connectivity issues on the source host, the VM is migrated, but the "destroy Called" flow does not clean the VM up on the source side.

Version-Release number of selected component (if applicable):

ic104

How reproducible:
100%

Steps to Reproduce:
1. run a VM
2. block connectivity to storage from the source host using iptables
3. wait for migration to end and check the VM status on both the destination and source hosts (vdsClient -s 0 list table); a scripted version of these steps is sketched below
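
A scripted version of the steps above, as a sketch only: STORAGE_IP, the iptables rule, and the polling cadence are illustrative assumptions, not values from this report; only the vdsClient command is taken from the bug.

#!/usr/bin/env python
# Repro sketch for the steps above (Python 2, to match the vdsm era).
# STORAGE_IP and the polling loop are assumptions for illustration.
import subprocess
import time

STORAGE_IP = '192.168.0.10'   # assumption: the storage server's address

def run(cmd):
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    return p.communicate()[0]

# step 2: block connectivity to storage from the source host
run(['iptables', '-A', 'OUTPUT', '-d', STORAGE_IP, '-j', 'DROP'])

# step 3: poll until migration settles; run the same vdsClient
# command on the destination host to compare the two tables
for _ in range(30):
    time.sleep(10)
    print(run(['vdsClient', '-s', '0', 'list', 'table']))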
  
Actual results:

destroy is called, but the VM still appears in the source host's table (as Down) while it is Up on the destination (see the vdsClient output below)

Expected results:

after migration, the VM should appear (as Up) only on the destination host and should be removed from the source host's table

Additional info: full logs from both the destination and source hosts are attached.


source:

[root@blond-vdsf tmp]# vdsClient -s 0 list table
12511bc1-b724-4252-b117-178b43adfc13   5347  test1                Down      


destination: 

[root@blond-vdsg ~]# vdsClient -s 0 list table
12511bc1-b724-4252-b117-178b43adfc13  32152  test1                Up             


##############################################################################

Thread-218::INFO::2011-03-13 17:16:52,089::dispatcher::101::irs::Run and protect: teardownVolume, Return response: {'status': {'message': 'OK', 'code': 0}}
Thread-220::DEBUG::2011-03-13 17:16:52,200::vm::1309::vds.vmlog.34c5e64e-2c27-472b-afda-b41caabfd1bc::destroy Called
QMon-144::ERROR::2011-03-13 17:16:52,330::QemuMonitor::268::vds.vmlog.34c5e64e-2c27-472b-afda-b41caabfd1bc::QemuMonitor
Traceback (most recent call last):
  File "/usr/share/vdsm/QemuMonitor.py", line 266, in _work
    BasicQemuMonitor._work(self)
  File "/usr/share/vdsm/QemuMonitor.py", line 192, in _work
    self._sock.sendall(monq._input + '\n')
  File "<string>", line 1, in sendall
error: (32, 'Broken pipe')
QMon-144::INFO::2011-03-13 17:16:52,331::vm::695::vds.vmlog.34c5e64e-2c27-472b-afda-b41caabfd1bc::Monitor stopped
QMon-144::DEBUG::2011-03-13 17:16:52,331::vm::1434::vds.vmlog.34c5e64e-2c27-472b-afda-b41caabfd1bc::Changed state to Down: User shut down
Thread-145::ERROR::2011-03-13 17:16:52,330::guestIF::289::vds.vmlog.34c5e64e-2c27-472b-afda-b41caabfd1bc::Unexpected exception: Traceback (most recent call last):
  File "/usr/share/vdsm/guestIF.py", line 281, in run
    s, leftover = self._readMessage(leftover)
  File "/usr/share/vdsm/guestIF.py", line 254, in _readMessage
    s = self._sock.recv(self.READSIZE)
error: (104, 'Connection reset by peer')

Thread-146::ERROR::2011-03-13 17:16:52,335::utils::427::vds.vmlog.34c5e64e-2c27-472b-afda-b41caabfd1bc::Traceback (most recent call last):
  File "/usr/share/vdsm/utils.py", line 421, in run
    self._samples.append(self.sample())
  File "/usr/share/vdsm/vm.py", line 134, in sample
    s = VmSample(self._pid, self._ifids, self._vm)
  File "/usr/share/vdsm/vm.py", line 71, in __init__
    self.hdssample = HdsSample(vm)
  File "/usr/share/vdsm/vm.py", line 49, in __init__
    for line in vm._sendMonitorCommand('info blockstats').splitlines():
  File "/usr/share/vdsm/vm.py", line 1176, in _sendMonitorCommand
    out = self._mon.sendCommand(command, timeout)
  File "/usr/share/vdsm/QemuMonitor.py", line 130, in sendCommand
    return monq.wait(timeout)
  File "/usr/share/vdsm/QemuMonitor.py", line 51, in wait
    raise self._exception
error: (32, 'Broken pipe')

Thread-220::DEBUG::2011-03-13 17:16:53,363::vm::1361::vds.vmlog.34c5e64e-2c27-472b-afda-b41caabfd1bc::Total desktops after destroy of 34c5e64e-2c27-472b-afda-b41caabfd1bc is 1
Thread-220::DEBUG::2011-03-13 17:16:53,364::vm::1365::vds.vmlog.34c5e64e-2c27-472b-afda-b41caabfd1bc::qemu stdouterr: QEMU waiting for connection on: unix:/var/vdsm/34c5e64e-2c27-472b-afda-b41caabfd1bc.monitor.socket,server
QEMU waiting for connection on: unix:/var/vdsm/34c5e64e-2c27-472b-afda-b41caabfd1bc.guest.socket,server

Thread-220::INFO::2011-03-13 17:16:53,365::dispatcher::95::irs::Run and protect: teardownVolume, args: ( sdUUID=bfa3761e-13a1-4915-93a4-c99767af3248 spUUID=c7307264-9e99-4db3-b1:
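
For reference, the "Broken pipe" in the tracebacks above is the generic Unix failure when writing to a socket whose peer is gone: qemu has already exited, so QemuMonitor's sendall() gets errno 32. A minimal standalone demonstration of that failure mode (Python 2, unrelated to vdsm's own code):

# Standalone demonstration of error (32, 'Broken pipe'): writing to a
# unix-domain socket whose peer has closed, as happens when qemu exits
# while the monitor thread is still trying to send a command.
import errno
import socket

ours, peer = socket.socketpair()   # AF_UNIX stream pair
peer.close()                       # the "qemu" end goes away
try:
    ours.sendall('info blockstats\n')
except socket.error, e:
    assert e.args[0] == errno.EPIPE   # -> error: (32, 'Broken pipe')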

Comment 1 Dan Kenigsberg 2011-03-14 09:07:38 UTC
Dafna, this sounds much like bug 677728. Please reopen if you think it is in any way different from destroy() failing due to nonexistent storage.

*** This bug has been marked as a duplicate of bug 677728 ***
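
For context on the duplicate: the pattern comment 1 points to is destroy() hitting a storage-side exception (the storage is gone) before reaching the step that removes the VM from the host's table. A hypothetical sketch of that pattern; none of these names come from vdsm's actual code:

# Hypothetical illustration of destroy() failing due to nonexistent
# storage (the bug 677728 pattern comment 1 refers to); every name
# here is made up for the example.
class StorageUnavailableError(Exception):
    pass

def teardown_volumes(vm_id):
    # stand-in for the volume teardown step, which fails once the
    # storage is unreachable
    raise StorageUnavailableError(vm_id)

def destroy(vm_id, vm_table):
    # unguarded: the exception propagates, the 'del' never runs,
    # and 'list table' on the source keeps showing the VM
    teardown_volumes(vm_id)
    del vm_table[vm_id]

def destroy_guarded(vm_id, vm_table):
    # one possible shape of a fix: tolerate the teardown failure
    # and still remove the VM from the table
    try:
        teardown_volumes(vm_id)
    except StorageUnavailableError:
        pass
    del vm_table[vm_id]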