Bug 1036680 - [vdsm] Disk hotplug/hotunplug fails in case the disk was live migrated before
Summary: [vdsm] Disk hotplug/hotunplug fails in case the disk was live migrated before
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.3.0
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
: 3.3.0
Assignee: Yeela Kaplan
QA Contact: Leonid Natapov
URL:
Whiteboard: storage
Depends On:
Blocks: 3.3rc1
 
Reported: 2013-12-02 13:18 UTC by Elad
Modified: 2016-02-10 20:22 UTC
10 users

Fixed In Version: is29
Doc Type: Bug Fix
Doc Text:
A disk that was live migrated between storage domains could not be hotplugged or hotunplugged to or from virtual machines. Now, VDSM updates the domains list for disks attached to virtual machines after a live migration, so the disks can be successfully hotplugged or hotunplugged.
Clone Of:
Environment:
Last Closed: 2014-01-21 16:22:15 UTC
oVirt Team: Storage
Target Upstream Version:
scohen: Triaged+


Attachments
logs (487.06 KB, application/x-gzip)
2013-12-02 13:18 UTC, Elad
no flags Details
logs2 (relevant logs) (1.86 MB, application/x-gzip)
2013-12-02 16:30 UTC, Elad
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:0040 0 normal SHIPPED_LIVE vdsm bug fix and enhancement update 2014-01-21 20:26:21 UTC
oVirt gerrit 22536 0 None None None Never
oVirt gerrit 22562 0 None None None Never

Description Elad 2013-12-02 13:18:17 UTC
Created attachment 831570 [details]
logs

Description of problem:
A disk that was live migrated between storage domains cannot be hotplugged to, or hotunplugged from, a VM. Note that hotplug/unplug works fine when the disk is part of a snapshot, so the failure is caused by another phase of the live storage migration (LSM).

Version-Release number of selected component (if applicable):
is25
vdsm-4.13.0-0.10.beta1.el6ev.x86_64
rhevm-3.3.0-0.37.beta1.el6ev.noarch


How reproducible:
100%

Steps to Reproduce:
Reproduced on block storage with more than one SD:
1. Create a VM and install an OS
2. Add a second disk to the VM and hotplug it
3. Move the new disk to another SD (live storage migration)
4. Hotunplug the disk from the VM

Actual results:
The operation fails on vdsm:

Thread-48016::DEBUG::2013-12-02 15:05:43,673::vm::3487::vm.Vm::(hotunplugDisk) vmId=`28ae983b-1815-4a27-b751-4ddd4af5b90a`::Hotunplug disk xml: <disk device="disk" snapshot="no" type="block">
        <address bus="0x00" domain="0x0000" function="0x0" slot="0x09" type="pci"/>
        <source dev="/rhev/data-center/mnt/blockSD/3bbf19c4-3a0d-4a91-883a-9824245659ee/images/b1a4778d-354e-4720-9ec3-928002bd6483/07accae5-a751-42c9-bbc2-a3fcd7286ebb"/>
        <target bus="virtio" dev="vdd"/>
        <serial>b1a4778d-354e-4720-9ec3-928002bd6483</serial>
        <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2"/>
</disk>

Thread-48016::ERROR::2013-12-02 15:05:43,673::BindingXMLRPC::1003::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 989, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/BindingXMLRPC.py", line 272, in vmHotunplugDisk
    return vm.hotunplugDisk(params)
  File "/usr/share/vdsm/API.py", line 454, in hotunplugDisk
    return curVm.hotunplugDisk(params)
  File "/usr/share/vdsm/vm.py", line 3490, in hotunplugDisk
    self.sdIds.remove(drive.domainID)
ValueError: list.remove(x): x not in list
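The traceback can be reduced to a minimal sketch: VDSM tracks the storage domains of plugged disks in a list (`sdIds`), and live storage migration changes the drive's `domainID` without updating that list, so the later `list.remove` raises `ValueError`. The `Drive`/`Vm` classes below are hypothetical simplifications for illustration, not vdsm code; only the attribute names `sdIds` and `domainID` come from the traceback.

```python
# Minimal sketch of the failure mode; Drive/Vm are hypothetical
# simplifications, not actual vdsm classes.

class Drive:
    def __init__(self, domainID):
        self.domainID = domainID


class Vm:
    def __init__(self):
        self.sdIds = []  # storage domain IDs of disks plugged into the VM

    def hotplugDisk(self, drive):
        self.sdIds.append(drive.domainID)

    def liveMigrateDisk(self, drive, dstDomainID):
        # The bug: the drive moves to a new storage domain, but sdIds
        # still holds the old domain ID. The fix described in the
        # erratum is to update the domains list here as well.
        drive.domainID = dstDomainID

    def hotunplugDisk(self, drive):
        # Raises "ValueError: list.remove(x): x not in list" when the
        # drive's domainID was changed by live storage migration.
        self.sdIds.remove(drive.domainID)
```

Walking through the reproduction steps with this sketch (hotplug, migrate, hotunplug) triggers the same `ValueError` as in the traceback, because `sdIds` still contains the source domain ID while the drive now reports the destination one.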


Note that the disk hotplug operation will also fail (requires a disk that was migrated while it was deactivated and the VM was running).

Expected results:
Hotplug/unplug should work 

Additional info: logs

Comment 1 Elad 2013-12-02 16:30:38 UTC
Created attachment 831685 [details]
logs2 (relevant logs)

Wrong logs were uploaded; attaching the correct ones (logs2).

Comment 5 Leonid Natapov 2013-12-29 14:03:08 UTC
is29. Fixed; tested according to the steps to reproduce.

Comment 6 errata-xmlrpc 2014-01-21 16:22:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0040.html

