Bug 813752 - RFE: Make sure the device is really hot unplugged from guest
Status: CLOSED WONTFIX
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Osier Yang
QA Contact: Virtualization Bugs
Keywords: FutureFeature
Duplicates: 843016 846651
Depends On: 813748 1090918 1093033
Blocks:
Reported: 2012-04-18 07:13 EDT by Daniel Berrange
Modified: 2015-04-12 14:29 EDT (History)
21 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of: 813748
Environment:
Last Closed: 2013-08-13 11:47:49 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Daniel Berrange 2012-04-18 07:13:43 EDT
+++ This bug was initially created as a clone of Bug #813748 +++

Description of problem:

Hot-unplugging a block device is a tricky affair: we issue device_del from the monitor, then wait for the guest to stop using the device, and then we can go ahead and remove the associated drive.

This process can take any amount of time, and may not complete at all.

This doesn't work well with S4 (suspend-to-disk).  Consider the following sequence of events:

1. (host) device_del <block device>
2. (guest) enter s4 state

At this stage, we don't know whether the guest has stopped using the device.  So at the time of resuming the guest, should the device be attached, or should it not be?

Exposing this information to libvirt is necessary for correct guest operation to continue.

This event would be desirable in non-S4 cases as well, so that management software can safely determine that a guest has stopped using the drive before reusing it for other purposes.
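A minimal sketch of what the requested event would let management software do: wait until qemu confirms the unplug before removing the backing drive. This is an illustrative Python helper, not libvirt code; the event format matches QEMU's QMP DEVICE_DELETED event, but the function name and the way event lines are obtained are assumptions.

```python
import json

def device_deleted(event_lines, alias):
    """Return True once a QMP DEVICE_DELETED event for `alias` is seen.

    `event_lines` is any iterable of raw JSON strings as read from the
    QMP socket after issuing device_del for the device."""
    for line in event_lines:
        msg = json.loads(line)
        if (msg.get("event") == "DEVICE_DELETED"
                and msg.get("data", {}).get("device") == alias):
            return True
    return False

# Only after the event arrives is it safe to remove the backing drive.
events = ['{"timestamp": {"seconds": 1406099458, "microseconds": 51584}, '
          '"event": "DEVICE_DELETED", "data": {"device": "virtio-disk1"}}']
print(device_deleted(events, "virtio-disk1"))  # → True
```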
Comment 12 Laine Stump 2012-08-21 15:53:19 EDT
*** Bug 846651 has been marked as a duplicate of this bug. ***
Comment 13 IBM Bug Proxy 2012-08-22 03:50:52 EDT
------- Comment From onmahaja@in.ibm.com 2012-08-22 07:33 EDT-------
IMHO, this is not a bug.

From what I learned from the code, there are three flags associated with attaching/detaching devices:

VIR_DOMAIN_AFFECT_CURRENT -> the device change is applied based on the current domain state.
VIR_DOMAIN_AFFECT_LIVE -> the device is allocated to the active domain instance only
and is not added to the persisted domain configuration.
VIR_DOMAIN_AFFECT_CONFIG -> the device is allocated to the persisted domain
configuration only.

So, in the case of attach-disk/detach-disk:
(1) You attach the disk with the --persistent option (which maps to VIR_DOMAIN_AFFECT_CONFIG,
so the change is applied to the persisted domain configuration only).
(2) You then detach the disk without --persistent (which maps to VIR_DOMAIN_AFFECT_LIVE,
so the change is applied to the current domain state only, not to the persisted
domain configuration). This is the root cause of the error.
(3) You attach the disk again with --persistent (which maps to VIR_DOMAIN_AFFECT_CONFIG):
the same configuration as in (1) is applied to the persisted domain configuration,
and it fails because the disk was never removed from the persisted domain config in (2).

Please try with "--persistent" in all three steps, or without "--persistent" in all
three steps.
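The mismatch described above can be modelled with libvirt's flag values. The bit values below match libvirt's virDomainModificationImpact enum, but the layers_touched helper is a hypothetical illustration, not a libvirt API:

```python
# Bit values as in libvirt's virDomainModificationImpact enum.
VIR_DOMAIN_AFFECT_LIVE   = 1 << 0  # active domain instance only
VIR_DOMAIN_AFFECT_CONFIG = 1 << 1  # persisted configuration only

def layers_touched(flags):
    """Hypothetical helper: the layers an attach/detach call modifies."""
    layers = set()
    if flags & VIR_DOMAIN_AFFECT_LIVE:
        layers.add("live")
    if flags & VIR_DOMAIN_AFFECT_CONFIG:
        layers.add("config")
    return layers

# Step (1): attach with --persistent -> CONFIG only.
# Step (2): detach without --persistent -> LIVE only.
attached = layers_touched(VIR_DOMAIN_AFFECT_CONFIG)
detached = layers_touched(VIR_DOMAIN_AFFECT_LIVE)
leftover = attached - detached  # layers where the disk definition survives
print(leftover)  # → {'config'}, which is why the second attach fails
```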

Tried to reproduce the bug, and it behaved as described above:

[images]# virsh list --all
Id    Name                           State
----------------------------------------------------
4     vm3                            running

[images]# virsh attach-disk 4  /tmp/test.img vdb --persistent
Disk attached successfully

[images]# virsh detach-disk 4  vdb
Disk detached successfully

[images]# virsh attach-disk 4  /tmp/test.img vdb --persistent
error: Failed to attach disk
error: invalid argument: target vdb already exists.

As expected.

Now let us try with the "--persistent" option in all steps:
[images]# virsh attach-disk 4  /tmp/test.img vdc --persistent
Disk attached successfully

[images]# virsh detach-disk 4 vdc --persistent
Disk detached successfully

[images]# virsh attach-disk 4  /tmp/test.img vdc --persistent
Disk attached successfully

Also, if the disk is formatted after attachment, then detached and reattached, the results
are similar:

[images]# virsh list --all
Id    Name                           State
----------------------------------------------------
6     rhel7a2                        running

[images]# virsh attach-disk 6 /tmp/test.img vdc --persistent
Disk attached successfully
=============================================================
Formatted the newly attached disk with ext4 from the guest, mounted it,
and am now detaching:

[images]# virsh detach-disk 6 vdc --persistent
Disk detached successfully

I can see some block I/O error messages on the guest console, which is expected, since
the disk was detached while its filesystem was still mounted.

Now re-attaching the disk:
[images]# virsh attach-disk 6 /tmp/test.img vdc --persistent
Disk attached successfully

Also, on the host I loop-mounted the image to verify that the filesystem created in the guest
exists in the image - and yes, it does:

[images]# mount -o loop  /tmp/test.img temp/
[images]# cd temp/
[temp]# ls
lost+found

Now try without "--persistent": same results -

[temp]# virsh list --all
Id    Name                           State
----------------------------------------------------
9     rhel7a2                        running

[temp]# virsh attach-disk 9 /tmp/test2.img vde
Disk attached successfully

Format the disk in the guest, mount it, and then detach:
[temp]# virsh detach-disk 9 vde
Disk detached successfully

[temp]# virsh attach-disk 9 /tmp/test2.img vde
Disk attached successfully

This behavior needs to be properly documented somewhere for future reference.

-- Onkar

------- Comment From onmahaja@in.ibm.com 2012-08-22 07:45 EDT-------
Summary of comment 17:

Please try with "--persistent" in all the steps [step 1) attach, step 2) detach, step 3) attach], or without "--persistent" in all three steps. In the former case, your configuration will persist across domain power cycles; in the latter, your attachment/detachment will not persist across domain power cycles.

-- Onkar
Comment 14 IBM Bug Proxy 2012-08-28 00:50:29 EDT
------- Comment From onmahaja@in.ibm.com 2012-08-28 04:43 EDT-------
Hi Redhat,
Following the discussion in comment 17 & comment 18, can we proceed towards closing this
bug as NOT_A_BUG?

-- Onkar
Comment 16 IBM Bug Proxy 2012-09-03 01:00:28 EDT
------- Comment From onmahaja@in.ibm.com 2012-09-03 04:55 EDT-------
Hi Redhat,
There has been no response on this BZ for a long time - I assume that this bug can be CLOSED and that my suggestions in Comment 17 and Comment 18 are accepted. So proceeding towards closure of this bug.

-- Onkar

Comment 17 Dave Allan 2012-09-04 09:56:18 EDT
Onkar, what BZs are your comment 17 and comment 18 in?
Comment 18 Osier Yang 2012-09-25 05:35:15 EDT
*** Bug 843016 has been marked as a duplicate of this bug. ***
Comment 19 Osier Yang 2012-09-25 05:55:25 EDT
Watching for an event from qemu is actually only part of the solution, so I changed
the bug subject to make it clearer.
Comment 21 Dave Allan 2013-08-13 11:47:49 EDT
The underlying qemu work is not going to make 6.5, so I'm closing as WONTFIX.
Comment 22 Xuesong Zhang 2014-07-23 03:13:37 EDT
Verified this bug on RHEL 6.6, since the underlying qemu bug 813748 is now fixed.

Package version:
libvirt-0.10.2-41.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.430.el6.x86_64
kernel-2.6.32-492.el6.x86_64

Steps:
1. Hot-plug a disk with --persistent:
# virsh attach-disk rhel66 /var/lib/libvirt/images/test.img vdb --persistent
Disk attached successfully

2. Log into the guest and mount the hot-plugged disk:
# mount /dev/vdb /mnt
# ls /mnt
lost+found

3. On the host, hot-unplug the disk:
# virsh detach-disk rhel66 vdb
Disk detached successfully

4. Check in the guest: the hot-plugged disk has disappeared.

5. Check the libvirtd log to make sure the new qemu command { "execute": "query-pci" } and the new qemu event "event": "DEVICE_DELETED" are both included:

# tailf /var/log/libvirt/libvirtd_debug.log
......
2014-07-23 06:24:30.148+0000: 9105: debug : qemuMonitorIOWrite:463 : QEMU_MONITOR_IO_WRITE: mon=0x7f9ecc01cc30 buf={"execute":"query-pci","id":"libvirt-6"}^M
 len=42 ret=42 errno=11
......
2014-07-23 07:10:58.051+0000: 9105: debug : qemuMonitorIOProcess:355 : QEMU_MONITOR_IO_PROCESS: mon=0x7f9ed0055510 buf={"timestamp": {"seconds": 1406099458, "microseconds": 51584}, "event": "DEVICE_DELETED", "data": {"device": "virtio-disk1"}}
......
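The check in step 5 can be scripted against the debug log. A rough sketch, assuming the log lines look exactly like the excerpt above; the parsing regex and function name are illustrative:

```python
import json
import re

def scan_libvirtd_log(lines):
    """Report whether the query-pci command and the DEVICE_DELETED
    event both appear in libvirtd debug log lines."""
    saw_query_pci = saw_device_deleted = False
    for line in lines:
        if '"execute":"query-pci"' in line:
            saw_query_pci = True
        m = re.search(r'buf=(\{"timestamp".*\})', line)
        if m and json.loads(m.group(1)).get("event") == "DEVICE_DELETED":
            saw_device_deleted = True
    return saw_query_pci, saw_device_deleted

# Sample lines modelled on the log excerpt above.
log = [
    'debug : qemuMonitorIOWrite:463 : QEMU_MONITOR_IO_WRITE: '
    'buf={"execute":"query-pci","id":"libvirt-6"}',
    'debug : qemuMonitorIOProcess:355 : QEMU_MONITOR_IO_PROCESS: '
    'buf={"timestamp": {"seconds": 1406099458, "microseconds": 51584}, '
    '"event": "DEVICE_DELETED", "data": {"device": "virtio-disk1"}}',
]
print(scan_libvirtd_log(log))  # → (True, True)
```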
