Bug 876993 - qemu-kvm: VMs become non-responsive during disk migration load from 2 domains to a 3rd
Summary: qemu-kvm: VMs become non-responsive during disk migration load from 2 domains to a 3rd
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.3
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Paolo Bonzini
QA Contact: Raz Tamir
Docs Contact: Jiri Herrmann
URL:
Whiteboard:
Depends On:
Blocks: 1173188 1359965 1364808
 
Reported: 2012-11-15 13:38 UTC by Dafna Ron
Modified: 2017-03-21 09:34 UTC
CC List: 26 users

Fixed In Version: qemu-kvm-0.12.1.2-2.499.el6
Doc Type: Bug Fix
Doc Text:
Quiescing disks after virtual disk migration no longer causes the guest to stop responding

When a high number of virtual disk migrations was active at the same time, the guest virtual machine in some cases became unresponsive, because the *QEMU* service was attempting to quiesce all disks on the guest. With this update, *QEMU* only quiesces the source disk whose migration is finishing, which prevents the problem from occurring.
Clone Of:
Environment:
Last Closed: 2017-03-21 09:34:41 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
logs (3.95 MB, application/x-gzip)
2012-11-15 13:38 UTC, Dafna Ron


Links
System ID: Red Hat Product Errata RHSA-2017:0621 (not private)
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: qemu-kvm security and bug fix update
Last Updated: 2017-03-21 12:28:31 UTC

Description Dafna Ron 2012-11-15 13:38:20 UTC
Created attachment 645648 [details]
logs

Description of problem:

I moved ~20 VMs' disks (20 different VMs) from 2 domains to a 3rd domain.
The VMs became non-responsive.
Federico investigated with the libvirt and qemu people, and it seems that libvirt is waiting for a reply from qemu to its block stats queries, which causes the VMs to become non-responsive.
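
(For context: the block stats queries mentioned above go through libvirt's virDomainBlockStats API, the call behind "virsh domblkstat". A minimal sketch of that polling call using libvirt-python; the domain name and disk target here are made up for illustration:)

import libvirt

conn = libvirt.open("qemu:///system")  # local hypervisor connection
dom = conn.lookupByName("example-vm")  # hypothetical domain name

# Returns (rd_req, rd_bytes, wr_req, wr_bytes, errs). If the QEMU
# monitor is not replying, this call stalls too - which is what vdsm sees.
stats = dom.blockStats("vda")
print("rd_req=%d rd_bytes=%d wr_req=%d wr_bytes=%d errs=%d" % stats)
conn.close()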

Version-Release number of selected component (if applicable):

qemu-img-rhev-0.12.1.2-2.295.el6_3.5.x86_64
libvirt-0.9.10-21.el6_3.5.x86_64
vdsm-4.9.6-42.0.el6_3.x86_64

How reproducible:


Steps to Reproduce:

create the following setup:

create 3 iscsi storage domains, 100GB each
create 20 pool VMs from an XP OS template on 1 domain
create 2 VMs from the same template, one as thin provision and one as a clone of the template, on the second domain

1. Run all VMs on 2 hosts
2. select each VM and move the disk to the 3rd domain
  
Actual results:

VMs become non-responsive because queries to qemu are delayed due to the load


Expected results:

VMs should not become non-responsive

Additional info:
logs

Comment 2 Ademar Reis 2012-12-20 19:54:58 UTC
Paolo: do you think it's just a matter of adjusting the timeouts (in libvirt), or should we change the behavior of the commands involved?

Comment 4 Chao Yang 2012-12-21 09:55:24 UTC
I tried with my RHEV-M setup, but could not reproduce.

Scenario 1:
1. create VMs on iscsi storage A
2. start them
3. move the disks to iscsi storage B by clicking 'Move' in the 'Disks' tab
Result:
RHEV-M reported that the disks had finished moving to the other domain

Scenario 2:
Repeat the above steps on NFS storage
Result:
RHEV-M reported that it had failed to move the disks to the other domain

I will try with more VMs to increase the load and see if it is reproducible next week; I am also appending some questions here.

Hi Dafna,
Can you please tell:
1. is a template necessary to reproduce your issue?
2. did all of the VMs become non-responsive in your case, or just some of them?
3. in my case, I don't see any "__com.redhat_drive-reopen" event in libvirtd.log, but I do see "blockdev-snapshot-sync" as well as "__com.redhat_drive-mirror" when the disk was moved to the new domain. It seems it failed to open the disk path. Is there any chance we hit the same issue?

Comment 5 Chao Yang 2012-12-21 09:56:23 UTC
Are there any steps I am missing?

Comment 6 Dafna Ron 2012-12-23 08:29:02 UTC
(In reply to comment #5)
> Are there any steps I am missing?

Yes.

1. you need 3 domains, not two (look at the bug's headline)
2. run about 20-30 VMs linked to the template
3. install RHEL and Windows OS on the VMs
4. use 2 hosts

Comment 7 Luiz Capitulino 2013-01-07 16:19:40 UTC
Dafna,

Are you running qemu-kvm-0.12.1.2-2.295.el6_3.10, qemu-kvm-0.12.1.2-2.344.el6, or later? The kvm-autotest guys were seeing a somewhat similar issue while testing the fix for bug 881732, so I think it's worth eliminating the possibility of a regression.

Comment 8 Dafna Ron 2013-01-07 16:21:35 UTC
This was tested on:

Version-Release number of selected component (if applicable):

qemu-img-rhev-0.12.1.2-2.295.el6_3.5.x86_64
libvirt-0.9.10-21.el6_3.5.x86_64
vdsm-4.9.6-42.0.el6_3.x86_64

Comment 9 Luiz Capitulino 2013-01-07 17:26:01 UTC
No qemu-kvm version listed. I could guess it's 0.12.1.2-2.295.el6_3.5 because of qemu-img, but it would be good to confirm it.

Comment 10 Dafna Ron 2013-01-08 08:20:37 UTC
(In reply to comment #9)
> No qemu-kvm version listed. I could guess it's 0.12.1.2-2.295.el6_3.5
> because of qemu-img, but it would be good to confirm it.

qemu-kvm version is 0.12.1.2-2.295.el6_3.5

Comment 11 Luiz Capitulino 2013-01-08 11:00:47 UTC
Ok, so this has nothing to do with bug 881732. Thanks.

Comment 12 Stefan Hajnoczi 2013-01-08 16:50:42 UTC
The query-blockstats command is very cheap to execute inside QEMU; it does not acquire resources or perform blocking operations.  There are two issues that could prevent QEMU from replying:
1. The host is under heavy load and the QEMU process is not being scheduled.
2. A QEMU thread is not releasing the global mutex, causing the monitor (iothread) to become unresponsive.

Due to the test environment requirements (oVirt, 20 Windows XP VMs, multiple hosts, etc) I cannot easily reproduce the problem to dig deeper.  It would be easiest if you can provide access to a machine.

From the bug report it's not clear whether only libvirt is hung or whether both libvirt and QEMU are hung.  What does "the vm's become non-responsive" mean?  Does the guest stop responding to network ping?

What happens at the libvirt level when you "move the disk to the 3rd domain"?  Is this invoking the libvirt blockpull command?

Please post the thread backtraces for both libvirtd and the qemu process of a hung VM.  You can collect this by installing the debuginfo packages for libvirtd and qemu-kvm.  Then use "ps aux" to find the libvirtd and relevant qemu processes and the pstack(1) command to collect backtraces.

Thanks,
Stefan
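
(A rough helper for the collection step described above - assuming pgrep and pstack(1) are available and the process names match; adjust as needed:)

import subprocess

def pids_of(pattern):
    # pgrep -f matches against the full command line; no match -> exit code 1
    try:
        out = subprocess.check_output(["pgrep", "-f", pattern]).decode()
    except subprocess.CalledProcessError:
        return []
    return [int(p) for p in out.split()]

for name in ("libvirtd", "qemu-kvm"):
    for pid in pids_of(name):
        # pstack prints one backtrace per thread of the target process
        trace = subprocess.check_output(["pstack", str(pid)]).decode()
        with open("pstack-%s-%d.txt" % (name, pid), "w") as f:
            f.write(trace)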

Comment 13 Chao Yang 2013-01-09 08:12:03 UTC
I have been working on this issue these days, but have had no luck reproducing it. I have 18 VMs in total running on the 1st storage domain and 2 on the 2nd. I then moved all OS disks to the 3rd storage domain. After waiting for a while, most of them had migrated successfully; some failed with the error below [failure-1] but stayed in the UP state, and 2 VMs PAUSED with the error below [failure-2].

***[failure-1]***
" �<11>vdsm vm.Vm ERROR vmId=`5333c042-6140-437a-9b5d-94f4fce776ba`::Unable to prepare the volume path for the disk: vdb#012Traceback (most recent call last):#012  File "/usr/share/vdsm/libvirtvm.py", line 1860, in snapshot#012    self.cif.prepareVolumePath(newDrives[vmDevName])#012  File "/usr/share/vdsm/clientIF.py", line 278, in prepareVolumePath#012    raise vm.VolumeError(drive)#012VolumeError: Bad volume specification {'domainID': 'a6fb82ba-8ed1-4665-aa18-6c683e6e5344', 'name': u'vdb', 'format': 'cow', 'volumeID': '8abf7f6b-df39-4e78-895c-95978fc83313', 'imageID': 'e3b3c87a-9a52-4220-9755-83e312045ad8', 'poolID': '2a58c701-8bab-4613-8b33-2d8c862f7355', 'device': 'disk'}"

***[failure-2]***
Failed to complete Snapshot Auto-generated for Live Storage Migration of xxx creation for VM xxxx
User admin@internal have failed to move the disk xxx to domain xxx

Related packages:
qemu-kvm-rhev-0.12.1.2-2.348.el6.x86_64
libvirt-0.10.2-11.el6.x86_64
vdsm-4.10.2-1.0.el6.x86_64

Steps:
1. start 18 VMs on 1st storage domain on host1
2. start 2 VMs on 2nd storage domain on host2, VMs are created from the same template
3. move the OS disks to the 3rd storage domain one by one

I will retry this scenario with all VMs created from the template.

Comment 14 Dafna Ron 2013-01-09 08:20:08 UTC
(In reply to comment #13)
> I have been working on this issue these days, but have had no luck
> reproducing it. [...]
> I will retry this scenario with all VMs created from the template.

It sounds like you are creating independent VMs (not linked to the template).
Also, run the VMs sporadically - I did not assign a specific VM to a specific host.

Comment 15 Paolo Bonzini 2013-01-09 14:29:57 UTC
From earlier discussion (on IRC) with Federico and Dafna, the reason is that there is extremely high I/O load.

Flush operations time out in the guest, and as a result qemu_aio_flush() is called from the IDE layer.  These flushes can take up to 60-90 seconds, during which the monitor is blocked.

I asked Dafna at the time about logs from the targets, or network usage statistics, or anything that could help us understand the root cause.
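
(As a toy model of the stall - not QEMU code, just an illustration of a long synchronous flush holding the same global lock the monitor needs:)

import threading, time

qemu_global_mutex = threading.Lock()

def ide_flush():
    # stands in for qemu_aio_flush(): a synchronous flush that can take
    # 60-90 seconds under heavy I/O load, holding the mutex throughout
    with qemu_global_mutex:
        time.sleep(5)  # shortened for the example

def monitor_query_blockstats():
    # the monitor thread needs the same mutex, so it cannot reply until
    # the flush finishes - from outside, the VM looks non-responsive
    with qemu_global_mutex:
        return {"rd_operations": 0, "wr_operations": 0}

t = threading.Thread(target=ide_flush)
t.start()
time.sleep(0.1)  # let the flush grab the lock first
print(monitor_query_blockstats())  # blocks ~5s, like the stalled query
t.join()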

Comment 16 Chao Yang 2013-01-10 03:52:26 UTC
Retried with Dafna's instructions:
1. copy templates to 3 storage domains
2. create pool and define the number of VMs linked to template, target 1st storage domain
3. repeat step 1 with different template, target 2nd storage domain
4. move all of disks to 3rd storage domain one by one

Actual Result:
All disks failed to move to the 3rd storage domain; the failure log is below [failure].

I will retry with fewer VMs but high I/O inside them, then update here.

***[failure]***
2013-Jan-10, 10:31 - User admin@internal have failed to move the disk a-rhel6u4-64bit_Disk1 to domain storage-on-amd.
2013-Jan-10, 10:30 - User admin@internal failed to move the disk a-rhel6u4-64bit_Disk1 to domain storage-on-amd.
2013-Jan-10, 10:30 - User admin@internal failed to move the disk a-rhel6u4-64bit_Disk1 to domain storage-on-amd.
2013-Jan-10, 10:27 - User admin@internal moving the disk a-rhel6u4-64bit_Disk1 to domain storage-on-amd.
2013-Jan-10, 10:27 - User admin@internal moving the disk a-rhel6u4-64bit_Disk1 to domain storage-on-amd.
2013-Jan-10, 10:26 - User admin@internal moving the disk a-rhel6u4-64bit_Disk1 to domain storage-on-amd.
2013-Jan-10, 10:26 - User admin@internal moving the disk a-rhel6u4-64bit_Disk1 to domain storage-on-amd.
2013-Jan-10, 10:26 - Snapshot Auto-generated for Live Storage Migration of a-rhel6u4-64bit_Disk1 creation for VM pool-B-rhel6u4-3 has been completed.
2013-Jan-10, 10:25 - User admin@internal moving the disk a-rhel6u4-64bit_Disk1 to domain storage-on-amd.
2013-Jan-10, 10:25 - Snapshot Auto-generated for Live Storage Migration of a-rhel6u4-64bit_Disk1 creation for VM pool-B-rhel6u4-3 was initiated by admin@internal. [64f0d3f5]
2013-Jan-09, 17:53 - VM pool-B-rhel6u4-3 started on Host hp-dl385g7-11 [7175b86a]
2013-Jan-09, 17:52 - VM pool-B-rhel6u4-3 was started by admin@internal (Host: hp-dl385g7-11). [7175b86a]
2013-Jan-09, 17:45 - VM pool-B-rhel6u4-3 was removed from VM Pool pool-B-rhel6u4 by admin@internal. [7f899c12]
2013-Jan-09, 17:44 - VM pool-B-rhel6u4-3 creation has been completed. [7b3488f4]
2013-Jan-09, 17:44 - VM pool-B-rhel6u4-3 creation was initiated by admin@internal. [7b3488f4]

Comment 17 Chao Yang 2013-01-10 03:54:28 UTC
(In reply to comment #16)
> Retried with Dafna's instructions:
> 1. copy templates to 3 storage domains
> 2. create pool and define the number of VMs linked to template, target 1st
> storage domain
> 3. repeat step 1 with different template, target 2nd storage domain

Correction: repeat step 2 with a different template, targeting the 2nd storage domain.
20 VMs on the 1st storage domain, 10 on the 2nd one.
> 4. move all of disks to 3rd storage domain one by one

Comment 18 Paolo Bonzini 2013-01-11 17:35:07 UTC
What storage did you use?  If iSCSI, was the target Linux too?  Can you gather any logs or even just network usage statistics?

Is it possible to get a setup that I can use from my home office to reproduce (and ssh into all RHEV-H machines + hopefully iSCSI targets too)?

Comment 22 juzhang 2013-01-15 02:14:06 UTC
Hi, Paolo

Would you please have a look at comment 21? Any suggestions for KVM QE on reproducing this bug? Thanks.

Comment 23 Paolo Bonzini 2013-01-15 10:03:34 UTC
Hi juzhang,

At this point I'm not even sure this is a qemu-kvm bug.  It may simply be that migration is exhausting the I/O or network bandwidth.  After I manage to reproduce it on chayang's infrastructure (probably tomorrow), I'll summarize what's going on.

Comment 24 juzhang 2013-01-15 10:53:12 UTC
(In reply to comment #23)
> Hi juzhang,
> 
> At this point I'm not even sure this is a qemu-kvm bug.  It may simply be
> that migration is exhausting the I/O or network bandwidth.  After I manage
> to reproduce it on chayang's infrastructure (probably tomorrow), I'll
> summarize what's going on.

OK. Please update this bz if you need further testing from KVM QE.

Comment 31 Paolo Bonzini 2013-05-17 08:25:21 UTC
Dafna, chayang, can you please confirm that the source and destination LUNs are mapped to _different_ disks, i.e. that you are not doing a "fake" migration?

Comment 32 Dafna Ron 2013-05-19 08:03:33 UTC
(In reply to comment #31)
> Dafna, chayang, can you please confirm that the source and destination LUNs
> are mapped to _different_ disks, i.e. that you are not doing a "fake"
> migration?

I'm not doing a fake migration - the LUNs on the dst and src are different.

Comment 37 Ademar Reis 2013-10-01 18:58:44 UTC
Reassigning it back to Paolo.

Comment 55 Paolo Bonzini 2015-12-18 14:08:27 UTC
Yes, it's certainly reproducible.  QEMU is happily sending a lot of flush commands, which are very expensive.  I might have a fix.
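
(One way to watch the flush traffic from the outside is libvirt's extended block stats, which include flush counters - a sketch with hypothetical domain and device names; the key names assume the virDomainBlockStatsFlags API:)

import time
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("example-vm")  # hypothetical domain name

prev = dom.blockStatsFlags("vda", 0)   # cumulative counters
time.sleep(10)
cur = dom.blockStatsFlags("vda", 0)

ops = cur["flush_operations"] - prev["flush_operations"]
ns = cur["flush_total_times"] - prev["flush_total_times"]
print("%d flushes in 10s, avg %.1f ms each" % (ops, ns / max(ops, 1) / 1e6))
conn.close()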

Comment 56 Aharon Canan 2015-12-28 10:16:43 UTC
Managed to reproduce.

From engine.log
===============
2015-12-28 10:05:07,767 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-36) [2a9f7c7] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM VM-19 is not responding.
2015-12-28 10:05:07,827 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-36) [2a9f7c7] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM VM-20 is not responding.
2015-12-28 10:05:07,927 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-36) [2a9f7c7] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM VM-14 is not responding.
2015-12-28 10:05:08,027 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-36) [2a9f7c7] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM VM-15 is not responding.
2015-12-28 10:05:08,059 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-36) [2a9f7c7] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM VM-6 is not responding.
2015-12-28 10:05:08,164 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-36) [2a9f7c7] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM VM-4 is not responding.



Reproduction steps 
==================
create the following setup:
create 3 iscsi storage domains, 100GB each
create 20 pool VMs from a template on 1 domain
create 2 VMs from the same template, one as thin provision and one as a clone of the template, on the second domain

1. Run all VMs on 2 hosts
2. Move the disks to the 3rd domain


Versions
========
vdsm-4.16.31-1.el6ev.x86_64
libvirt-0.10.2-54.el6_7.2.x86_64
qemu-kvm-rhev-0.12.1.2-2.479.el6_7.3.x86_64
qemu-img-rhev-0.12.1.2-2.479.el6_7.3.x86_64
RHEL 6.7 - kernel-2.6.32-573.el6.x86_64

Comment 59 Paolo Bonzini 2016-10-21 12:50:45 UTC
The patches for this are ready, and I will submit them upstream in a week or two, for inclusion in QEMU 2.8.

Comment 63 Paolo Bonzini 2016-12-16 11:59:00 UTC
Cause: completing a disk migration tried to quiesce all disks on the guest.

Consequence: when many migrations were in progress at the same time, quiescence took a long time and could cause I/O aborts in the guest.

Fix: completing a particular disk migration now only quiesces the source disk of that migration job.

Result: quiescence no longer takes a long time to achieve.
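
(In pseudocode, the change narrows the scope of the quiesce - a sketch of the before/after behavior, not the actual QEMU patch:)

class Disk:
    def __init__(self, name):
        self.name = name
    def drain_and_flush(self):
        # stands in for draining/flushing a drive; expensive under load
        print("quiescing %s" % self.name)

class MigrationJob:
    def __init__(self, source_disk):
        self.source_disk = source_disk
    def switch_to_destination(self):
        print("switched %s to destination" % self.source_disk.name)

def complete_migration_old(job, all_disks):
    # pre-fix behavior: quiesce every disk on the guest, so N
    # concurrent migrations caused N guest-wide quiesces
    for disk in all_disks:
        disk.drain_and_flush()
    job.switch_to_destination()

def complete_migration_new(job):
    # post-fix behavior: quiesce only the finishing job's source disk
    job.source_disk.drain_and_flush()
    job.switch_to_destination()

disks = [Disk("vd" + c) for c in "abc"]
complete_migration_new(MigrationJob(disks[0]))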

Comment 64 Yash Mankad 2016-12-22 20:18:30 UTC
Fix included in qemu-kvm-0.12.1.2-2.499.el6

Comment 66 Qianqian Zhu 2016-12-27 01:48:48 UTC
Hi Aharon,

Would you please help verify this bz? KVM QE was not able to reproduce it, and we also have no proper environment for now.
Many thanks.

Qianqian

Comment 67 Raz Tamir 2016-12-27 06:59:55 UTC
I replaced Aharon in managing storage QE.
This bug is verified; none of the VMs became non-responsive during migration.

Comment 68 Chao Yang 2016-12-28 01:08:08 UTC
(In reply to Raz Tamir from comment #67)
> I replaced Aharon in managing storage QE.
> This bug is verified; none of the VMs became non-responsive during migration.

Thanks, Raz

Comment 71 errata-xmlrpc 2017-03-21 09:34:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0621.html

