Bug 1029632 - hosted engine | two qemu processes with the same VM ID run on two different machines, and the sanlock resource is taken on two machines on the same storage
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Assignee: Jiri Denemark
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 1029629 (view as bug list)
Depends On: 1022924
Blocks:
 
Reported: 2013-11-12 19:18 UTC by Jan Kurik
Modified: 2014-03-24 09:48 UTC
CC List: 27 users

Fixed In Version: libvirt-0.10.2-29.el6.1
Doc Type: Bug Fix
Doc Text:
When two clients try to start the same transient domain, libvirt may not properly detect that the domain is already being started. As a result, more than one QEMU process may be running for the same domain without libvirt knowing about them. Fix: libvirt was fixed to properly check whether the same domain is already being started. Result: libvirt avoids starting more than one QEMU process for the same domain.
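The race described in the Doc Text can be sketched as a toy model. This is a minimal Python sketch under simplifying assumptions, not libvirt's actual code; `DomainManager`, `create`, and `reserve_before_start` are hypothetical names used only to illustrate the check-then-act bug and the "reserve the name before the slow start" fix.

```python
import threading
import time

class DomainManager:
    """Toy model of a transient-domain start path (hypothetical, not libvirt code)."""

    def __init__(self, reserve_before_start):
        self.lock = threading.Lock()
        self.domains = {}            # name -> "starting" or "running"
        self.qemu_started = 0        # number of QEMU-like processes launched
        self.reserve_before_start = reserve_before_start

    def create(self, name):
        if self.reserve_before_start:
            # Fixed behavior: reserve the name under the lock *before* the
            # slow start, so a concurrent caller sees it and bails out.
            with self.lock:
                if name in self.domains:
                    return False
                self.domains[name] = "starting"
        else:
            # Buggy behavior: check-then-act, holding nothing across the
            # slow start, so two callers can both pass the check.
            with self.lock:
                if name in self.domains:
                    return False
        time.sleep(0.1)              # simulated slow QEMU startup
        with self.lock:
            self.domains[name] = "running"
            self.qemu_started += 1
        return True

def race(mgr, name="demovm", clients=2):
    """Have several clients try to start the same domain concurrently."""
    threads = [threading.Thread(target=mgr.create, args=(name,))
               for _ in range(clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return mgr.qemu_started
```

In the buggy mode both clients pass the existence check during the simulated startup window and both "launch QEMU" (the orphaned-QEMU symptom); in the fixed mode the second client sees the "starting" placeholder and bails out, so only one process is ever started.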
Clone Of:
Environment:
Last Closed: 2013-11-22 00:26:57 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:1748 0 normal SHIPPED_LIVE libvirt bug fix update 2013-11-21 09:13:34 UTC

Description Jan Kurik 2013-11-12 19:18:39 UTC
This bug has been copied from bug #1022924 and has been proposed
to be backported to 6.5 z-stream (EUS).

Comment 5 Jan Kurik 2013-11-12 19:19:48 UTC
*** Bug 1029629 has been marked as a duplicate of this bug. ***

Comment 9 Jiri Denemark 2013-11-13 09:03:29 UTC
So the real NVR of the package that fixes this bug is libvirt-0.10.2-29.el6.1. It's a bit weird, but rel-eng said there should be no problem with this NVR and it's not worth rebuilding just to give the package a better name.

Comment 10 zhenfeng wang 2013-11-13 13:07:51 UTC
I tried to reproduce this bug this afternoon and got the same libvirtd crash as comment 27 in bug 1022924 while following the same steps as that bug. However, I am not sure whether those steps are sufficient to reproduce this bug. If they are, I will verify this bug with those steps; if not, I hope the developers can give me more suggestions. Thanks.

Comment 11 zhenfeng wang 2013-11-15 06:04:28 UTC
Hi Jiri,
I tried to reproduce this bug with libvirt-0.10.2-29. The reproduction steps were the same as DB's comment 28 in bug 1022924, and I can often hit the following two issues with those steps:
1. The libvirtd crash      -- now tracked by the new bug 1030736
2. Orphaned QEMUs appearing   -- this seems to be the issue this bug describes

Then I updated libvirt to libvirt-0.10.2-29.el6.1 and found the second issue was gone; however, the first issue still exists. So I think I have reproduced this bug with DB's method in comment 28 in bug 1022924, right? BTW, an official 0-day build (libvirt-0.10.2-29.el6.1) has come out to fix this bug, and this bug is already in ON_QA status, so I have to verify it ASAP. On the other hand, DB's comment 28 in bug 1022924 says we'd need to fix the libvirtd crash (bug 1030736) in order to properly test the fix for this bug. So I'm confused: should I verify this bug for the second issue alone and track the libvirtd crash in bug 1030736, or should we hold off verifying this bug until bug 1030736 is fixed? Can you give me some suggestions? Thanks.

Comment 12 zhenfeng wang 2013-11-15 11:08:21 UTC
Confirmed comment 11 with Jiri on IRC: I will verify only the orphaned-QEMU issue on this bug and track the libvirtd crash in bug 1030736.

Verifying this bug on libvirt-0.10.2-29.el6.1. First, I can reproduce this bug on libvirt-0.10.2-29; the reproduction steps are as follows:
1. Prepare a normal guest XML.
2. Connect to the libvirtd server from 3 different remote hosts, then execute the following command. After about 1 minute, the libvirtd server crashes, and we can also see orphaned QEMUs appearing (the libvirtd crash is tracked by bug 1030736):
remote_client# for i in {1..1000}; do virsh -c qemu+ssh://$libvirtd_serverip/system create rheltest3.xml; virsh -c qemu+ssh://$libvirtd_serverip/system destroy rheltest3; done

We can also reproduce this issue with DB's steps in comment 28 in bug 1022924:

Open 3 terminals on the local host, then execute the following command in each:
 # while /bin/true; do virsh create demovm.xml; virsh destroy demovm; done

3. After the commands in step 2 have run for about 1~2 minutes, check QEMU; we can see orphaned QEMUs appearing:
# virsh list --all
 Id    Name                           State
----------------------------------------------------

#
# ps aux|grep qemu
qemu      8483  1.2  0.4 1609588 36712 ?       Sl   05:05   0:19 /usr/libexec/qemu-kvm -name rheltest3 -S -M rhel6.5.0 -enable-kvm -m 1024 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 110bac1d-c864-6ab5-9599-48aa2d23a7e2 -nodefconfig -nodefaults -chardev 
---

Verifying this bug with libvirt-0.10.2-29.el6.1; the verification steps are as follows:

1~2. Same as the reproduction steps.
3. After the commands in step 2 have run for about 1~2 minutes, check QEMU; there are no orphaned QEMUs appearing:
# ps aux|grep qemu
root     30685  0.0  0.0 103252   840 pts/4    S+   06:04   0:00 grep qemu

4. Redid step 3 ten times, then checked QEMU; there were still no orphaned QEMUs appearing. So I mark this bug verified, and I will retest this bug once bug 1030736 is fixed.

Comment 14 errata-xmlrpc 2013-11-22 00:26:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1748.html

Comment 15 Tuomo Soini 2013-12-04 07:39:57 UTC
Source code for this update is not available on ftp.redhat.com.

Comment 16 Michal Privoznik 2013-12-04 08:35:28 UTC
(In reply to Tuomo Soini from comment #15)
> Source code for this update is not available on ftp.redhat.com.

This bug is for 6.5.z. The z-stream updates are distributed only to customers with a subscription, via means other than ftp.redhat.com.

Comment 17 Tuomo Soini 2013-12-04 08:50:02 UTC
6.5 is the current version. I'd understand this for 6.4.z.

