Bug 1302965 - libvirtd hang sometimes while starting up lots of parallel libvirt instances
Summary: libvirtd hang sometimes while starting up lots of parallel libvirt instances
Keywords:
Status: CLOSED DUPLICATE of bug 1348936
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jiri Denemark
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-01-29 07:20 UTC by yafu
Modified: 2016-06-28 18:07 UTC
3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-28 18:07:21 UTC
Target Upstream Version:
Embargoed:


Attachments
backtrace of all stack frames (108.52 KB, text/plain)
2016-01-29 07:51 UTC, yafu
no flags

Description yafu 2016-01-29 07:20:31 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 yafu 2016-01-29 07:37:11 UTC
(In reply to yafu from comment #0)
> [empty bug template]

Sorry for the accidental click. The description is as follows:

Description of problem:
When running test-virt-alignment-scan-guests.sh from libguestfs (https://github.com/libguestfs/libguestfs/blob/master/align/test-virt-alignment-scan-guests.sh, which starts many libvirt instances in parallel), the libvirtd daemon sometimes hangs because of a deadlock in parallel domain event handling.

Version-Release number of selected component (if applicable):
libvirt-1.2.17-13.el7_2.3.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.4.x86_64

How reproducible:
sometimes

Steps to Reproduce:
1. Run test-virt-alignment-scan-guests.sh from libguestfs in a loop:
# while true ; do ./test-virt-alignment-scan-guests.sh ; done

2. Check the output of virsh list at the same time:
# watch virsh list

3. After about 20 hours, the libvirtd daemon hangs and the output of virsh list no longer changes.

4. Use gdb to print the libvirtd backtrace:
(gdb) bt
#0  0x00007f80b0deaf4d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007f80b0de6d02 in _L_lock_791 () from /lib64/libpthread.so.0
#2  0x00007f80b0de6c08 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007f80b3779ba5 in virMutexLock (m=<optimized out>) at util/virthread.c:89
#4  0x00007f80b376122e in virObjectLock (anyobj=<optimized out>) at util/virobject.c:323
#5  0x00007f809abe223c in qemuProcessHandleEvent (mon=<optimized out>, vm=0x7f805c003aa0, eventName=0x7f80b61f6e10 "SHUTDOWN", seconds=1454045255, micros=634477, details=0x0, 
    opaque=0x7f8090101fc0) at qemu/qemu_process.c:645
#6  0x00007f809abfd81e in qemuMonitorEmitEvent (mon=mon@entry=0x7f805c00cb20, event=event@entry=0x7f80b61f6e10 "SHUTDOWN", seconds=1454045255, micros=634477, details=0x0)
    at qemu/qemu_monitor.c:1255
#7  0x00007f809ac11e4c in qemuMonitorJSONIOProcessEvent (obj=0x7f80b62151a0, mon=0x7f805c00cb20) at qemu/qemu_monitor_json.c:165
#8  qemuMonitorJSONIOProcessLine (msg=0x0, line=<optimized out>, mon=0x7f805c00cb20) at qemu/qemu_monitor_json.c:202
#9  qemuMonitorJSONIOProcess (mon=mon@entry=0x7f805c00cb20, data=0x7f80b621f620 "{\"timestamp\": {\"seconds\": 1454045255, \"microseconds\": 634477}, \"event\": \"SHUTDOWN\"}\r\n", 
    len=85, msg=msg@entry=0x0) at qemu/qemu_monitor_json.c:244
#10 0x00007f809abfbff3 in qemuMonitorIOProcess (mon=0x7f805c00cb20) at qemu/qemu_monitor.c:455
#11 qemuMonitorIO (watch=watch@entry=154073, fd=<optimized out>, events=0, events@entry=1, opaque=opaque@entry=0x7f805c00cb20) at qemu/qemu_monitor.c:709
#12 0x00007f80b3738967 in virEventPollDispatchHandles (fds=<optimized out>, nfds=<optimized out>) at util/vireventpoll.c:509
#13 virEventPollRunOnce () at util/vireventpoll.c:658
#14 0x00007f80b3737032 in virEventRunDefaultImpl () at util/virevent.c:308
#15 0x00007f80b387f035 in virNetDaemonRun (dmn=0x7f80b6208860) at rpc/virnetdaemon.c:701
#16 0x00007f80b4444524 in main (argc=<optimized out>, argv=<optimized out>) at libvirtd.c:1577


Actual results:


Expected results:
The libvirtd process should not hang while starting up lots of parallel libvirt instances.

Additional info:
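The backtrace above shows the event-loop thread blocked in virObjectLock on the domain object while dispatching a SHUTDOWN event, with the monitor object already held further up the stack. If another thread takes the domain lock first and then waits on the monitor, the two threads deadlock. Below is a minimal sketch of that opposite-order locking pattern; all names are hypothetical stand-ins, not libvirt code, and pthread_mutex_trylock replaces the blocking lock so the demonstration terminates instead of hanging:

```c
#include <pthread.h>

/* Hypothetical stand-ins for the monitor and domain-object locks
 * (libvirt actually locks ref-counted objects via virObjectLock). */
static pthread_mutex_t mon_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t vm_lock  = PTHREAD_MUTEX_INITIALIZER;
static int contended;

/* Event-handler path: holds the monitor lock, then wants the VM lock,
 * mirroring qemuMonitorEmitEvent -> qemuProcessHandleEvent -> virObjectLock. */
static void *event_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&mon_lock);
    if (pthread_mutex_trylock(&vm_lock) != 0) {
        contended = 1;   /* a blocking pthread_mutex_lock would wait here forever */
    } else {
        pthread_mutex_unlock(&vm_lock);
    }
    pthread_mutex_unlock(&mon_lock);
    return NULL;
}

/* Worker path: takes the VM lock first. If it then needed the monitor lock
 * while the event thread held it, the two threads would deadlock.
 * Returns 1 when the opposite-order acquisition was observed. */
int simulate_lock_contention(void)
{
    pthread_t t;
    contended = 0;
    pthread_mutex_lock(&vm_lock);          /* worker holds the VM lock... */
    pthread_create(&t, NULL, event_thread, NULL);
    pthread_join(t, NULL);                 /* ...so the event thread finds it busy */
    pthread_mutex_unlock(&vm_lock);
    return contended;
}
```

In the sketch the outcome is deterministic because the worker holds vm_lock across the event thread's whole lifetime; in real libvirtd the window is narrow, which matches the roughly 20 hours of looping needed to reproduce the hang.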

Comment 3 yafu 2016-01-29 07:49:05 UTC
Please see the libvirtd process's backtrace of all stack frames in the attachment.

Comment 4 yafu 2016-01-29 07:51:04 UTC
Created attachment 1119375 [details]
backtrace of all stack frames

Comment 5 Jiri Denemark 2016-01-29 11:22:05 UTC
Could you provide debug logs from libvirtd and ideally also full core dump of the stuck libvirtd?

Comment 6 yafu 2016-02-01 04:38:41 UTC
(In reply to Jiri Denemark from comment #5)
> Could you provide debug logs from libvirtd and ideally also full core dump
> of the stuck libvirtd?

Since log_level was 3 when libvirtd hung, I cannot provide debug logs from libvirtd.
Please see the full core dump of libvirtd in the attachment.

Comment 7 Jiri Denemark 2016-06-28 18:07:21 UTC

*** This bug has been marked as a duplicate of bug 1348936 ***

