Bug 1356883 - libvirtd crash seen while attempting to create VM snapshots
Summary: libvirtd crash seen while attempting to create VM snapshots
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: pre-dev-freeze
Assignee: Martin Kletzander
QA Contact: Han Han
URL:
Whiteboard:
Depends On:
Blocks: Gluster-HC-2
 
Reported: 2016-07-15 08:19 UTC by SATHEESARAN
Modified: 2019-04-27 02:28 UTC (History)
CC: 14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
RHEV-RHGS HC
Last Closed: 2017-01-25 09:50:13 UTC


Attachments (Terms of Use)
libvirtd coredump (705.95 KB, application/x-xz) - 2016-07-15 09:10 UTC, SATHEESARAN
vdsm.log from host2 (2.68 MB, application/x-xz) - 2016-07-15 09:13 UTC, SATHEESARAN
vdsm.log from host3 (2.76 MB, application/x-xz) - 2016-07-15 09:14 UTC, SATHEESARAN
engine.log from hosted engine (3.34 MB, application/x-gzip) - 2016-07-15 09:15 UTC, SATHEESARAN

Description SATHEESARAN 2016-07-15 08:19:54 UTC
Description of problem:
-----------------------
HC provides a custom script that helps with VM backup. This script takes a snapshot of each VM, one by one, and syncs the snapshots to the slave.

The setup consisted of 29 VMs. I started a dd workload on all 29 application VMs and, immediately after that, attempted to take snapshots of all 29 VMs.

I observed that a few snapshots failed, and a core dump was generated on one of the nodes.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
libvirt-python-1.2.17-2.el7.x86_64
libvirt-daemon-driver-interface-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-secret-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-kvm-1.2.17-13.el7_2.5.x86_64
libvirt-lock-sanlock-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-storage-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.5.x86_64
libvirt-client-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-network-1.2.17-13.el7_2.5.x86_64

RHEL 7.2 - Kernel - 3.10.0-327.22.2.el7.x86_64

qemu-kvm-tools-rhev-2.3.0-31.el7_2.16.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.16.x86_64
qemu-kvm-common-rhev-2.3.0-31.el7_2.16.x86_64

How reproducible:
-----------------
1 out of 4 attempts

Steps to Reproduce:
-------------------
1. Create 2 glusterfs data domains, each backed by a sharded replica-3 gluster volume
2. Create 29 VMs with their root disks on domain_1 and an additional disk from domain_2
3. Start a dd workload writing 100 files on each of the 29 VMs
4. Initiate snapshots on all the VMs soon after the workload completes
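Steps 3-4 above can be sketched as a dry-run loop. The appvmNN naming follows the event log in this report; the virsh invocation and its flags are an illustrative assumption (in this setup the snapshots were actually driven through the RHV engine/VDSM, not virsh directly), so the loop only prints the commands it would run:

```shell
# Dry run: print the per-VM snapshot command the backup flow would issue,
# one VM at a time. seq -w zero-pads to appvm01..appvm29.
for i in $(seq -w 1 29); do
    echo virsh snapshot-create-as "appvm$i" GLUSTER-Geo-rep-snapshot --disk-only --atomic
done
```

Dropping the leading `echo` would issue the real snapshot requests against a libvirt host.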

Actual results:
---------------
Observed the following:
1. Application VMs went into the Unknown state; events reported that the VMs were not responding
2. Communication issues on all the hosts - "VDSM host2 command failed: Message timeout which can be caused by communication issues"
3. A libvirtd coredump was found on one of the hosts

Expected results:
-----------------
Snapshots should complete successfully on all VMs, with no libvirtd crash.

Comment 1 SATHEESARAN 2016-07-15 08:21:31 UTC
Backtrace from the libvirtd coredump
-------------------------------------

Reading symbols from /usr/sbin/libvirtd...Reading symbols from /usr/lib/debug/usr/sbin/libvirtd.debug...done.
done.

warning: .dynamic section for "/usr/lib64/libsystemd.so.0.6.0" is not at the expected address (wrong library or version mismatch?)

warning: Can't read pathname for load map: Input/output error.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/sbin/libvirtd --listen'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007f91affedafa in virNumaGetMaxCPUs () at util/virnuma.c:378
378	    return NUMA_MAX_N_CPUS;
Missing separate debuginfos, use: debuginfo-install sanlock-lib-3.2.4-2.el7_2.x86_64
(gdb) bt
#0  0x00007f91affedafa in virNumaGetMaxCPUs () at util/virnuma.c:378
#1  0x00007f91affedb41 in virNumaGetNodeCPUs (node=node@entry=0, cpus=cpus@entry=0x7f9193de2610) at util/virnuma.c:259
#2  0x00007f91b0093baf in nodeCapsInitNUMA (sysfs_prefix=sysfs_prefix@entry=0x0, caps=caps@entry=0x7f918c10c550) at nodeinfo.c:2122
#3  0x00007f919741a5d6 in virQEMUCapsInit (cache=0x7f918c146df0) at qemu/qemu_capabilities.c:1058
#4  0x00007f9197454040 in virQEMUDriverCreateCapabilities (driver=driver@entry=0x7f918c20a770) at qemu/qemu_conf.c:903
#5  0x00007f9197496121 in qemuStateInitialize (privileged=true, callback=<optimized out>, opaque=<optimized out>) at qemu/qemu_driver.c:862
#6  0x00007f91b0095ddf in virStateInitialize (privileged=true, callback=callback@entry=0x7f91b0cd3ec0 <daemonInhibitCallback>, opaque=opaque@entry=0x7f91b1291910) at libvirt.c:777
#7  0x00007f91b0cd3f1b in daemonRunStateInit (opaque=0x7f91b1291910) at libvirtd.c:947
#8  0x00007f91b0008182 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#9  0x00007f91ad671dc5 in start_thread (arg=0x7f9193de3700) at pthread_create.c:308
#10 0x00007f91ad39eced in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Comment 5 SATHEESARAN 2016-07-15 08:49:25 UTC
	
Copied the contents from Global Events Tab
------------------------------------------


Jul 15, 2016 8:16:36 AM - VM appvm29 was set to the Unknown status.
Jul 15, 2016 8:16:36 AM - VM appvm21 was set to the Unknown status.
Jul 15, 2016 8:16:36 AM - VM appvm21 was set to the Unknown status.
Jul 15, 2016 8:16:36 AM - VM appvm20 was set to the Unknown status.
Jul 15, 2016 8:16:36 AM - VM appvm20 was set to the Unknown status.
Jul 15, 2016 8:16:36 AM - VM appvm20 was set to the Unknown status.
Jul 15, 2016 8:16:36 AM - VM appvm18 was set to the Unknown status.
Jul 15, 2016 8:16:36 AM - VM appvm18 was set to the Unknown status.
Jul 15, 2016 8:16:36 AM - VM appvm18 was set to the Unknown status.
Jul 15, 2016 8:16:36 AM - VM appvm15 was set to the Unknown status.
Jul 15, 2016 8:16:36 AM - VM appvm15 was set to the Unknown status.
Jul 15, 2016 8:16:36 AM - VM appvm15 was set to the Unknown status.
Jul 15, 2016 8:16:36 AM - VM appvm13 was set to the Unknown status.
Jul 15, 2016 8:16:36 AM - VM appvm13 was set to the Unknown status.
Jul 15, 2016 8:16:36 AM - VM appvm13 was set to the Unknown status.
Jul 15, 2016 8:16:35 AM - VM appvm11 was set to the Unknown status.
Jul 15, 2016 8:16:35 AM - VM appvm11 was set to the Unknown status.
Jul 15, 2016 8:16:35 AM - VM appvm11 was set to the Unknown status.
Jul 15, 2016 8:16:35 AM - VM appvm09 was set to the Unknown status.
Jul 15, 2016 8:16:35 AM - VM appvm09 was set to the Unknown status.
Jul 15, 2016 8:16:35 AM - VM appvm04 was set to the Unknown status.
Jul 15, 2016 8:16:35 AM - VM appvm09 was set to the Unknown status.
Jul 15, 2016 8:16:35 AM - VM appvm04 was set to the Unknown status.
Jul 15, 2016 8:16:35 AM - VM appvm04 was set to the Unknown status.
Jul 15, 2016 8:16:35 AM - Failed to create live snapshot 'GLUSTER-Geo-rep-snapshot' for VM 'appvm29'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
Jul 15, 2016 8:16:35 AM - VDSM host2 command failed: Vds timeout occured
Jul 15, 2016 8:16:35 AM - VDSM host2 command failed: Message timeout which can be caused by communication issues
Jul 15, 2016 8:14:14 AM - Failed to complete snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm02'.
Jul 15, 2016 8:14:01 AM - Failed to complete snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm03'.
Jul 15, 2016 8:14:00 AM - Failed to complete snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm15'.
Jul 15, 2016 8:13:58 AM - Failed to complete snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm18'.
Jul 15, 2016 8:13:57 AM - Failed to complete snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm21'.
Jul 15, 2016 8:13:55 AM - Failed to complete snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm16'.
Jul 15, 2016 8:13:51 AM - Failed to complete snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm04'.
Jul 15, 2016 8:13:48 AM - Failed to complete snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm11'.
Jul 15, 2016 8:13:47 AM - Failed to complete snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm06'.
Jul 15, 2016 8:13:45 AM - Failed to complete snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm22'.
Jul 15, 2016 8:13:44 AM - Failed to complete snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm13'.
Jul 15, 2016 8:13:43 AM - Failed to complete snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm12'.
Jul 15, 2016 8:13:43 AM - Status of host host1 was set to Up.
Jul 15, 2016 8:13:42 AM - Failed to complete snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm09'.
Jul 15, 2016 8:13:42 AM - Manually synced the storage devices from host host1
Jul 15, 2016 8:13:41 AM - Failed to complete snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm10'.
Jul 15, 2016 8:13:40 AM - Failed to complete snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm20'.
Jul 15, 2016 8:13:31 AM - Failed to create live snapshot 'GLUSTER-Geo-rep-snapshot' for VM 'appvm15'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
Jul 15, 2016 8:13:31 AM - Failed to create live snapshot 'GLUSTER-Geo-rep-snapshot' for VM 'appvm18'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
Jul 15, 2016 8:13:31 AM - Failed to create live snapshot 'GLUSTER-Geo-rep-snapshot' for VM 'appvm21'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
Jul 15, 2016 8:13:31 AM - Failed to create live snapshot 'GLUSTER-Geo-rep-snapshot' for VM 'appvm16'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
Jul 15, 2016 8:13:31 AM - Host host2 is not responding. Host cannot be fenced automatically because power management for the host is disabled.
Jul 15, 2016 8:13:31 AM - Failed to create live snapshot 'GLUSTER-Geo-rep-snapshot' for VM 'appvm13'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
Jul 15, 2016 8:13:31 AM - Failed to create live snapshot 'GLUSTER-Geo-rep-snapshot' for VM 'appvm12'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
Jul 15, 2016 8:13:30 AM - VDSM host2 command failed: Vds timeout occured
Jul 15, 2016 8:13:30 AM - VDSM host2 command failed: Vds timeout occured
Jul 15, 2016 8:13:30 AM - VDSM host2 command failed: Vds timeout occured
Jul 15, 2016 8:13:30 AM - VDSM host2 command failed: Vds timeout occured
Jul 15, 2016 8:13:31 AM - Failed to create live snapshot 'GLUSTER-Geo-rep-snapshot' for VM 'appvm09'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
Jul 15, 2016 8:13:31 AM - Failed to create live snapshot 'GLUSTER-Geo-rep-snapshot' for VM 'appvm11'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
Jul 15, 2016 8:13:31 AM - Failed to create live snapshot 'GLUSTER-Geo-rep-snapshot' for VM 'appvm20'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
Jul 15, 2016 8:13:31 AM - Host host1 is not responding. Host cannot be fenced automatically because power management for the host is disabled.
Jul 15, 2016 8:13:31 AM - Failed to create live snapshot 'GLUSTER-Geo-rep-snapshot' for VM 'appvm02'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
Jul 15, 2016 8:13:31 AM - Failed to create live snapshot 'GLUSTER-Geo-rep-snapshot' for VM 'appvm10'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
Jul 15, 2016 8:13:31 AM - Failed to create live snapshot 'GLUSTER-Geo-rep-snapshot' for VM 'appvm22'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
Jul 15, 2016 8:13:31 AM - Failed to create live snapshot 'GLUSTER-Geo-rep-snapshot' for VM 'appvm06'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
Jul 15, 2016 8:13:30 AM - VDSM host1 command failed: Vds timeout occured
Jul 15, 2016 8:13:30 AM - VDSM host2 command failed: Vds timeout occured
Jul 15, 2016 8:13:30 AM - VDSM host2 command failed: Vds timeout occured
Jul 15, 2016 8:13:30 AM - VDSM host1 command failed: Vds timeout occured
Jul 15, 2016 8:13:30 AM - VDSM host2 command failed: Vds timeout occured
Jul 15, 2016 8:13:30 AM - VDSM host1 command failed: Vds timeout occured
Jul 15, 2016 8:13:30 AM - Failed to create live snapshot 'GLUSTER-Geo-rep-snapshot' for VM 'appvm03'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
Jul 15, 2016 8:13:30 AM - VDSM host2 command failed: Vds timeout occured
Jul 15, 2016 8:13:30 AM - Failed to create live snapshot 'GLUSTER-Geo-rep-snapshot' for VM 'appvm04'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
Jul 15, 2016 8:13:30 AM - VDSM host1 command failed: Vds timeout occured
Jul 15, 2016 8:13:30 AM - VDSM host2 command failed: Vds timeout occured
Jul 15, 2016 8:13:30 AM - VDSM host1 command failed: Vds timeout occured
Jul 15, 2016 8:13:30 AM - VDSM host1 command failed: Vds timeout occured
Jul 15, 2016 8:13:30 AM - VDSM host1 command failed: Message timeout which can be caused by communication issues
Jul 15, 2016 8:13:30 AM - VDSM host1 command failed: Message timeout which can be caused by communication issues
Jul 15, 2016 8:13:30 AM - VDSM host2 command failed: Message timeout which can be caused by communication issues
Jul 15, 2016 8:12:13 AM - Snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm27' has been completed.
Jul 15, 2016 8:12:05 AM - Snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm26' has been completed.
Jul 15, 2016 8:11:52 AM - Snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm25' has been completed.
Jul 15, 2016 8:11:42 AM - VM appvm25 is not responding.
Jul 15, 2016 8:11:42 AM - VM appvm03 is not responding.
Jul 15, 2016 8:11:42 AM - VM appvm26 is not responding.
Jul 15, 2016 8:11:42 AM - VM appvm16 is not responding.
Jul 15, 2016 8:11:42 AM - VM appvm02 is not responding.
Jul 15, 2016 8:11:42 AM - VM appvm10 is not responding.
Jul 15, 2016 8:11:42 AM - VM appvm27 is not responding.
Jul 15, 2016 8:11:42 AM - VM appvm22 is not responding.
Jul 15, 2016 8:11:41 AM - VM appvm12 is not responding.
Jul 15, 2016 8:11:41 AM - VM appvm06 is not responding.
Jul 15, 2016 8:11:21 AM - Snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm29' was initiated by admin@internal.
Jul 15, 2016 8:11:18 AM - Snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm28' was initiated by admin@internal.
Jul 15, 2016 8:11:15 AM - Snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm27' was initiated by admin@internal.
Jul 15, 2016 8:11:11 AM - Snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm26' was initiated by admin@internal.
Jul 15, 2016 8:11:08 AM - Snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm25' was initiated by admin@internal.
Jul 15, 2016 8:11:05 AM - Snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm24' was initiated by admin@internal.
Jul 15, 2016 8:11:03 AM - Snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm23' was initiated by admin@internal.
Jul 15, 2016 8:10:58 AM - Snapshot 'GLUSTER-Geo-rep-snapshot' creation for VM 'appvm22' was initiated by admin@internal.
Comment 6 SATHEESARAN 2016-07-15 08:50:20 UTC
The actual issue started after 08:10 AM on July 15.
Please refer to the logs after this timestamp.

Comment 7 SATHEESARAN 2016-07-15 09:10:18 UTC
Created attachment 1180075 [details]
libvirtd coredump

Comment 8 SATHEESARAN 2016-07-15 09:13:59 UTC
Created attachment 1180076 [details]
vdsm.log from host2

Comment 9 SATHEESARAN 2016-07-15 09:14:46 UTC
Created attachment 1180077 [details]
vdsm.log from host3

Comment 10 SATHEESARAN 2016-07-15 09:15:57 UTC
Created attachment 1180079 [details]
engine.log from hosted engine

Comment 11 SATHEESARAN 2016-07-15 09:17:24 UTC
(In reply to SATHEESARAN from comment #6)
> The actual issue started after July 15, 08.10 AM
> Please refer to the logs after this timestamp

July 15, 08:10 AM IST (approx.)
and
July 14, 10:40 PM EDT (approx.)

Comment 13 Tomas Jelinek 2016-07-18 11:11:54 UTC
(In reply to SATHEESARAN from comment #1)
> [backtrace from the libvirtd coredump, quoted in full from comment #1 above]
@Jiri: this seems a lot like a libvirt bug - is it a known issue?

Comment 14 Jiri Denemark 2016-07-18 15:45:46 UTC
No, I'm not aware of an existing bug in this area; you likely hit something new...

Comment 16 Tomas Jelinek 2016-07-19 06:21:18 UTC
ok, moved to libvirt

Comment 17 Martin Kletzander 2016-07-27 13:13:55 UTC
Either the backtrace from the coredump is incorrect or it's a bug in the libnuma (numactl) code.  NUMA_MAX_N_CPUS is defined as (numa_all_cpus_ptr->size), so a segfault at that line would mean the pointer is NULL.  And that can't be true, because libnuma would exit() if it failed the allocation.

Would you mind reproducing and then capturing a full backtrace right away (that is, the "thread apply all bt full" command in gdb), as well as printing the value of numa_all_cpus_ptr, in case that backtrace looks similar?  Thanks a lot in advance.

Comment 19 Martin Kletzander 2017-01-25 09:50:13 UTC
Closing due to not enough information. If the bug persists, please create a new BZ with all the requested information already attached.

Comment 20 SATHEESARAN 2019-04-27 02:28:04 UTC
We are no longer seeing this problem with the latest RHV 4.3 and RHGS 3.4.4

