Bug 1496517 - Calling virDomain.getMemoryStats and virDomain.diskErrors may block the entire process
Summary: Calling virDomain.getMemoryStats and virDomain.diskErrors may block the entire process
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt-python
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 7.5
Assignee: Pavel Hrdina
QA Contact: lcheng
URL:
Whiteboard:
Depends On:
Blocks: 1481022 1497195 1497197
 
Reported: 2017-09-27 16:03 UTC by Nir Soffer
Modified: 2018-04-10 11:08 UTC
CC List: 17 users

Fixed In Version: libvirt-python-3.8.0-1.el7
Doc Type: Bug Fix
Doc Text:
When the storage of a guest hosted on a Red Hat Virtualization Hypervisor system became inaccessible, the host in some cases became unresponsive. This update makes the libvirt-python binding release the GIL while the virDomainMemoryStats() function runs, which prevents the described problem from occurring.
Clone Of:
Clones: 1497195 1497197 (view as bug list)
Environment:
Last Closed: 2018-04-10 11:07:31 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
test scripts (1.10 KB, text/plain), 2017-10-24 11:22 UTC, lcheng
test xml (3.28 KB, text/plain), 2017-10-24 11:22 UTC, lcheng


Links
Red Hat Product Errata RHEA-2018:0707 (last updated 2018-04-10 11:08:26 UTC)

Description Nir Soffer 2017-09-27 16:03:46 UTC
Description of problem:

If libvirt is blocked because of inaccessible storage (e.g. in
virConnectGetAllDomainStats), calling virDomain.getMemoryStats or
virDomain.diskErrors may block the entire process for 30 seconds.

The issue is that the binding does not release the GIL when calling
virDomainMemoryStats() and virDomainGetDiskErrors().

Version-Release number of selected component (if applicable):
- virDomainMemoryStats issue exists since it was introduced in 2009
  commit d8d01cc94ce4a2955ef272319429dfd5a3e9df01
- virDomainGetDiskErrors issue exists since it was introduced in 2012
  commit 340ef52421ef1ac903a97b64ab0572da05175970

How reproducible:
Always

Steps to Reproduce:
1. Setup oVirt 4.2
2. Start vm connected to a cd on iso domain
3. Block access to iso domain (e.g. using iptables)
4. In the vdsm log we can see that the process freezes for about 30 seconds every
   minute or so.

Actual results:
The blocking causes loss of heartbeats with the oVirt engine, the host becomes
non-responsive, and the system never recovers.

Expected results:
Only the vm connected to the faulty storage should be affected; other vms and
subsystems on this host should not be affected by the faulty storage.


More info:

Vdsm is calling virConnect.getAllDomainStats every 15 seconds. This call is
blocked for 30 seconds when an iso domain is not accessible, and a vm has a cd
on that iso domain.

Vdsm is calling virDomain.getMemoryStats on another thread every 15 seconds. We
see in the vdsm log that vdsm is frozen for 30 seconds after this call.
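
A minimal standalone sketch of the same pattern (this is not the vdsm code or the
attached test scripts; the connection URI and domain name are placeholders) makes
the process-wide stall visible as gaps in a heartbeat thread's output:

import threading
import time
from datetime import datetime

import libvirt

conn = libvirt.open('qemu:///system')   # placeholder URI
dom = conn.lookupByName('vm1')          # placeholder guest whose storage can be blocked

def heartbeat():
    # Should print once per second; a 30 second gap means the whole process is blocked.
    while True:
        print('heartbeat %s' % datetime.now())
        time.sleep(1)

def all_stats():
    # Mimics vdsm's periodic bulk stats call; blocks while the storage is inaccessible.
    while True:
        try:
            conn.getAllDomainStats()
        except libvirt.libvirtError as e:
            print('getAllDomainStats failed: %s' % e)
        time.sleep(15)

def memory_stats():
    # Without the fix this call does not release the GIL, so while it waits for the
    # stuck libvirt daemon it also freezes the heartbeat thread above.
    while True:
        try:
            dom.memoryStats()
        except libvirt.libvirtError as e:
            print('memoryStats failed: %s' % e)
        time.sleep(15)

for target in (heartbeat, all_stats, memory_stats):
    threading.Thread(target=target).start()

After blocking access to the guest's storage (step 3 above), the heartbeat output
pauses for about 30 seconds whenever memoryStats() runs while getAllDomainStats()
is stuck; with the GIL released around these calls, only the two stats threads wait.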

An example vdsm log shows 30 seconds without any log lines (normally impossible at debug level):

2017-09-26 17:05:42,713+0300 DEBUG (periodic/2) [storage.TaskManager.Task] (Task='9280fb0f-eda3-4838-bb06-c0b7ca151ccb') ref 0 aborting False (task:999)
2017-09-26 17:06:12,726+0300 DEBUG (monitor/410c3f4) [storage.oop] Removing idle ioprocess Global (outOfProcess:76)


Attaching to vdsm shows two libvirt calls waiting:

Thread 17 (Thread 0x7f94a77f6700 (LWP 6775)):
#4 Waiting for a lock (e.g. GIL)
#8 Frame 0x7f94b8017320, for file /usr/lib64/python2.7/site-packages/libvirt.py, line 5239, in getAllDomainStats (self=<virConnect(restore=<function at remote 0x3987848>, nodeDeviceEventRegisterAny=<function at remote 0x39855f0>, nodeDeviceLookupSCSIHostByWWN=<function at remote 0x39857d0>, listAllSecrets=<function at remote 0x3982b90>, saveImageGetXMLDesc=<function at remote 0x3987b18>, virConnGetLastError=<function at remote 0x3989e60>, listDomainsID=<function at remote 0x39842a8>, storagePoolLookupByUUIDString=<function at remote 0x39899b0>, domainEventCallbackID={0: 0, 1: 1, 2: 2, 3: 6, 4: 5, 5: 8, 6: 3, 7: 21, 8: 15}, getCPUMap=<function at remote 0x3789e60>, setKeepAlive=<function at remote 0x3989230>, pingLibvirt=<instancemethod at remote 0x366d820>, changeBegin=<function at remote 0x3788c08>, _o=<PyCapsule at remote 0x2c896f0>, saveImageDefineXML=<function at remote 0x3987a28>, listDefinedDomains=<function at remote 0x3982d70>, getCapabilities=<function at remote 0x39811b8>, getFreeMemory=<function at re...(truncated)
    ret = libvirtmod.virConnectGetAllDomainStats(self._o, stats, flags)

Thread 3 (Thread 0x7f94bd7fa700 (LWP 7037)):
#10 Frame 0x7f94cc0efc90, for file /usr/lib64/python2.7/site-packages/libvirt.py, line 1449, in memoryStats (self=<virDomain(addIOThread=<function at remote 0x2c1e8c0>, pinIOThread=<function at remote 0x2c316e0>, getTime=<function at remote 0x2c33938>, maxMemory=<function at remote 0x2c08b90>, setBlockThreshold=<function at remote 0x2c29ed8>, securityLabelList=<function at remote 0x2c29a28>, _conn=<virConnect(restore=<function at remote 0x3987848>, nodeDeviceEventRegisterAny=<function at remote 0x39855f0>, nodeDeviceLookupSCSIHostByWWN=<function at remote 0x39857d0>, listAllSecrets=<function at remote 0x3982b90>, saveImageGetXMLDesc=<function at remote 0x3987b18>, virConnGetLastError=<function at remote 0x3989e60>, listDomainsID=<function at remote 0x39842a8>, storagePoolLookupByUUIDString=<function at remote 0x39899b0>, domainEventCallbackID={0: 0, 1: 1, 2: 2, 3: 6, 4: 5, 5: 8, 6: 3, 7: 21, 8: 15}, getCPUMap=<function at remote 0x3789e60>, setKeepAlive=<function at remote 0x3989230>, pingLibvirt=<instancemethod ...(truncated)
    ret = libvirtmod.virDomainMemoryStats(self._o)

We are trying to reproduce this now on oVirt 4.1 (stable version).

I sent a patch for fixing virDomain.getMemoryStats:
https://www.redhat.com/archives/libvir-list/2017-September/msg00939.html

Comment 2 Peter Krempa 2017-09-28 06:54:51 UTC
Fixed upstream:

commit 7af7450b0ac6bd69181f3c8d2360f4f97478191c 
Author: Nir Soffer <nirsof>
Date:   Tue Sep 26 20:05:59 2017 +0300

    Release the GIL during virDomainGetMemoryStats & virDomainGetDiskErrors
    
    We discovered that the entire python process get stuck for about 30
    seconds when calling virDomain.getMemoryStats() if libvirt is stuck in
    virConnect.getAllDomainStats() on inaccessible storage. This blocking
    cause a horrible mess in oVirt.
    
    This patches adds the standard *_ALLOW_THREADS around the call to avoid
    this unwanted blocking.

Comment 13 lcheng 2017-10-24 11:16:04 UTC
According to bug 1497197 comment 5, I can reproduce this bug on libvirt-python-3.7.0-1.el7.x86_64. 

Reproduce steps:
1. Prepare an image on an NFS server and boot a guest using this image in the XML.
2. On console 1, run
# python ./test.py
3. Wait about 30-40s, then on console 2 run
# iptables -A OUTPUT -d [NFS_SERVER] -j DROP

For several minutes, the print output hangs for 30 seconds at a time. Test log as follows:

*****************
44:print time: 2017-10-24 06:39:54.356051

45:print time: 2017-10-24 06:39:55.357165

start getAllDomainStats: 2017-10-24 06:39:55.557711

46:print time: 2017-10-24 06:39:56.358309

47:print time: 2017-10-24 06:39:57.359428

start memoryStats: 2017-10-24 06:39:58.356300    <--------------- here the time

libvirt: QEMU Driver error : Timed out during operation: cannot acquire state change lock (held by remoteDispatchConnectGetAllDomainStats)
48:print time: 2017-10-24 06:40:28.357638

Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner
    self.run()
  File "/usr/lib64/python2.7/threading.py", line 765, in run
    self.__target(*self.__args, **self.__kwargs)
  File "./test.py", line 19, in memory_stats
    dom.memoryStats()
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1504, in memoryStats
    if ret is None: raise libvirtError ('virDomainMemoryStats() failed', dom=self)
libvirtError: Timed out during operation: cannot acquire state change lock (held by remoteDispatchConnectGetAllDomainStats)

49:print time: 2017-10-24 06:40:29.358779    <----------------- here the time

50:print time: 2017-10-24 06:40:30.359898

51:print time: 2017-10-24 06:40:31.361016

**************

Verified on libvirt-python-3.8.0-1.el7.x86_64. The print output finishes without hanging, so moving to VERIFIED.

Comment 14 lcheng 2017-10-24 11:22:04 UTC
Created attachment 1342692 [details]
test scripts

Comment 15 lcheng 2017-10-24 11:22:35 UTC
Created attachment 1342693 [details]
test xml

Comment 18 errata-xmlrpc 2018-04-10 11:07:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0707

