Bug 1496768 - pool-refresh hangs if NFS/iscsi storage is offline, blocking all pool operations
Summary: pool-refresh hangs if NFS/iscsi storage is offline, blocking all pool operations
Status: NEW
Alias: None
Product: Virtualization Tools
Classification: Community
Component: libvirt
Version: unspecified
Hardware: x86_64
OS: Linux
Target Milestone: ---
Assignee: Libvirt Maintainers
QA Contact:
Depends On:
Reported: 2017-09-28 10:47 UTC by Dmitriy S
Modified: 2018-07-18 15:32 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed:

Attachments (Terms of Use)
libvirtd.log (41.63 KB, text/plain)
2017-09-28 10:47 UTC, Dmitriy S

Description Dmitriy S 2017-09-28 10:47:22 UTC
Created attachment 1331921 [details]

Description of problem:
Libvirtd hangs if a pool defined in it becomes inactive. This was verified with both iSCSI and NFS storage. GDB also hangs if you try to attach to the libvirtd process.

Version-Release number of selected component (if applicable):
libvirtd --version
libvirtd (libvirt) 3.2.0
virsh -v
3.2.0
Also reproduced in the same way on version 3.7.0.

Steps to Reproduce:
1. Create the pool (virsh pool-create) from an XML definition such as:
<pool type='netfs'>
  <name>NFS</name>
  <capacity unit='bytes'>24897585152</capacity>
  <allocation unit='bytes'>1155137536</allocation>
  <available unit='bytes'>23742447616</available>
  <source>
    <host name=''/>
    <dir path='/nfs-pool'/>
    <format type='auto'/>
  </source>
</pool>
2. Turn off the physical machine on which the storage is located.
3. virsh pool-refresh NFS

Actual results:
After that, no pool operations are possible in virsh: they all block. Unrelated commands such as virsh list still work.
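The symptom (every pool command blocking while virsh list still works) is consistent with a single storage-driver lock being held across the stuck NFS/iSCSI call. A toy Python illustration of that pattern (a sketch of the locking behaviour only, not libvirt code):

```python
import threading
import time

pool_driver_lock = threading.Lock()  # one lock guarding all pool operations

def pool_refresh(block_seconds):
    # Holds the driver lock for the whole refresh, including any
    # network I/O toward the (possibly dead) storage host.
    with pool_driver_lock:
        time.sleep(block_seconds)    # stand-in for a hung NFS/iSCSI call

def pool_list():
    # Any other pool operation needs the same lock, so it queues
    # behind the stuck refresh.
    with pool_driver_lock:
        return ["NFS"]

# A refresh against dead storage keeps the lock busy...
t = threading.Thread(target=pool_refresh, args=(2.0,), daemon=True)
t.start()
time.sleep(0.1)

# ...so an unrelated pool query cannot even start until it returns.
start = time.time()
pools = pool_list()
waited = time.time() - start
print(f"pool_list waited {waited:.1f}s for the stuck refresh")
```

Operations outside that lock (the equivalent of virsh list, which goes through the domain driver) stay responsive, matching what is observed here.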

Additional info:
Once the storage host is powered back on, libvirtd resumes normal operation.
The libvirtd log captured at the time of the problem is attached.
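Until this is fixed, client-side tooling can at least avoid wedging itself by running pool commands under a timeout. A minimal sketch, using `sleep` as a stand-in for a hanging `virsh pool-refresh NFS` (the real command would be substituted):

```python
import subprocess

def run_with_timeout(cmd, timeout_s):
    """Run cmd, killing it if it exceeds timeout_s; return stdout or None."""
    try:
        res = subprocess.run(cmd, capture_output=True, text=True,
                             timeout=timeout_s)
        return res.stdout
    except subprocess.TimeoutExpired:
        # e.g. ["virsh", "pool-refresh", "NFS"] against dead storage
        return None

# A hanging command (stand-in for the stuck refresh) is reaped:
print(run_with_timeout(["sleep", "10"], timeout_s=1))   # timed out -> None
# A healthy command completes normally:
print(run_with_timeout(["echo", "ok"], timeout_s=5))
```

Note that this only keeps the client responsive; libvirtd itself still holds its internal lock until the storage host comes back.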
