Description of problem:
Blockcommit hangs with unresponsive NFS storage, which also blocks other virsh commands.

Version-Release number of selected component (if applicable):
libvirt-1.3.4-1.el7.x86_64
qemu-kvm-rhev-2.6.0-4.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Soft-mount an NFS storage:
# mount | grep nfs
$IP:/mnt/img on /tmp/zp type nfs4 (rw,relatime,vers=4.0,soft,proto=tcp,......)

2. Start a guest with the following disk configuration; all the images and backing files are on the NFS storage:
<disk type='file' device='disk' snapshot='no'>
  <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads'/>
  <source file='/tmp/zp/snapshot/d/d' startupPolicy='optional'>
    <seclabel model='selinux' labelskip='yes'/>
  </source>
  <backingStore type='file' index='1'>
    <format type='qcow2'/>
    <source file='/tmp/zp/snapshot/d/../c/c'/>
    <backingStore type='file' index='2'>
      <format type='qcow2'/>
      <source file='/tmp/zp/snapshot/d/../c/../b/b'/>
      <backingStore type='file' index='3'>
        <format type='qcow2'/>
        <source file='/tmp/zp/snapshot/d/../c/../b/../a/a'/>
        <backingStore/>
      </backingStore>
    </backingStore>
  </backingStore>
  <target dev='vda' bus='virtio'/>
  <serial>aca38821-c430-48a1-a932-a4814198f24d</serial>
  <boot order='1'/>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>

3. Disconnect the NFS storage:
# iptables -A OUTPUT -d $IP -p tcp --dport 2049 -j DROP

4. Do blockcommit; it will hang:
# virsh -k0 blockcommit --active --verbose vm2 vda --shallow --pivot --keep-relative
......

5. In terminal 2, listing guests with virsh also hangs:
# virsh list
...

6. In terminal 3, execute other virsh commands; it looks like all virsh commands hang.

7. Recover the NFS storage and commit again; it succeeds.
# virsh -k0 blockcommit --active --verbose vm2 vda --shallow --pivot --keep-relative
Block commit: [100 %]
Successfully pivoted

Actual results:
As in steps 4 and 5, with unresponsive NFS storage, blockcommit hangs and also blocks other virsh commands.

Expected results:
These commands do not hang.

Additional info:
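The recovery in step 7 can be sketched as follows. This is a sketch, not part of the original report: it assumes the NFS outage was induced only by the iptables DROP rule from step 3 (with the same $IP placeholder for the NFS server) and that the soft mount recovers by itself once packets flow again.

```shell
# Delete the DROP rule added in step 3 so NFS traffic to the server resumes
# ($IP is the NFS server address from the mount in step 1).
iptables -D OUTPUT -d $IP -p tcp --dport 2049 -j DROP

# Retry the same blockcommit; with the storage reachable again it
# completes and pivots as shown in the transcript above.
virsh -k0 blockcommit --active --verbose vm2 vda --shallow --pivot --keep-relative
```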
This bug is going to be addressed in the next major release.
This bug was closed as deferred as a result of bug triage. Please reopen if you disagree, and provide justification for why this bug should get enough priority. The most important information would be the impact on a customer or layered product. Please also indicate the requested target release.