Bug 510450 - Command vol-delete causes 'failed to connect to the hypervisor'
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: libvirt
Version: 5.4
Hardware: All
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Assigned To: Dave Allan
QA Contact: Virtualization Bugs
Depends On:
Blocks:
 
Reported: 2009-07-09 05:47 EDT by xingzhao
Modified: 2016-04-26 09:47 EDT
CC List: 10 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2010-03-30 04:10:47 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
Patch fixing the issue (758 bytes, patch)
2009-12-23 09:19 EST, Daniel Veillard
no flags


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2010:0205 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2010-03-29 08:27:37 EDT

Description xingzhao 2009-07-09 05:47:17 EDT
Description of problem:
The volume vol-test can be deleted, but libvirtd must be restarted before subsequent virsh commands succeed.

Version-Release number of selected component (if applicable):
libvirt-0.6.3-14.el5
kvm-83-84.el5
RHEL 5.4 kernel: 2.6.18-155.el5

How reproducible:
100%

Steps to Reproduce:
1. # virsh vol-delete --pool pool-mig vol-test
Vol vol-test deleted
2. # virsh vol-list pool-mig 
error: unable to connect to '/var/run/libvirt/libvirt-sock': Connection refused
error: failed to connect to the hypervisor
3. # service libvirtd restart
4. # virsh vol-list pool-mig 
..............
   RHEL5.3-64-virtio.raw /var/lib/libvirt/images/RHEL5.3-64-virtio.raw      
win2003-32-virtio.qcow2 /var/lib/libvirt/images/win2003-32-virtio.qcow2
win2003-32-virtio.raw /var/lib/libvirt/images/win2003-32-virtio.raw
.......... 

Expected results:
vol-list shows all remaining volumes in pool-mig with vol-test removed, and no libvirtd restart is required.
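The failure is easiest to spot by diffing the volume names before and after the delete. A small helper for pulling names out of `virsh vol-list` output (a sketch, not part of virsh or libvirt; the sample text is taken from this report):

```python
def parse_vol_list(output: str) -> set[str]:
    """Extract volume names from `virsh vol-list <pool>` output."""
    names = set()
    for line in output.splitlines():
        line = line.strip()
        # Skip the header, the dashed separator, blank lines, and "..." elisions
        if not line or line.startswith(("Name", "---")) or set(line) == {"."}:
            continue
        names.add(line.split()[0])  # first column is the volume name
    return names

sample = """\
Name                 Path
-----------------------------------------
win2003-32-virtio.qcow2 /var/lib/libvirt/images/win2003-32-virtio.qcow2
win2003-32-virtio.raw /var/lib/libvirt/images/win2003-32-virtio.raw
"""
print(sorted(parse_vol_list(sample)))
# → ['win2003-32-virtio.qcow2', 'win2003-32-virtio.raw']
```

Running it on the post-restart listing confirms vol-test is gone from the set, so the delete itself did take effect; only the daemon connection was lost.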
Comment 1 Dave Allan 2009-12-10 15:28:24 EST
I've tried to reproduce this bug both on RHEL5.4 and on libvirt-0.6.3-24, and everything is working correctly.  Can you provide the pool xml and the volume xml that you're using?
Comment 2 Dave Allan 2009-12-10 15:30:05 EST
Sorry, that should read: I tried to reproduce it both on the libvirt.org git head and on RHEL5.4 with libvirt-0.6.3-24, and it worked both times.
Comment 3 Alex Jia 2009-12-17 02:21:07 EST
[root@dhcp-66-70-91 libvirt]# virsh pool-create pool-dir.xml 
Pool dirpool created from pool-dir.xml

[root@dhcp-66-70-91 libvirt]# virsh pool-list
Name                 State      Autostart 
-----------------------------------------
default              active     yes       
dirpool              active     no        

[root@dhcp-66-70-91 libvirt]# virsh vol-create dirpool vol.xml 
Vol virtimage created from vol.xml

[root@dhcp-66-70-91 libvirt]# virsh vol-list dirpool
Name                 Path                                    
-----------------------------------------
gpxe.img             /var/lib/libvirt/images/gpxe.img        
virtimage            /var/lib/libvirt/images/virtimage       

[root@dhcp-66-70-91 libvirt]# service libvirtd status
libvirtd (pid  22256) is running...

[root@dhcp-66-70-91 libvirt]# virsh vol-delete --pool dirpool virtimage
error: Failed to delete vol virtimage
error: server closed connection

[root@dhcp-66-70-91 libvirt]# service libvirtd status
libvirtd dead but pid file exists

Sometimes the delete itself succeeds, but a subsequent vol-list then fails:
[root@dhcp-66-70-91 libvirt]# virsh vol-delete --pool dirpool virtimage
Vol virtimage deleted

[root@dhcp-66-70-91 libvirt]# virsh vol-list dirpool
error: failed to get pool 'dirpool'
error: this function is not supported by the hypervisor: virStoragePoolLookupByName

[root@dhcp-66-70-91 libvirt]# service libvirtd status
libvirtd dead but pid file exists





Debug details:
1.turn on LIBVIRT_DEBUG
[root@dhcp-66-70-91 libvirt]# export LIBVIRT_DEBUG=1
[root@dhcp-66-70-91 libvirt]# virsh vol-delete --pool dirpool virtimage
15:01:32.032: debug : virInitialize:290 : register drivers
15:01:32.032: debug : virRegisterDriver:667 : registering Test as driver 0
15:01:32.032: debug : virRegisterNetworkDriver:567 : registering Test as network driver 0
15:01:32.032: debug : virRegisterStorageDriver:598 : registering Test as storage driver 0
15:01:32.032: debug : virRegisterDeviceMonitor:629 : registering Test as device driver 0
15:01:32.033: debug : xenHypervisorInit:1922 : Using new hypervisor call: 30001

15:01:32.033: debug : xenHypervisorInit:1991 : Using hypervisor call v2, sys ver3 dom ver5

15:01:32.033: debug : virRegisterDriver:667 : registering Xen as driver 1
15:01:32.033: debug : virRegisterDriver:667 : registering remote as driver 2
15:01:32.033: debug : virRegisterNetworkDriver:567 : registering remote as network driver 1
15:01:32.033: debug : virRegisterStorageDriver:598 : registering remote as storage driver 1
15:01:32.033: debug : virRegisterDeviceMonitor:629 : registering remote as device driver 1
15:01:32.033: debug : virConnectOpenAuth:1100 : name=(null), auth=0x2b6ea1fb9340, flags=0
15:01:32.033: debug : do_open:922 : no name, allowing driver auto-select
15:01:32.033: debug : do_open:930 : trying driver 0 (Test) ...
15:01:32.033: debug : do_open:936 : driver 0 Test returned DECLINED
15:01:32.033: debug : do_open:930 : trying driver 1 (Xen) ...
15:01:32.033: debug : xenUnifiedOpen:295 : Trying hypervisor sub-driver
15:01:32.033: debug : xenUnifiedOpen:298 : Activated hypervisor sub-driver
15:01:32.033: debug : xenUnifiedOpen:306 : Trying XenD sub-driver
15:01:32.034: debug : xenUnifiedOpen:309 : Activated XenD sub-driver
15:01:32.034: debug : xenUnifiedOpen:315 : Trying XM sub-driver
15:01:32.034: debug : xenUnifiedOpen:318 : Activated XM sub-driver
15:01:32.034: debug : xenUnifiedOpen:322 : Trying XS sub-driver
15:01:32.034: debug : xenStoreOpen:346 : Failed to add event handle, disabling events

15:01:32.034: debug : xenUnifiedOpen:325 : Activated XS sub-driver
15:01:32.041: debug : xenUnifiedOpen:361 : Trying Xen inotify sub-driver
15:01:32.041: debug : xenInotifyOpen:439 : Adding a watch on /etc/xen
15:01:32.041: debug : xenInotifyOpen:451 : Building initial config cache
15:01:32.041: debug : xenXMConfigCacheAddFile:400 : Adding file /etc/xen/gpxe
15:01:32.042: debug : xenXMConfigCacheAddFile:478 : Added config gpxe /etc/xen/gpxe
15:01:32.042: debug : xenInotifyOpen:458 : Registering with event loop
15:01:32.042: debug : xenInotifyOpen:462 : Failed to add inotify handle, disabling events
15:01:32.042: debug : virConnectRef:1165 : conn=0xf0e9f20 refs=3
15:01:32.042: debug : xenUnifiedOpen:364 : Activated Xen inotify sub-driver
15:01:32.042: debug : do_open:936 : driver 1 Xen returned SUCCESS
15:01:32.042: debug : do_open:956 : network driver 0 Test returned DECLINED
15:01:32.042: debug : doRemoteOpen:513 : proceeding with name = xen:///
15:01:32.042: debug : call:6555 : Doing call 66 (nil)
15:01:32.042: debug : call:6625 : We have the buck 66 0x2aaaaaaef010 0x2aaaaaaef010
15:01:32.042: debug : processCallRecvLen:6213 : Got length, now need 36 total (32 more)
15:01:32.042: debug : processCalls:6481 : Giving up the buck 66 0x2aaaaaaef010 (nil)
15:01:32.042: debug : call:6656 : All done with our call 66 (nil) 0x2aaaaaaef010
15:01:32.042: debug : call:6555 : Doing call 1 (nil)
15:01:32.042: debug : call:6625 : We have the buck 1 0xf1142a0 0xf1142a0
15:01:32.058: debug : processCallRecvLen:6213 : Got length, now need 28 total (24 more)
15:01:32.058: debug : processCalls:6481 : Giving up the buck 1 0xf1142a0 (nil)
15:01:32.058: debug : call:6656 : All done with our call 1 (nil) 0xf1142a0
15:01:32.058: debug : doRemoteOpen:824 : Adding Handler for remote events
15:01:32.058: debug : doRemoteOpen:831 : virEventAddHandle failed: No addHandleImpl defined. continuing without events.
15:01:32.058: debug : do_open:956 : network driver 1 remote returned SUCCESS
15:01:32.058: debug : do_open:978 : storage driver 0 Test returned DECLINED
15:01:32.058: debug : do_open:978 : storage driver 1 remote returned SUCCESS
15:01:32.058: debug : do_open:999 : node driver 0 Test returned DECLINED
15:01:32.058: debug : do_open:999 : node driver 1 remote returned SUCCESS
15:01:32.058: debug : virStoragePoolLookupByName:5581 : conn=0xf0e9f20, name=dirpool
15:01:32.058: debug : call:6555 : Doing call 84 (nil)
15:01:32.058: debug : call:6625 : We have the buck 84 0xf1142a0 0xf1142a0
15:01:32.058: debug : processCallRecvLen:6213 : Got length, now need 56 total (52 more)
15:01:32.058: debug : processCalls:6481 : Giving up the buck 84 0xf1142a0 (nil)
15:01:32.058: debug : call:6656 : All done with our call 84 (nil) 0xf1142a0
15:01:32.058: debug : virStorageVolLookupByName:6581 : pool=0xf10af30, name=virtimage
15:01:32.058: debug : call:6555 : Doing call 95 (nil)
15:01:32.058: debug : call:6625 : We have the buck 95 0xf1142a0 0xf1142a0
15:01:32.058: debug : processCallRecvLen:6213 : Got length, now need 96 total (92 more)
15:01:32.058: debug : processCalls:6481 : Giving up the buck 95 0xf1142a0 (nil)
15:01:32.058: debug : call:6656 : All done with our call 95 (nil) 0xf1142a0
15:01:32.059: debug : virStoragePoolFree:6070 : pool=0xf10af30
15:01:32.059: debug : virUnrefStoragePool:609 : unref pool 0xf10af30 dirpool 1
15:01:32.059: debug : virReleaseStoragePool:568 : release pool 0xf10af30 dirpool
15:01:32.059: debug : virReleaseStoragePool:579 : unref connection 0xf0e9f20 6
15:01:32.059: debug : virStorageVolDelete:6810 : vol=0xf10b240, flags=0
15:01:32.059: debug : call:6555 : Doing call 94 (nil)
15:01:32.059: debug : call:6625 : We have the buck 94 0xf1142a0 0xf1142a0
15:01:32.083: debug : processCallRecvLen:6213 : Got length, now need 28 total (24 more)
15:01:32.083: debug : processCalls:6481 : Giving up the buck 94 0xf1142a0 (nil)
15:01:32.083: debug : call:6656 : All done with our call 94 (nil) 0xf1142a0
Vol virtimage deleted

15:01:32.084: debug : virConnectClose:1118 : conn=0xf0e9f20
15:01:32.084: debug : call:6555 : Doing call 2 (nil)
15:01:32.084: debug : call:6625 : We have the buck 2 0xf1142a0 0xf1142a0
15:01:32.084: debug : processCallRecvLen:6213 : Got length, now need 28 total (24 more)
15:01:32.084: debug : processCalls:6481 : Giving up the buck 2 0xf1142a0 (nil)
15:01:32.084: debug : call:6656 : All done with our call 2 (nil) 0xf1142a0
15:01:32.084: debug : virUnrefConnect:210 : unref connection 0xf0e9f20 5
15:01:32.084: debug : virUnrefConnect:210 : unref connection 0xf0e9f20 4
15:01:32.085: debug : virUnrefConnect:210 : unref connection 0xf0e9f20 3
15:01:32.085: debug : virUnrefConnect:210 : unref connection 0xf0e9f20 2
[root@dhcp-66-70-91 libvirt]# service libvirtd status
libvirtd dead but pid file exists
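The trace above shows the delete RPC (call 94, issued right after `virStorageVolDelete`) completing and the client printing success, yet libvirtd is dead afterwards, so the crash happens server-side while or after handling the delete. When comparing a crashing run against a healthy one, it helps to pull just the call sequence out of a LIBVIRT_DEBUG trace. A throwaway sketch (not a libvirt tool; the regex matches the "Doing call N" lines shown above):

```python
import re

CALL_RE = re.compile(r"Doing call (\d+)")

def call_sequence(trace: str) -> list[int]:
    """Return remote-protocol call numbers in the order the client issued them."""
    return [int(m.group(1)) for m in CALL_RE.finditer(trace)]

# Condensed from the trace in this comment: open (66, 1), pool lookup (84),
# vol lookup (95), vol delete (94), connection close (2).
trace = """\
15:01:32.042: debug : call:6555 : Doing call 66 (nil)
15:01:32.042: debug : call:6555 : Doing call 1 (nil)
15:01:32.058: debug : call:6555 : Doing call 84 (nil)
15:01:32.058: debug : call:6555 : Doing call 95 (nil)
15:01:32.059: debug : call:6555 : Doing call 94 (nil)
15:01:32.084: debug : call:6555 : Doing call 2 (nil)
"""
print(call_sequence(trace))  # delete (94) is the last real request before close
```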
Comment 4 Alex Jia 2009-12-17 02:21:58 EST
Additional information:

[root@dhcp-66-70-91 libvirt]# uname -r
2.6.18-164.el5xen
[root@dhcp-66-70-91 libvirt]# uname -a
Linux dhcp-66-70-91.nay.redhat.com 2.6.18-164.el5xen #1 SMP Tue Aug 18 15:59:52 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@dhcp-66-70-91 libvirt]# rpm -qa|grep libvirt
libvirt-debuginfo-0.6.3-25.el5
libvirt-0.6.3-25.el5
libvirt-python-0.6.3-25.el5
Comment 5 Dave Allan 2009-12-18 15:24:20 EST
Can you please provide the pool and volume xml that I requested?  Thanks.
Comment 6 Alex Jia 2009-12-20 21:21:33 EST
Sorry, I forgot to attach the pool and volume XML.

pool-dir.xml:
<pool type="dir">
  <name>dirpool</name>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>


vol.xml:
<volume>
  <name>virtimage</name>
  <key>/var/lib/libvirt/images/virtimage</key>
  <source>
  </source>
  <capacity>10737418240</capacity>
  <allocation>2210500608</allocation>
  <target>
    <path>/var/lib/libvirt/images/virtimage</path>
    <format type='raw'/>
    <permissions>
      <mode>0600</mode>
      <owner>0</owner>
      <group>0</group>
      <label>system_u:object_r:nfs_t</label>
    </permissions>
  </target>
</volume>
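The volume definition above can be sanity-checked with the stdlib XML parser before feeding it to `virsh vol-create`. A sketch of a hypothetical pre-flight check (the set of "required" fields enforced here is illustrative, not libvirt's actual schema):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the vol.xml attached in this comment
VOL_XML = """\
<volume>
  <name>virtimage</name>
  <key>/var/lib/libvirt/images/virtimage</key>
  <capacity>10737418240</capacity>
  <allocation>2210500608</allocation>
  <target>
    <path>/var/lib/libvirt/images/virtimage</path>
    <format type='raw'/>
  </target>
</volume>
"""

def check_volume(xml_text: str) -> dict:
    """Parse a libvirt volume definition and pull out the basic fields."""
    root = ET.fromstring(xml_text)
    assert root.tag == "volume", "top-level element must be <volume>"
    name = root.findtext("name")
    capacity = int(root.findtext("capacity"))
    allocation = int(root.findtext("allocation") or 0)
    path = root.findtext("target/path")
    assert name and path, "expected <name> and <target><path> in this sketch"
    assert allocation <= capacity, "allocation cannot exceed capacity"
    return {"name": name, "capacity": capacity,
            "allocation": allocation, "path": path}

info = check_volume(VOL_XML)
print(info["name"], info["capacity"] // 2**30, "GiB")  # → virtimage 10 GiB
```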
Comment 7 Daniel Veillard 2009-12-23 09:19:40 EST
Created attachment 380031 [details]
Patch fixing the issue
Comment 9 Daniel Veillard 2009-12-23 09:24:20 EST
libvirt-0.6.3-28.el5 has been built in dist-5E-qu-candidate with the fix.

Daniel
Comment 11 Alex Jia 2009-12-29 04:27:33 EST
This bug has been verified with libvirt-0.6.3-28.el5 on RHEL-5.5. Already
fixed; setting status to VERIFIED.

Version-Release number of selected component (if applicable):
[root@dhcp-66-70-62 ~]# uname -a
Linux dhcp-66-70-62.nay.redhat.com 2.6.18-183.el5xen #1 SMP Mon Dec 21 18:46:14 EST 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@dhcp-66-70-62 ~]# rpm -qa|grep libvirt
libvirt-0.6.3-28.el5
libvirt-python-0.6.3-28.el5
libvirt-debuginfo-0.6.3-28.el5
[root@dhcp-66-70-62 ~]# rpm -qa|grep xen
xen-libs-3.0.3-102.el5
xen-devel-3.0.3-102.el5
kmod-gnbd-xen-0.1.5-2.el5
kmod-gfs-xen-0.1.34-9.el5
xen-3.0.3-102.el5
xen-libs-3.0.3-102.el5
kernel-xen-2.6.18-183.el5
kmod-cmirror-xen-0.1.22-3.el5


Steps to Reproduce:
[root@dhcp-66-70-62 libvirt]# cat dirpool.xml
<pool type='dir'>
  <name>dirpool</name>
  <uuid>4bddc9df-0403-73b5-9fb9-fa980cd4d741</uuid>
  <capacity>40618147840</capacity>
  <allocation>12185968640</allocation>
  <available>28432179200</available>
  <source>
  </source>
  <target>
    <path>/var/lib/xen/images</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>

[root@dhcp-66-70-62 libvirt]# virsh pool-create dirpool.xml
Pool dirpool created from dirpool.xml

[root@dhcp-66-70-62 libvirt]# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
dirpool              active     no

[root@dhcp-66-70-62 libvirt]# cat vol.xml
<volume>
  <name>virtimage</name>
  <key>/var/lib/xen/images/virtimage</key>
  <source>
  </source>
  <capacity>10737418240</capacity>
  <allocation>2210500608</allocation>
  <target>
    <path>/var/lib/xen/images/virtimage</path>
    <format type='raw'/>
    <permissions>
      <mode>0600</mode>
      <owner>0</owner>
      <group>0</group>
      <label>system_u:object_r:nfs_t</label>
    </permissions>
  </target>
</volume>  

[root@dhcp-66-70-62 libvirt]# virsh vol-create dirpool vol.xml 
Vol virtimage created from vol.xml

[root@dhcp-66-70-62 libvirt]# virsh vol-list dirpool
Name                 Path
-----------------------------------------
rhel5u5_x86_64_xenfv.img /var/lib/xen/images/rhel5u5_x86_64_xenfv.img
rhel5u5_x86_64_xenpv.img /var/lib/xen/images/rhel5u5_x86_64_xenpv.img
virtimage            /var/lib/xen/images/virtimage

[root@dhcp-66-70-62 libvirt]# virsh vol-delete --pool dirpool virtimage
Vol virtimage deleted

[root@dhcp-66-70-62 libvirt]# service libvirtd status
libvirtd (pid  6189) is running...
Comment 17 errata-xmlrpc 2010-03-30 04:10:47 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2010-0205.html
