Bug 509293 - virsh vol-key volname "Segmentation fault"
Summary: virsh vol-key volname "Segmentation fault"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: libvirt
Version: 5.4
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Assignee: Laine Stump
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-07-02 03:37 UTC by Alex Jia
Modified: 2010-03-30 08:11 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-03-30 08:11:01 UTC
Target Upstream Version:
Embargoed:


Attachments
patch to eliminate the segfault (564 bytes, patch)
2009-12-09 23:11 UTC, Laine Stump


Links
Red Hat Product Errata RHBA-2010:0205 (normal, SHIPPED_LIVE): libvirt bug fix and enhancement update - last updated 2010-03-29 12:27:37 UTC

Description Alex Jia 2009-07-02 03:37:43 UTC
Description of problem:


Version-Release number of selected component (if applicable):
[root@dhcp-66-70-18 ~]# uname -a
Linux dhcp-66-70-18.nay.redhat.com 2.6.18-156.el5xen #1 SMP Mon Jun 29 18:24:43 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@dhcp-66-70-18 ~]# rpm -qa|grep libvirt
libvirt-0.6.3-13.el5
libvirt-debuginfo-0.6.3-12.el5
libvirt-python-0.6.3-13.el5
libvirt-devel-0.6.3-13.el5


How reproducible:
Create a pool and a volume in that pool, then run the virsh command "virsh vol-key volname".

Steps to Reproduce:
1. virsh pool-define pool.xml (a minimal example pool.xml/vol.xml sketch is shown after these steps)
2. virsh pool-start poolname
3. virsh vol-create poolname vol.xml
4. virsh vol-key volname
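
The pool.xml and vol.xml files referenced above were not attached. A minimal hypothetical pair for a local dir-type pool would look like this (names, sizes, and paths are examples only):

<pool type='dir'>
  <name>poolname</name>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>

<volume>
  <name>volname</name>
  <capacity>1073741824</capacity>
  <target>
    <format type='raw'/>
  </target>
</volume>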
  
Actual results:
Segmentation fault

Expected results:
virsh should print the volume key (or report a proper error) instead of segfaulting.

Additional info:

[root@dhcp-66-70-18 ~]# export LIBVIRT_DEBUG=1
[root@dhcp-66-70-18 ~]# virsh pool-list
11:29:17.262: debug : virInitialize:290 : register drivers
11:29:17.263: debug : virRegisterDriver:667 : registering Test as driver 0
11:29:17.263: debug : virRegisterNetworkDriver:567 : registering Test as network driver 0
11:29:17.263: debug : virRegisterStorageDriver:598 : registering Test as storage driver 0
11:29:17.263: debug : virRegisterDeviceMonitor:629 : registering Test as device driver 0
11:29:17.263: debug : xenHypervisorInit:1922 : Using new hypervisor call: 30001

11:29:17.263: debug : xenHypervisorInit:1991 : Using hypervisor call v2, sys ver3 dom ver5

11:29:17.263: debug : virRegisterDriver:667 : registering Xen as driver 1
11:29:17.263: debug : virRegisterDriver:667 : registering remote as driver 2
11:29:17.263: debug : virRegisterNetworkDriver:567 : registering remote as network driver 1
11:29:17.263: debug : virRegisterStorageDriver:598 : registering remote as storage driver 1
11:29:17.263: debug : virRegisterDeviceMonitor:629 : registering remote as device driver 1
11:29:17.263: debug : virConnectOpenAuth:1100 : name=(null), auth=0x2ba85fd803c0, flags=0
11:29:17.263: debug : do_open:922 : no name, allowing driver auto-select
11:29:17.264: debug : do_open:930 : trying driver 0 (Test) ...
11:29:17.264: debug : do_open:936 : driver 0 Test returned DECLINED
11:29:17.264: debug : do_open:930 : trying driver 1 (Xen) ...
11:29:17.264: debug : xenUnifiedOpen:295 : Trying hypervisor sub-driver
11:29:17.264: debug : xenUnifiedOpen:298 : Activated hypervisor sub-driver
11:29:17.264: debug : xenUnifiedOpen:306 : Trying XenD sub-driver
11:29:17.265: debug : xenUnifiedOpen:309 : Activated XenD sub-driver
11:29:17.265: debug : xenUnifiedOpen:315 : Trying XM sub-driver
11:29:17.265: debug : xenUnifiedOpen:318 : Activated XM sub-driver
11:29:17.265: debug : xenUnifiedOpen:322 : Trying XS sub-driver
11:29:17.265: debug : xenStoreOpen:346 : Failed to add event handle, disabling events

11:29:17.265: debug : xenUnifiedOpen:325 : Activated XS sub-driver
11:29:17.273: debug : xenUnifiedOpen:361 : Trying Xen inotify sub-driver
11:29:17.273: debug : xenInotifyOpen:439 : Adding a watch on /etc/xen
11:29:17.273: debug : xenInotifyOpen:451 : Building initial config cache
11:29:17.273: debug : xenInotifyOpen:458 : Registering with event loop
11:29:17.273: debug : xenInotifyOpen:462 : Failed to add inotify handle, disabling events
11:29:17.273: debug : virConnectRef:1165 : conn=0x412de10 refs=3
11:29:17.273: debug : xenUnifiedOpen:364 : Activated Xen inotify sub-driver
11:29:17.273: debug : do_open:936 : driver 1 Xen returned SUCCESS
11:29:17.273: debug : do_open:956 : network driver 0 Test returned DECLINED
11:29:17.273: debug : doRemoteOpen:513 : proceeding with name = xen:///
11:29:17.273: debug : call:6555 : Doing call 66 (nil)
11:29:17.273: debug : call:6625 : We have the buck 66 0x2aaaaaaee010 0x2aaaaaaee010
11:29:17.273: debug : processCallRecvLen:6213 : Got length, now need 36 total (32 more)
11:29:17.273: debug : processCalls:6481 : Giving up the buck 66 0x2aaaaaaee010 (nil)
11:29:17.273: debug : call:6656 : All done with our call 66 (nil) 0x2aaaaaaee010
11:29:17.273: debug : call:6555 : Doing call 1 (nil)
11:29:17.273: debug : call:6625 : We have the buck 1 0x414eef0 0x414eef0
11:29:17.289: debug : processCallRecvLen:6213 : Got length, now need 28 total (24 more)
11:29:17.289: debug : processCalls:6481 : Giving up the buck 1 0x414eef0 (nil)
11:29:17.289: debug : call:6656 : All done with our call 1 (nil) 0x414eef0
11:29:17.289: debug : doRemoteOpen:824 : Adding Handler for remote events
11:29:17.289: debug : doRemoteOpen:831 : virEventAddHandle failed: No addHandleImpl defined. continuing without events.
11:29:17.289: debug : do_open:956 : network driver 1 remote returned SUCCESS
11:29:17.289: debug : do_open:978 : storage driver 0 Test returned DECLINED
11:29:17.289: debug : do_open:978 : storage driver 1 remote returned SUCCESS
11:29:17.289: debug : do_open:999 : node driver 0 Test returned DECLINED
11:29:17.289: debug : do_open:999 : node driver 1 remote returned SUCCESS
11:29:17.289: debug : virConnectNumOfStoragePools:5322 : conn=0x412de10
11:29:17.289: debug : call:6555 : Doing call 71 (nil)
11:29:17.289: debug : call:6625 : We have the buck 71 0x414eef0 0x414eef0
11:29:17.289: debug : processCallRecvLen:6213 : Got length, now need 32 total (28 more)
11:29:17.289: debug : processCalls:6481 : Giving up the buck 71 0x414eef0 (nil)
11:29:17.289: debug : call:6656 : All done with our call 71 (nil) 0x414eef0
11:29:17.289: debug : virConnectListStoragePools:5364 : conn=0x412de10, names=0x414ce30, maxnames=2
11:29:17.289: debug : call:6555 : Doing call 72 (nil)
11:29:17.289: debug : call:6625 : We have the buck 72 0x414eef0 0x414eef0
11:29:17.290: debug : processCallRecvLen:6213 : Got length, now need 56 total (52 more)
11:29:17.290: debug : processCalls:6481 : Giving up the buck 72 0x414eef0 (nil)
11:29:17.290: debug : call:6656 : All done with our call 72 (nil) 0x414eef0
Name                 State      Autostart 
-----------------------------------------
11:29:17.290: debug : virStoragePoolLookupByName:5555 : conn=0x412de10, name=default
11:29:17.290: debug : call:6555 : Doing call 84 (nil)
11:29:17.290: debug : call:6625 : We have the buck 84 0x414fd50 0x414fd50
11:29:17.290: debug : processCallRecvLen:6213 : Got length, now need 56 total (52 more)
11:29:17.290: debug : processCalls:6481 : Giving up the buck 84 0x414fd50 (nil)
11:29:17.290: debug : call:6656 : All done with our call 84 (nil) 0x414fd50
11:29:17.290: debug : virStoragePoolGetAutostart:6349 : pool=0x414e510, autostart=0x7fffe13f9564
11:29:17.290: debug : call:6555 : Doing call 89 (nil)
11:29:17.290: debug : call:6625 : We have the buck 89 0x414fd50 0x414fd50
11:29:17.290: debug : processCallRecvLen:6213 : Got length, now need 32 total (28 more)
11:29:17.290: debug : processCalls:6481 : Giving up the buck 89 0x414fd50 (nil)
11:29:17.290: debug : call:6656 : All done with our call 89 (nil) 0x414fd50
11:29:17.290: debug : virStoragePoolGetName:6149 : pool=0x414e510
default              active     yes       
11:29:17.290: debug : virStoragePoolFree:6044 : pool=0x414e510
11:29:17.290: debug : virUnrefStoragePool:609 : unref pool 0x414e510 default 1
11:29:17.290: debug : virReleaseStoragePool:568 : release pool 0x414e510 default
11:29:17.290: debug : virReleaseStoragePool:579 : unref connection 0x412de10 5
11:29:17.290: debug : virStoragePoolLookupByName:5555 : conn=0x412de10, name=nfspool
11:29:17.290: debug : call:6555 : Doing call 84 (nil)
11:29:17.290: debug : call:6625 : We have the buck 84 0x414fd50 0x414fd50
11:29:17.290: debug : processCallRecvLen:6213 : Got length, now need 56 total (52 more)
11:29:17.290: debug : processCalls:6481 : Giving up the buck 84 0x414fd50 (nil)
11:29:17.290: debug : call:6656 : All done with our call 84 (nil) 0x414fd50
11:29:17.290: debug : virStoragePoolGetAutostart:6349 : pool=0x414e510, autostart=0x7fffe13f9564
11:29:17.290: debug : call:6555 : Doing call 89 (nil)
11:29:17.290: debug : call:6625 : We have the buck 89 0x414fd50 0x414fd50
11:29:17.290: debug : processCallRecvLen:6213 : Got length, now need 32 total (28 more)
11:29:17.290: debug : processCalls:6481 : Giving up the buck 89 0x414fd50 (nil)
11:29:17.290: debug : call:6656 : All done with our call 89 (nil) 0x414fd50
11:29:17.290: debug : virStoragePoolGetName:6149 : pool=0x414e510
nfspool              active     no        
11:29:17.290: debug : virStoragePoolFree:6044 : pool=0x414e510
11:29:17.290: debug : virUnrefStoragePool:609 : unref pool 0x414e510 nfspool 1
11:29:17.290: debug : virReleaseStoragePool:568 : release pool 0x414e510 nfspool
11:29:17.290: debug : virReleaseStoragePool:579 : unref connection 0x412de10 5

11:29:17.290: debug : virConnectClose:1118 : conn=0x412de10
11:29:17.290: debug : call:6555 : Doing call 2 (nil)
11:29:17.290: debug : call:6625 : We have the buck 2 0x414fd50 0x414fd50
11:29:17.291: debug : processCallRecvLen:6213 : Got length, now need 28 total (24 more)
11:29:17.291: debug : processCalls:6481 : Giving up the buck 2 0x414fd50 (nil)
11:29:17.291: debug : call:6656 : All done with our call 2 (nil) 0x414fd50
11:29:17.291: debug : virUnrefConnect:210 : unref connection 0x412de10 4
11:29:17.291: debug : virUnrefConnect:210 : unref connection 0x412de10 3
11:29:17.291: debug : virUnrefConnect:210 : unref connection 0x412de10 2
11:29:17.291: debug : virUnrefConnect:210 : unref connection 0x412de10 1
11:29:17.291: debug : virReleaseConnect:171 : release connection 0x412de10
[root@dhcp-66-70-18 ~]# virsh vol-list nfspool
11:30:08.702: debug : virInitialize:290 : register drivers
11:30:08.702: debug : virRegisterDriver:667 : registering Test as driver 0
11:30:08.702: debug : virRegisterNetworkDriver:567 : registering Test as network driver 0
11:30:08.702: debug : virRegisterStorageDriver:598 : registering Test as storage driver 0
11:30:08.702: debug : virRegisterDeviceMonitor:629 : registering Test as device driver 0
11:30:08.703: debug : xenHypervisorInit:1922 : Using new hypervisor call: 30001

11:30:08.703: debug : xenHypervisorInit:1991 : Using hypervisor call v2, sys ver3 dom ver5

11:30:08.703: debug : virRegisterDriver:667 : registering Xen as driver 1
11:30:08.703: debug : virRegisterDriver:667 : registering remote as driver 2
11:30:08.703: debug : virRegisterNetworkDriver:567 : registering remote as network driver 1
11:30:08.703: debug : virRegisterStorageDriver:598 : registering remote as storage driver 1
11:30:08.703: debug : virRegisterDeviceMonitor:629 : registering remote as device driver 1
11:30:08.703: debug : virConnectOpenAuth:1100 : name=(null), auth=0x2ba9a5dce3c0, flags=0
11:30:08.703: debug : do_open:922 : no name, allowing driver auto-select
11:30:08.703: debug : do_open:930 : trying driver 0 (Test) ...
11:30:08.703: debug : do_open:936 : driver 0 Test returned DECLINED
11:30:08.703: debug : do_open:930 : trying driver 1 (Xen) ...
11:30:08.703: debug : xenUnifiedOpen:295 : Trying hypervisor sub-driver
11:30:08.703: debug : xenUnifiedOpen:298 : Activated hypervisor sub-driver
11:30:08.703: debug : xenUnifiedOpen:306 : Trying XenD sub-driver
11:30:08.705: debug : xenUnifiedOpen:309 : Activated XenD sub-driver
11:30:08.705: debug : xenUnifiedOpen:315 : Trying XM sub-driver
11:30:08.705: debug : xenUnifiedOpen:318 : Activated XM sub-driver
11:30:08.705: debug : xenUnifiedOpen:322 : Trying XS sub-driver
11:30:08.705: debug : xenStoreOpen:346 : Failed to add event handle, disabling events

11:30:08.705: debug : xenUnifiedOpen:325 : Activated XS sub-driver
11:30:08.716: debug : xenUnifiedOpen:361 : Trying Xen inotify sub-driver
11:30:08.716: debug : xenInotifyOpen:439 : Adding a watch on /etc/xen
11:30:08.716: debug : xenInotifyOpen:451 : Building initial config cache
11:30:08.716: debug : xenInotifyOpen:458 : Registering with event loop
11:30:08.716: debug : xenInotifyOpen:462 : Failed to add inotify handle, disabling events
11:30:08.716: debug : virConnectRef:1165 : conn=0x142a6170 refs=3
11:30:08.716: debug : xenUnifiedOpen:364 : Activated Xen inotify sub-driver
11:30:08.716: debug : do_open:936 : driver 1 Xen returned SUCCESS
11:30:08.717: debug : do_open:956 : network driver 0 Test returned DECLINED
11:30:08.717: debug : doRemoteOpen:513 : proceeding with name = xen:///
11:30:08.717: debug : call:6555 : Doing call 66 (nil)
11:30:08.717: debug : call:6625 : We have the buck 66 0x2aaaaaaee010 0x2aaaaaaee010
11:30:08.717: debug : processCallRecvLen:6213 : Got length, now need 36 total (32 more)
11:30:08.717: debug : processCalls:6481 : Giving up the buck 66 0x2aaaaaaee010 (nil)
11:30:08.717: debug : call:6656 : All done with our call 66 (nil) 0x2aaaaaaee010
11:30:08.717: debug : call:6555 : Doing call 1 (nil)
11:30:08.717: debug : call:6625 : We have the buck 1 0x142c71e0 0x142c71e0
11:30:08.737: debug : processCallRecvLen:6213 : Got length, now need 28 total (24 more)
11:30:08.737: debug : processCalls:6481 : Giving up the buck 1 0x142c71e0 (nil)
11:30:08.737: debug : call:6656 : All done with our call 1 (nil) 0x142c71e0
11:30:08.737: debug : doRemoteOpen:824 : Adding Handler for remote events
11:30:08.737: debug : doRemoteOpen:831 : virEventAddHandle failed: No addHandleImpl defined. continuing without events.
11:30:08.737: debug : do_open:956 : network driver 1 remote returned SUCCESS
11:30:08.737: debug : do_open:978 : storage driver 0 Test returned DECLINED
11:30:08.737: debug : do_open:978 : storage driver 1 remote returned SUCCESS
11:30:08.737: debug : do_open:999 : node driver 0 Test returned DECLINED
11:30:08.737: debug : do_open:999 : node driver 1 remote returned SUCCESS
11:30:08.737: debug : virStoragePoolLookupByName:5555 : conn=0x142a6170, name=nfspool
11:30:08.737: debug : call:6555 : Doing call 84 (nil)
11:30:08.738: debug : call:6625 : We have the buck 84 0x142c71e0 0x142c71e0
11:30:08.738: debug : processCallRecvLen:6213 : Got length, now need 56 total (52 more)
11:30:08.738: debug : processCalls:6481 : Giving up the buck 84 0x142c71e0 (nil)
11:30:08.738: debug : call:6656 : All done with our call 84 (nil) 0x142c71e0
11:30:08.738: debug : virStoragePoolNumOfVolumes:6439 : pool=0x142c6ed0
11:30:08.738: debug : call:6555 : Doing call 91 (nil)
11:30:08.738: debug : call:6625 : We have the buck 91 0x142c71e0 0x142c71e0
11:30:08.738: debug : processCallRecvLen:6213 : Got length, now need 32 total (28 more)
11:30:08.738: debug : processCalls:6481 : Giving up the buck 91 0x142c71e0 (nil)
11:30:08.738: debug : call:6656 : All done with our call 91 (nil) 0x142c71e0
11:30:08.738: debug : virStoragePoolListVolumes:6481 : pool=0x142c6ed0, names=0x142c6d00, maxnames=2
11:30:08.738: debug : call:6555 : Doing call 92 (nil)
11:30:08.738: debug : call:6625 : We have the buck 92 0x142c71e0 0x142c71e0
11:30:08.738: debug : processCallRecvLen:6213 : Got length, now need 72 total (68 more)
11:30:08.738: debug : processCalls:6481 : Giving up the buck 92 0x142c71e0 (nil)
11:30:08.738: debug : call:6656 : All done with our call 92 (nil) 0x142c71e0
Name                 Path                                    
-----------------------------------------
11:30:08.738: debug : virStorageVolLookupByName:6555 : pool=0x142c6ed0, name=rhel5u4_x86_64.img
11:30:08.738: debug : call:6555 : Doing call 95 (nil)
11:30:08.738: debug : call:6625 : We have the buck 95 0x142c71e0 0x142c71e0
11:30:08.738: debug : processCallRecvLen:6213 : Got length, now need 108 total (104 more)
11:30:08.738: debug : processCalls:6481 : Giving up the buck 95 0x142c71e0 (nil)
11:30:08.738: debug : call:6656 : All done with our call 95 (nil) 0x142c71e0
11:30:08.738: debug : virStorageVolGetPath:6985 : vol=0x142c71e0
11:30:08.738: debug : call:6555 : Doing call 100 (nil)
11:30:08.738: debug : call:6625 : We have the buck 100 0x142c8210 0x142c8210
11:30:08.739: debug : processCallRecvLen:6213 : Got length, now need 72 total (68 more)
11:30:08.739: debug : processCalls:6481 : Giving up the buck 100 0x142c8210 (nil)
11:30:08.739: debug : call:6656 : All done with our call 100 (nil) 0x142c8210
11:30:08.739: debug : virStorageVolGetName:6685 : vol=0x142c71e0
rhel5u4_x86_64.img   /var/lib/xen/images/rhel5u4_x86_64.img  
11:30:08.739: debug : virStorageVolFree:6828 : vol=0x142c71e0
11:30:08.739: debug : virUnrefStorageVol:747 : unref vol 0x142c71e0 rhel5u4_x86_64.img 1
11:30:08.739: debug : virReleaseStorageVol:705 : release vol 0x142c71e0 rhel5u4_x86_64.img
11:30:08.739: debug : virReleaseStorageVol:717 : unref connection 0x142a6170 6
11:30:08.739: debug : virStorageVolLookupByName:6555 : pool=0x142c6ed0, name=virtimage
11:30:08.739: debug : call:6555 : Doing call 95 (nil)
11:30:08.739: debug : call:6625 : We have the buck 95 0x142c71e0 0x142c71e0
11:30:08.739: debug : processCallRecvLen:6213 : Got length, now need 92 total (88 more)
11:30:08.739: debug : processCalls:6481 : Giving up the buck 95 0x142c71e0 (nil)
11:30:08.739: debug : call:6656 : All done with our call 95 (nil) 0x142c71e0
11:30:08.739: debug : virStorageVolGetPath:6985 : vol=0x142c71e0
11:30:08.739: debug : call:6555 : Doing call 100 (nil)
11:30:08.739: debug : call:6625 : We have the buck 100 0x142c8210 0x142c8210
11:30:08.739: debug : processCallRecvLen:6213 : Got length, now need 64 total (60 more)
11:30:08.739: debug : processCalls:6481 : Giving up the buck 100 0x142c8210 (nil)
11:30:08.739: debug : call:6656 : All done with our call 100 (nil) 0x142c8210
11:30:08.739: debug : virStorageVolGetName:6685 : vol=0x142c71e0
virtimage            /var/lib/xen/images/virtimage           
11:30:08.739: debug : virStorageVolFree:6828 : vol=0x142c71e0
11:30:08.739: debug : virUnrefStorageVol:747 : unref vol 0x142c71e0 virtimage 1
11:30:08.739: debug : virReleaseStorageVol:705 : release vol 0x142c71e0 virtimage
11:30:08.739: debug : virReleaseStorageVol:717 : unref connection 0x142a6170 6
11:30:08.739: debug : virStoragePoolFree:6044 : pool=0x142c6ed0
11:30:08.739: debug : virUnrefStoragePool:609 : unref pool 0x142c6ed0 nfspool 1
11:30:08.739: debug : virReleaseStoragePool:568 : release pool 0x142c6ed0 nfspool
11:30:08.739: debug : virReleaseStoragePool:579 : unref connection 0x142a6170 5

11:30:08.739: debug : virConnectClose:1118 : conn=0x142a6170
11:30:08.739: debug : call:6555 : Doing call 2 (nil)
11:30:08.739: debug : call:6625 : We have the buck 2 0x142c71e0 0x142c71e0
11:30:08.739: debug : processCallRecvLen:6213 : Got length, now need 28 total (24 more)
11:30:08.739: debug : processCalls:6481 : Giving up the buck 2 0x142c71e0 (nil)
11:30:08.739: debug : call:6656 : All done with our call 2 (nil) 0x142c71e0
11:30:08.739: debug : virUnrefConnect:210 : unref connection 0x142a6170 4
11:30:08.739: debug : virUnrefConnect:210 : unref connection 0x142a6170 3
11:30:08.740: debug : virUnrefConnect:210 : unref connection 0x142a6170 2
11:30:08.740: debug : virUnrefConnect:210 : unref connection 0x142a6170 1
11:30:08.740: debug : virReleaseConnect:171 : release connection 0x142a6170
[root@dhcp-66-70-18 ~]# virsh vol-key virtimage
11:31:07.613: debug : virInitialize:290 : register drivers
11:31:07.613: debug : virRegisterDriver:667 : registering Test as driver 0
11:31:07.613: debug : virRegisterNetworkDriver:567 : registering Test as network driver 0
11:31:07.613: debug : virRegisterStorageDriver:598 : registering Test as storage driver 0
11:31:07.613: debug : virRegisterDeviceMonitor:629 : registering Test as device driver 0
11:31:07.613: debug : xenHypervisorInit:1922 : Using new hypervisor call: 30001

11:31:07.613: debug : xenHypervisorInit:1991 : Using hypervisor call v2, sys ver3 dom ver5

11:31:07.613: debug : virRegisterDriver:667 : registering Xen as driver 1
11:31:07.613: debug : virRegisterDriver:667 : registering remote as driver 2
11:31:07.613: debug : virRegisterNetworkDriver:567 : registering remote as network driver 1
11:31:07.614: debug : virRegisterStorageDriver:598 : registering remote as storage driver 1
11:31:07.614: debug : virRegisterDeviceMonitor:629 : registering remote as device driver 1
11:31:07.614: debug : virConnectOpenAuth:1100 : name=(null), auth=0x300509b3c0, flags=0
11:31:07.614: debug : do_open:922 : no name, allowing driver auto-select
11:31:07.614: debug : do_open:930 : trying driver 0 (Test) ...
11:31:07.614: debug : do_open:936 : driver 0 Test returned DECLINED
11:31:07.614: debug : do_open:930 : trying driver 1 (Xen) ...
11:31:07.614: debug : xenUnifiedOpen:295 : Trying hypervisor sub-driver
11:31:07.614: debug : xenUnifiedOpen:298 : Activated hypervisor sub-driver
11:31:07.614: debug : xenUnifiedOpen:306 : Trying XenD sub-driver
11:31:07.615: debug : xenUnifiedOpen:309 : Activated XenD sub-driver
11:31:07.615: debug : xenUnifiedOpen:315 : Trying XM sub-driver
11:31:07.616: debug : xenUnifiedOpen:318 : Activated XM sub-driver
11:31:07.616: debug : xenUnifiedOpen:322 : Trying XS sub-driver
11:31:07.616: debug : xenStoreOpen:346 : Failed to add event handle, disabling events

11:31:07.616: debug : xenUnifiedOpen:325 : Activated XS sub-driver
11:31:07.620: debug : xenUnifiedOpen:361 : Trying Xen inotify sub-driver
11:31:07.620: debug : xenInotifyOpen:439 : Adding a watch on /etc/xen
11:31:07.620: debug : xenInotifyOpen:451 : Building initial config cache
11:31:07.620: debug : xenInotifyOpen:458 : Registering with event loop
11:31:07.620: debug : xenInotifyOpen:462 : Failed to add inotify handle, disabling events
11:31:07.620: debug : virConnectRef:1165 : conn=0x1125e170 refs=3
11:31:07.620: debug : xenUnifiedOpen:364 : Activated Xen inotify sub-driver
11:31:07.620: debug : do_open:936 : driver 1 Xen returned SUCCESS
11:31:07.620: debug : do_open:956 : network driver 0 Test returned DECLINED
11:31:07.620: debug : doRemoteOpen:513 : proceeding with name = xen:///
11:31:07.621: debug : call:6555 : Doing call 66 (nil)
11:31:07.621: debug : call:6625 : We have the buck 66 0x2aaaaaaee010 0x2aaaaaaee010
11:31:07.621: debug : processCallRecvLen:6213 : Got length, now need 36 total (32 more)
11:31:07.621: debug : processCalls:6481 : Giving up the buck 66 0x2aaaaaaee010 (nil)
11:31:07.621: debug : call:6656 : All done with our call 66 (nil) 0x2aaaaaaee010
11:31:07.621: debug : call:6555 : Doing call 1 (nil)
11:31:07.621: debug : call:6625 : We have the buck 1 0x1127f1e0 0x1127f1e0
11:31:07.632: debug : processCallRecvLen:6213 : Got length, now need 28 total (24 more)
11:31:07.632: debug : processCalls:6481 : Giving up the buck 1 0x1127f1e0 (nil)
11:31:07.632: debug : call:6656 : All done with our call 1 (nil) 0x1127f1e0
11:31:07.632: debug : doRemoteOpen:824 : Adding Handler for remote events
11:31:07.632: debug : doRemoteOpen:831 : virEventAddHandle failed: No addHandleImpl defined. continuing without events.
11:31:07.632: debug : do_open:956 : network driver 1 remote returned SUCCESS
11:31:07.632: debug : do_open:978 : storage driver 0 Test returned DECLINED
11:31:07.632: debug : do_open:978 : storage driver 1 remote returned SUCCESS
11:31:07.632: debug : do_open:999 : node driver 0 Test returned DECLINED
11:31:07.632: debug : do_open:999 : node driver 1 remote returned SUCCESS
Segmentation fault



The underlying libvirt API does not have this problem; this is a bug in the "virsh" abstraction layer.

Comment 1 Dave Allan 2009-07-02 21:57:32 UTC
Reproduced the problem.  Stack trace is:

(gdb) bt
#0  strcmp () at ../sysdeps/x86_64/strcmp.S:30
#1  0x000000000040580b in vshCommandOpt () at virsh.c:6512
#2  vshCommandOptString (cmd=<value optimized out>, name=0x0, found=0x7fff55862e84) at virsh.c:6547
#3  0x00000000004083b8 in vshCommandOptVolBy (ctl=0x7fff55863130, cmd=0x22319d0, 
    optname=0x41600b "vol", pooloptname=0x0, name=0x0, flag=4) at virsh.c:6756
#4  0x00000000004085db in cmdVolKey (ctl=0x7fff55863130, cmd=0x22319d0) at virsh.c:4665
#5  0x0000000000411f47 in vshCommandRun (ctl=0x7fff55863130, cmd=0x22319d0) at virsh.c:6810
#6  0x00000000004130d6 in main (argc=3, argv=0x7fff55863288) at virsh.c:7763
Current language:  auto; currently asm
(gdb)
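
For illustration, here is a minimal, hypothetical sketch of the crash pattern the backtrace points at (simplified stand-in code, not the actual virsh.c): cmdVolKey reaches vshCommandOptVolBy() with pooloptname == NULL, the NULL is forwarded as an option name, and the option lookup ends up calling strcmp() on it:

#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for virsh's option list; not the real virsh.c types. */
struct cmd_opt { const char *name; const char *value; struct cmd_opt *next; };

static const char *lookup_opt(const struct cmd_opt *opts, const char *name)
{
    for (const struct cmd_opt *o = opts; o != NULL; o = o->next)
        if (strcmp(o->name, name) == 0)   /* segfaults when name == NULL */
            return o->value;
    return NULL;
}

int main(void)
{
    struct cmd_opt vol = { "vol", "virtimage", NULL };
    const char *pooloptname = NULL;   /* what cmdVolKey passes per frame #3 */

    /* An unguarded lookup, as in the backtrace, would crash:
     *   lookup_opt(&vol, pooloptname);
     * The shape of fix one would expect is a NULL guard: */
    const char *pool = pooloptname ? lookup_opt(&vol, pooloptname) : NULL;
    printf("pool option: %s\n", pool ? pool : "(none)");
    return 0;
}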

Comment 2 Laine Stump 2009-12-09 23:11:37 UTC
Created attachment 377340 [details]
patch to eliminate the segfault

This bug has already been fixed upstream:

http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=544cd630

A patch relative to libvirt-0.6.3 is attached.

Comment 3 Daniel Veillard 2009-12-10 10:59:01 UTC
libvirt-0.6.3-24.el5 has been built in dist-5E-qu-candidate with the fixes

Daniel

Comment 5 Alex Jia 2009-12-16 08:31:10 UTC
The segfault has been fixed, but the following issues still exist:
1. How is "virsh vol-key voluuid" supposed to work?
[root@dhcp-66-70-91 xml]# virsh help vol-key
  NAME
    vol-key - convert a vol UUID to vol key

  SYNOPSIS
    vol-key <vol>

  OPTIONS
    <vol>            vol uuid

I used uuidgen to generate a UUID, then added it to the volume XML file, for example:
<volume>
  <name>virtimage</name>
  <uuid>192912a7-8e17-4d15-a90f-cd8792f90951</uuid>
  <key>/var/lib/libvirt/images/virtimage</key>
  <source>
  </source>
  <capacity>10737418240</capacity>
  <allocation>2210500608</allocation>
  <target>
    <path>/var/lib/libvirt/images/virtimage</path>
    <format type='raw'/>
    <permissions>
      <mode>0600</mode>
      <owner>0</owner>
      <group>0</group>
      <label>system_u:object_r:nfs_t</label>
    </permissions>
  </target>
</volume>

The volume was created successfully from this XML in an existing nfspool. I then used "virsh vol-dumpxml" to check the volume information, but no "uuid" element is present.
How am I supposed to use the 'vol-key' command? After trying the following operations, even listing all pools hit an error:
[root@dhcp-66-70-91 xml]# virsh vol-key virtimage
error: failed to get vol 'virtimage'
error: this function is not supported by the hypervisor: virStorageVolLookupByPath

[root@dhcp-66-70-91 xml]# virsh vol-key nfspool virtimage
error: unexpected data 'virtimage'

[root@dhcp-66-70-91 xml]# virsh vol-key /var/lib/libvirt/images/virtimage
error: failed to get vol '/var/lib/libvirt/images/virtimage'
error: this function is not supported by the hypervisor: virStorageVolLookupByPath

[root@dhcp-66-70-91 xml]# virsh pool-list --all
error: Failed to list active pools
error: this function is not supported by the hypervisor: virConnectNumOfStoragePools

Additional information:

[root@dhcp-66-70-91 xml]# virsh help pool-list
  NAME
    pool-list - list pools

  SYNOPSIS
    pool-list [--inactive] [--all]

  DESCRIPTION
    Returns list of pools.

  OPTIONS
    --inactive       list inactive pools
    --all            list inactive & active pools

Comment 6 Alex Jia 2009-12-16 08:32:14 UTC
[root@dhcp-66-70-91 xml]# uname -a
Linux dhcp-66-70-91.nay.redhat.com 2.6.18-164.el5xen #1 SMP Tue Aug 18 15:59:52 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@dhcp-66-70-91 xml]# rpm -qa|grep libvirt
libvirt-debuginfo-0.6.3-25.el5
libvirt-0.6.3-25.el5
libvirt-python-0.6.3-25.el5

Comment 7 Alex Jia 2009-12-16 08:38:47 UTC
In fact, after issuing these virsh commands, the libvirt daemon is dead:

[root@dhcp-66-70-91 xml]# service libvirtd status
libvirtd dead but pid file exists

Comment 8 Cole Robinson 2009-12-16 14:47:46 UTC
Storage volumes are not identified by UUID; their unique identifier is the 'key' element listed in the XML. A UUID element added to the XML will rightly be ignored. But we should try to find the source of the libvirtd crash.

How are you 'editing' the volume XML? Are you doing vol-dumpxml; <edit file>; vol-create? An exact sequence of the steps that produce the current crash would be helpful.
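
As an aside, volume keys can be fetched programmatically; here is a minimal sketch using the public libvirt C API (the NULL URI and the volume path from comment 5 are example assumptions):

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    /* NULL URI lets libvirt auto-select the default hypervisor driver. */
    virConnectPtr conn = virConnectOpenReadOnly(NULL);
    if (conn == NULL) {
        fprintf(stderr, "failed to connect\n");
        return 1;
    }

    /* Volumes are looked up by path (or by name within a pool); the key,
     * not a UUID, is their globally unique identifier. */
    virStorageVolPtr vol =
        virStorageVolLookupByPath(conn, "/var/lib/libvirt/images/virtimage");
    if (vol != NULL) {
        printf("key: %s\n", virStorageVolGetKey(vol));
        virStorageVolFree(vol);
    } else {
        fprintf(stderr, "volume not found\n");
    }

    virConnectClose(conn);
    return 0;
}

(Compile with: gcc vol-key.c -lvirt)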

Comment 9 Laine Stump 2009-12-16 21:03:32 UTC
1) This is a separate issue from the original bug, and should be reported as a new BZ (with a link back to this one). If the original issue is solved, this bug should be closed. This will make it clear which patch is fixing which crash.

2) As Cole suggested, the exact sequence of commands that led to the crash would be very useful. If it can be reproduced using a volume on a local disk, all the better.

3) In addition, if you can run libvirtd under gdb, and capture a stack trace when it crashes, that would also be very helpful.
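
For reference, one way to do that on RHEL 5 is to stop the service and run the daemon in the foreground under gdb (a sketch; the binary path assumes the stock libvirt package):

service libvirtd stop
gdb /usr/sbin/libvirtd
(gdb) run
# ... in another terminal, run the virsh commands that trigger the crash ...
(gdb) bt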

Comment 10 Alex Jia 2009-12-17 05:27:52 UTC
1. Cole, I already described how I am 'editing' the volume XML; see the example in comment 5. Running virsh vol-dumpxml on the created volume produces the same XML (except that the uuid element is absent), and virsh vol-create was given that same volume XML file.

In addition, I tried creating a local dir-type pool, but the issue cannot be reproduced there:
[root@dhcp-66-70-91 libvirt]# virsh pool-create pool-dir.xml
Pool dirpool created from pool-dir.xml

[root@dhcp-66-70-91 libvirt]# virsh pool-list
Name                 State      Autostart
-----------------------------------------
default              active     yes
dirpool              active     no

[root@dhcp-66-70-91 libvirt]# virsh vol-create dirpool vol.xml
Vol virtimage created from vol.xml

[root@dhcp-66-70-91 libvirt]# virsh vol-list dirpool
Name                 Path
-----------------------------------------
virtimage            /var/lib/libvirt/images/virtimage

[root@dhcp-66-70-91 libvirt]# virsh vol-key virtimage
error: failed to get vol 'virtimage'
error: invalid storage volume pointer in no storage vol with matching path

[root@dhcp-66-70-91 libvirt]# virsh vol-key nfspool virtimage
error: unexpected data 'virtimage'

[root@dhcp-66-70-91 libvirt]# virsh vol-key /var/lib/libvirt/images/virtimage
/var/lib/libvirt/images/virtimage

[root@dhcp-66-70-91 libvirt]# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
dirpool              active     no

[root@dhcp-66-70-91 libvirt]# service libvirtd status
libvirtd (pid  26572) is running...

2. Laine, yup, you are right, I should report a new bug. I think a virsh vol-delete operation caused the libvirt daemon to die.


BTW, I don't know why I don't have permission to change the bug status to VERIFIED.

Comment 11 zhanghaiyan 2010-01-15 05:35:40 UTC
Verified this bug PASS with libvirt-0.6.3-29.el5 on RHEL5.5-Server-x86_64-kvm

Comment 12 Johnny Liu 2010-02-02 04:24:14 UTC
Verified this bug PASS with libvirt-0.6.3-31.el5 on RHEL5.5-Server-x86_64-kvm

Comment 13 Johnny Liu 2010-02-02 04:32:40 UTC
Verified this bug PASS with libvirt-0.6.3-31.el5 on RHEL5.5-Server-x86_64-xen

Comment 15 errata-xmlrpc 2010-03-30 08:11:01 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on therefore solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2010-0205.html

