Created attachment 340293 [details]
python script for deleting storage pool

Description of problem:
An exception is thrown when deleting an inactive storage pool.

Version-Release number of selected component (if applicable):
rhel5u3:
2.6.18-128.1.6.el5xen
libvirt-0.6.2-1.el5

How reproducible:
Always

Steps to Reproduce:
1. Create a storage pool
2. Deactivate the pool
3. Delete the pool

Actual results:
# python storage_pooldelete.py test
libvir: Storage error : cannot unlink path '/pool': Is a directory

-- OR --

# virsh pool-delete test
error: Failed to delete pool test
error: cannot unlink path '/pool': Is a directory

Expected results:
The storage pool is deleted successfully.

Additional info:
This issue is still present with libvirt-0.6.3-6.el5 on rhel5.4:

[root@dhcp-66-70-18 libvirt]# uname -r
2.6.18-151.el5xen
[root@dhcp-66-70-18 libvirt]# rpm -qa | grep libvirt
libvirt-debuginfo-0.6.3-6.el5
libvirt-cim-0.5.5-1.el5
libvirt-0.6.3-6.el5
libvirt-devel-0.6.3-6.el5
libvirt-python-0.6.3-6.el5
Can you provide the XML config of the storage pool you are trying to delete? This sounds like a real bug to me; we just need to figure out which particular scenario it is hitting.
Hi Daniel,

The nfs storage pool config is as follows:

<?xml version='1.0' encoding='UTF-8'?>
<pool type='netfs'>
  <name>nfspool</name>
  <source>
    <host name='10.66.71.226'/>
    <dir path='/vol/libvirt1/auto'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>

Additional information:

[root@dhcp-66-70-97 xml]# rpm -qa | grep libvirt
libvirt-0.6.3-20.1.el5_4
libvirt-devel-0.6.3-20.1.el5_4
libvirt-0.6.3-20.1.el5_4
libvirt-python-0.6.3-20.1.el5_4
libvirt-cim-0.5.5-2.el5
libvirt-devel-0.6.3-20.1.el5_4
[root@dhcp-66-70-97 xml]# uname -a
Linux dhcp-66-70-97.nay.redhat.com 2.6.18-164.2.1.el5xen #1 SMP Mon Sep 21 04:45:50 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@dhcp-66-70-97 xml]# python
Python 2.4.3 (#1, Jun 11 2009, 14:09:37)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import libvirt
>>> conn = libvirt.open(None)
>>> print conn.getCapabilities()
<capabilities>
  <host>
    <cpu>
      <arch>x86_64</arch>
      <features>
        <vmx/>
      </features>
    </cpu>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>xenmigr</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <cpus num='2'>
            <cpu id='0'/>
            <cpu id='1'/>
          </cpus>
        </cell>
      </cells>
    </topology>
  </host>
  <guest>
    <os_type>xen</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
      <machine>xenpv</machine>
      <domain type='xen'>
      </domain>
    </arch>
  </guest>
  <guest>
    <os_type>xen</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
      <machine>xenpv</machine>
      <domain type='xen'>
      </domain>
    </arch>
    <features>
      <pae/>
    </features>
  </guest>
  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
      <loader>/usr/lib/xen/boot/hvmloader</loader>
      <machine>xenfv</machine>
      <domain type='xen'>
      </domain>
    </arch>
    <features>
      <pae/>
      <nonpae/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='yes'/>
    </features>
  </guest>
  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
      <loader>/usr/lib/xen/boot/hvmloader</loader>
      <machine>xenfv</machine>
      <domain type='xen'>
      </domain>
    </arch>
    <features>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='yes'/>
    </features>
  </guest>
</capabilities>

>>> fp = open("netfspool.xml")
>>> stgxml = fp.read()
>>> print stgxml
<?xml version='1.0' encoding='UTF-8'?>
<pool type='netfs'>
  <name>nfspool</name>
  <source>
    <host name='10.66.71.226'/>
    <dir path='/vol/libvirt1/auto'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>

>>> stg = conn.storagePoolDefineXML(stgxml, 0)
>>> dir(stg)
['UUID', 'UUIDString', 'XMLDesc', '__del__', '__doc__', '__init__', '__module__', '_conn', '_o', 'autostart', 'build', 'connect', 'create', 'createXML', 'delete', 'destroy', 'info', 'listVolumes', 'name', 'numOfVolumes', 'ref', 'refresh', 'setAutostart', 'storageVolLookupByName', 'undefine']
>>> stg.info()
[0, 0L, 0L, 0L]
>>> conn.listDefinedStoragePools()
['nfspool']
>>> stg.delete(0)
libvir: Storage error : cannot unlink path '/var/lib/libvirt/images': Is a directory
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib64/python2.4/site-packages/libvirt.py", line 748, in delete
    if ret == -1: raise libvirtError ('virStoragePoolDelete() failed', pool=self)
libvirt.libvirtError: cannot unlink path '/var/lib/libvirt/images': Is a directory
This is the XML config of the pool which I am trying to delete:

<pool type="dir">
  <name>test-dir</name>
  <target>
    <path>/var/lib/xen/images</path>
  </target>
</pool>
Created attachment 377844 [details]
patch to fix the problem

The equivalent patch has also been posted upstream, but is not yet ACKed or committed:

https://www.redhat.com/archives/libvir-list/2009-December/msg00283.html

Explanation: unlink() will not remove a directory; rmdir() is the proper function to call. The error message has also been made slightly more informative.

Note that it is intentional that the contents of the directory are not deleted automatically - this has the effect of requiring that all volumes in the pool be deleted before the pool itself can be deleted.
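The unlink()/rmdir() distinction the patch relies on can be demonstrated with Python's thin wrappers around the same syscalls. This is a standalone illustration, not part of the attached patch, and it is written for a current Python rather than the 2.4 interpreter shown elsewhere in this report:

```python
import errno
import os
import tempfile

# Create an empty directory standing in for the pool's target path.
d = tempfile.mkdtemp()

# unlink(2) refuses to remove a directory -- the root cause of the bug.
try:
    os.unlink(d)
except OSError as e:
    # Linux reports EISDIR; POSIX also permits EPERM on some systems.
    assert e.errno in (errno.EISDIR, errno.EPERM)

# rmdir(2) is the correct call for an empty directory, matching the fix.
os.rmdir(d)
assert not os.path.exists(d)
```

As in the patched driver, rmdir() only succeeds on an empty directory, which is why the pool's volumes must be deleted before the pool itself.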
libvirt-0.6.3-25.el5 has been built in dist-5E-qu-candidate with the fix, Daniel
This bug has been verified with libvirt-0.6.3-25.el5 on RHEL-5.4, but I don't have permission to change the bug status to VERIFIED.
Additional information:

[root@dhcp-66-70-91 xml]# python
Python 2.4.3 (#1, Jun 11 2009, 14:09:37)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import libvirt
>>> conn = libvirt.open(None)
>>> print conn.getCapabilities()
<capabilities>
  <host>
    <cpu>
      <arch>x86_64</arch>
      <features>
        <vmx/>
      </features>
    </cpu>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>xenmigr</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <cpus num='2'>
            <cpu id='0'/>
            <cpu id='1'/>
          </cpus>
        </cell>
      </cells>
    </topology>
  </host>
  <guest>
    <os_type>xen</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
      <machine>xenpv</machine>
      <domain type='xen'>
      </domain>
    </arch>
  </guest>
  <guest>
    <os_type>xen</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
      <machine>xenpv</machine>
      <domain type='xen'>
      </domain>
    </arch>
    <features>
      <pae/>
    </features>
  </guest>
  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
      <loader>/usr/lib/xen/boot/hvmloader</loader>
      <machine>xenfv</machine>
      <domain type='xen'>
      </domain>
    </arch>
    <features>
      <pae/>
      <nonpae/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='yes'/>
    </features>
  </guest>
  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
      <loader>/usr/lib/xen/boot/hvmloader</loader>
      <machine>xenfv</machine>
      <domain type='xen'>
      </domain>
    </arch>
    <features>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='yes'/>
    </features>
  </guest>
</capabilities>

>>> fp = open("nfspool.xml")
>>> stgxml = fp.read()
>>> print stgxml
<?xml version='1.0' encoding='UTF-8'?>
<pool type='netfs'>
  <name>nfspool</name>
  <source>
    <host name='10.66.90.115'/>
    <dir path='/vol/libvirt1/auto'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>

>>> stg = conn.storagePoolDefineXML(stgxml, 0)
>>> stg.info()
[0, 0L, 0L, 0L]
>>> stg.delete(0)
0
Verified with libvirt-0.6.3-25.el5 on RHEL-5.4; it is already fixed.

# virsh pool-define dir.xml
Pool test-dir defined from dir.xml

# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
pool-mig             inactive   no
pool-migration       inactive   no
test-dir             inactive   no

# virsh pool-delete test-dir
Pool test-dir deleted
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2010-0205.html