Bug 591390 - Can't remove logical volume
Summary: Can't remove logical volume
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Assignee: Dave Allan
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: Rhel6.1LibvirtTier1
 
Reported: 2010-05-12 05:59 UTC by Alex Jia
Modified: 2016-04-26 13:57 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-10-11 15:28:25 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description Alex Jia 2010-05-12 05:59:12 UTC
Description of problem:
Two aspects of libvirt's design need attention:
1. a storage volume object can be obtained if and only if the storage pool is active
2. a storage volume can be removed if and only if the storage pool is inactive

These seem to conflict: once we destroy (deactivate) the storage pool, we can
no longer obtain the volume object, so we cannot call the volume object's
'delete' method, and ultimately we cannot remove the logical volume.

Version-Release number of selected component (if applicable):
[root@dhcp-66-70-62 libvirt-test-API]# uname -a
Linux dhcp-66-70-62.nay.redhat.com 2.6.32-20.el6.x86_64 #1 SMP Tue Apr 6 13:40:08 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

[root@dhcp-66-70-62 libvirt-test-API]# cat /etc/redhat-release 
Red Hat Enterprise Linux release 6.0 Beta (Santiago)

[root@dhcp-66-70-62 libvirt-test-API]# rpm -qa|grep libvirt
libvirt-java-devel-0.4.2-2.el6.noarch
libvirt-0.8.1-2.el6.x86_64
libvirt-python-0.8.1-2.el6.x86_64
libvirt-debuginfo-0.8.1-2.el6.x86_64
libvirt-cim-0.5.8-2.el6.x86_64
libvirt-cim-debuginfo-0.5.8-2.el6.x86_64
libvirt-java-0.4.2-2.el6.noarch
libvirt-client-0.8.1-2.el6.x86_64
libvirt-devel-0.8.1-2.el6.x86_64

[root@dhcp-66-70-62 libvirt-test-API]# rpm -qa|grep kvm
qemu-kvm-tools-0.12.1.2-2.48.el6.x86_64
qemu-kvm-debuginfo-0.12.1.2-2.48.el6.x86_64
qemu-kvm-0.12.1.2-2.48.el6.x86_64

How reproducible:
always

Steps to Reproduce:
1. create a free partition using the 'fdisk' command
2. define a logical-type storage pool and build it
3. start the logical storage pool
4. create a logical-type storage volume in the pool
5. try to remove the storage volume; if that fails, go to step 6
6. destroy (deactivate) the storage pool
7. repeat step 5
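
Step 2 above defines and builds a logical pool from that partition. For reference, a pool definition along these lines would be passed to 'virsh pool-define' (a sketch only; the pool name and device path are taken from the HostVG pool shown in the output below, adjust for your own partition):

```xml
<!-- Sketch of a logical-type pool definition for step 2.
     Save as HostVG.xml, then:
       virsh pool-define HostVG.xml
       virsh pool-build HostVG
       virsh pool-start HostVG -->
<pool type='logical'>
  <name>HostVG</name>
  <source>
    <device path='/dev/sda6'/>
  </source>
  <target>
    <path>/dev/HostVG</path>
  </target>
</pool>
```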
  
Actual results:
(creation of the logical-type storage pool and volume is omitted below)

[root@dhcp-66-70-62 tests]# virsh pool-list
Name                 State      Autostart 
-----------------------------------------
default              active     yes       
HostVG               active     no        
virtimages           active     no

[root@dhcp-66-70-62 tests]# virsh vol-list HostVG
Name                 Path                                    
-----------------------------------------
Swap                 /dev/HostVG/Swap

[root@dhcp-66-70-62 tests]# virsh vol-dumpxml --pool HostVG Swap
<volume>
  <name>Swap</name>
  <key>VZ4KCs-McsH-3z8U-6fGx-b2q2-XhOx-V11K4R</key>
  <source>
    <device path='/dev/sda6'>
      <extent start='0' end='4194304'/>
    </device>
  </source>
  <capacity>4194304</capacity>
  <allocation>4194304</allocation>
  <target>
    <path>/dev/HostVG/Swap</path>
    <permissions>
      <mode>0660</mode>
      <owner>0</owner>
      <group>6</group>
      <label>system_u:object_r:fixed_disk_device_t:s0</label>
    </permissions>
  </target>
</volume>
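
The dump above can be inspected programmatically, e.g. to recover the device path that lvremove operates on. A minimal stdlib sketch (XML abridged to the fields used):

```python
import xml.etree.ElementTree as ET

# Volume XML as dumped by `virsh vol-dumpxml --pool HostVG Swap` above
# (permissions element omitted for brevity).
VOL_XML = """<volume>
  <name>Swap</name>
  <key>VZ4KCs-McsH-3z8U-6fGx-b2q2-XhOx-V11K4R</key>
  <source>
    <device path='/dev/sda6'>
      <extent start='0' end='4194304'/>
    </device>
  </source>
  <capacity>4194304</capacity>
  <allocation>4194304</allocation>
  <target>
    <path>/dev/HostVG/Swap</path>
  </target>
</volume>"""

vol = ET.fromstring(VOL_XML)
name = vol.findtext("name")
path = vol.findtext("target/path")
capacity = int(vol.findtext("capacity"))
allocation = int(vol.findtext("allocation"))

# Logical volumes are fully pre-allocated, so capacity equals allocation.
assert capacity == allocation
print(name, path)  # → Swap /dev/HostVG/Swap
```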

[root@dhcp-66-70-62 tests]# virsh vol-delete --pool HostVG Swap
error: Failed to delete vol Swap
error: internal error '/sbin/lvremove -f /dev/HostVG/Swap' exited with non-zero status 5 and signal 0:   Can't remove open logical volume "Swap"

[root@dhcp-66-70-62 tests]# virsh pool-destroy HostVG
Pool HostVG destroyed

[root@dhcp-66-70-62 tests]# virsh pool-list --all
Name                 State      Autostart 
-----------------------------------------
default              active     yes       
virtimages           active     no        
dirpool              inactive   no        
HostVG               inactive   no                
nfspool              inactive   no

[root@dhcp-66-70-62 tests]# virsh vol-delete --pool HostVG Swap
error: failed to get vol 'Swap'
error: invalid storage volume pointer in no storage vol with matching path

[root@dhcp-66-70-62 tests]# lvremove -f /dev/HostVG/Swap
  Logical volume "Swap" successfully removed

Expected results:
1. a storage volume object can be obtained from an inactive (defined) storage pool
or
2. a storage volume can be removed from an active storage pool
or
the conflict is otherwise fixed

Additional info:

Comment 2 RHEL Program Management 2010-05-12 07:05:20 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.

Comment 7 Wayne Sun 2010-08-19 07:54:05 UTC
After creating a vol in a logical pool with 'virsh vol-create':

# virsh vol-delete --pool pool-logical-testing vol-logical-testing
error: Failed to delete vol vol-logical-testing
error: internal error '/sbin/lvremove -f /dev/pool-logical-testing/vol-logical-testing' exited with non-zero status 5 and signal 0:   Can't remove open logical volume "vol-logical-testing"

# virsh pool-destroy pool-logical-testing
error: Failed to destroy pool pool-logical-testing
error: internal error '/sbin/vgchange -an pool-logical-testing' exited with non-zero status 5 and signal 0:   Can't deactivate volume group "pool-logical-testing" with 1 open logical volume(s)

But after unmounting the vol,

# umount /tmp/logical-vol-mnt-testing

Then,

# virsh vol-delete --pool pool-logical-testing vol-logical-testing
Vol vol-logical-testing deleted

# virsh pool-destroy pool-logical-testing
Pool pool-logical-testing destroyed
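
Both failures in this comment and in the original report carry the same underlying LVM complaint about an open logical volume, which is how a test harness such as libvirt-test-API could distinguish this case from other vol-delete failures. A minimal sketch (the helper name is hypothetical; the error strings are the ones reported above):

```python
import re

# Error messages quoted verbatim from this bug report.
ERRORS = [
    "error: internal error '/sbin/lvremove -f /dev/HostVG/Swap' exited with "
    "non-zero status 5 and signal 0:   Can't remove open logical volume \"Swap\"",
    "error: internal error '/sbin/vgchange -an pool-logical-testing' exited with "
    "non-zero status 5 and signal 0:   Can't deactivate volume group "
    "\"pool-logical-testing\" with 1 open logical volume(s)",
]

# LVM refuses both lvremove and vgchange while an LV is open (e.g. mounted);
# the wording "open logical volume" identifies this case.
_OPEN_LV = re.compile(r"open logical volume")

def is_open_lv_failure(msg: str) -> bool:
    """Return True if the virsh error indicates an in-use (open) LV."""
    return bool(_OPEN_LV.search(msg))

assert all(is_open_lv_failure(e) for e in ERRORS)
```

On a match, the remedy is the one comment 7 demonstrates: unmount (or otherwise release) the volume, then retry vol-delete.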

Comment 8 Daniel Berrangé 2010-08-19 08:10:50 UTC
This seems to be operating as designed. If you have mounted the logical volume as a filesystem, then clearly it is in use by the OS, so vol-delete is expected to fail.

Comment 9 Dave Allan 2010-10-11 15:28:25 UTC
Agreed with comment 8; closing as NOTABUG.
