Bug 855409 - Bug 570359 is reported closed but the problem remains on RHEL-6.3
Summary: Bug 570359 is reported closed but the problem remains on RHEL-6.3
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-09-07 16:17 UTC by James B. Byrne
Modified: 2012-09-12 13:06 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-09-12 13:06:37 UTC
Target Upstream Version:
Embargoed:


Attachments
lvremove -fvvvv output (81.62 KB, text/plain)
2012-09-07 16:17 UTC, James B. Byrne

Description James B. Byrne 2012-09-07 16:17:24 UTC
Created attachment 610791 [details]
lvremove -fvvvv output

Description of problem:
As per Bug 570359, it is impossible to remove a logical volume once it has been created.

Version-Release number of selected component (if applicable):

# lvremove --version
  LVM version:     2.02.95(2)-RHEL6 (2012-05-16)
  Library version: 1.02.74-RHEL6 (2012-05-16)
  Driver version:  4.22.6

How reproducible:


Steps to Reproduce:
1. Create an LV.
2. Run lvremove on the LV.
  
Actual results:
Error message:   Logical volume vg_vhost01/lv_vm_inet05.harte-lyne.ca_01 is used by another device.

Expected results:

The LV should be removed.

Additional info:

I attempted to do this in a tight loop of 2000 iterations and employed 'udevadm settle' as advised, all without success.  The system has all updates applied as of 2012-09-07.
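
The loop was essentially of the following form (a rough sketch, not the exact invocation; the LV name is the one used throughout this report):

  for i in $(seq 1 2000); do
    udevadm settle
    lvremove -f vg_vhost01/lv_vm_inet05.harte-lyne.ca_01 && break
  done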

If this is a udev watch problem, is there no way to switch the watch off temporarily when required, as in this situation?

Comment 1 James B. Byrne 2012-09-07 16:18:38 UTC
P.S.

The host is a KVM virtualization host, and the LV was created in virt-manager as a virtio disk for a guest, with the space allocated from an LVM storage pool.

Comment 3 James B. Byrne 2012-09-07 16:27:24 UTC
P.P.S.

Am I correct in inferring from this that deleting a virtual guest will not allow one to return the virtual disk space back into the storage pool?
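
For context, the path I would expect to use for reclaiming that space is something along these lines (the libvirt pool name is assumed to match the volume group; this is only an illustration, not something verified here):

  virsh vol-delete lv_vm_inet05.harte-lyne.ca_01 --pool vg_vhost01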

Comment 4 Peter Rajnoha 2012-09-11 10:52:29 UTC
(In reply to comment #0)
> Error message:   Logical volume vg_vhost01/lv_vm_inet05.harte-lyne.ca_01 is
> used by another device.

Please check the output of "lsblk /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01" and see whether there is really no other device layered on top of this one. There shouldn't be, but let's see...

Comment 5 James B. Byrne 2012-09-11 19:27:50 UTC
# lsblk /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01
NAME                                                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
vg_vhost01-lv_vm_inet05.harte--lyne.ca_01 (dm-17)     253:17   0   3.1G  0 lvm  
└─vg_vhost01-lv_vm_inet05.harte--lyne.ca_01p1 (dm-18) 253:18   0   3.1G  0 dm

Comment 6 Peter Rajnoha 2012-09-12 05:56:41 UTC
(In reply to comment #5)
> # lsblk /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01
> NAME                                                  MAJ:MIN RM   SIZE RO
> TYPE MOUNTPOINT
> vg_vhost01-lv_vm_inet05.harte--lyne.ca_01 (dm-17)     253:17   0   3.1G  0
> lvm  
> └─vg_vhost01-lv_vm_inet05.harte--lyne.ca_01p1 (dm-18) 253:18   0   3.1G  0 dm

The dm device with the "p1" suffix seems to be a mapping for a partition that sits on the logical volume vg_vhost01-lv_vm_inet05.harte--lyne.ca_01. The mapping could have been created by calling kpartx or partprobe directly on that device.

One way kpartx could be called automatically, without notice, is from /lib/udev/rules.d/40-multipath.rules if device-mapper-multipath is installed. However, that rule should apply only to multipath devices, which does not seem to be the case here. So kpartx/partprobe must be getting called from somewhere else...

You can try:

  egrep "(kpartx|partprobe)" /etc/udev/rules.d/* /lib/udev/rules.d/*

...to see whether either of them is called anywhere besides 40-multipath.rules.
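
As a side check (just a sketch based on the lsblk output above), the layering can also be confirmed directly:

  dmsetup deps /dev/mapper/vg_vhost01-lv_vm_inet05.harte--lyne.ca_01p1
  ls /sys/block/dm-17/holders

The first should report a dependency on (253, 17), and the holders directory of dm-17 should list dm-18.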

Comment 7 James B. Byrne 2012-09-12 12:40:11 UTC
# egrep "(kpartx|partprobe)" /etc/udev/rules.d/* /lib/udev/rules.d/*
/lib/udev/rules.d/40-multipath.rules:RUN+="$env{MPATH_SBIN_PATH}/kpartx -a -p p $tempnode"

It does not appear so.  Would running partprobe manually cause this?  I ask because immediately after I partition the new virtual disk with fdisk, I receive a message telling me to run partprobe, which I did.

Comment 8 James B. Byrne 2012-09-12 12:47:09 UTC
These are the relevant entries in the history file:

799      fdisk /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01
800      partprobe
801      /sbin/lvchange -aln /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01

Comment 9 Peter Rajnoha 2012-09-12 13:06:37 UTC
(In reply to comment #7)
> # egrep "(kpartx|partprobe)" /etc/udev/rules.d/* /lib/udev/rules.d/*
> /lib/udev/rules.d/40-multipath.rules:RUN+="$env{MPATH_SBIN_PATH}/kpartx -a
> -p p $tempnode"
> 
> It does not appear so.  Would running partprobe manually cause this?  I ask
> because immediately after I partition the new virtual disk with fdisk, I
> receive a message telling me to run partprobe, which I did.

Yes, that's exactly the cause.

It's just a new mapping created on top of the original LV, which then needs to be removed manually as well, for example by calling:

  dmsetup remove /dev/mapper/vg_vhost01-lv_vm_inet05.harte--lyne.ca_01p1

Or, perhaps the more proper way would be to remove the partition (with fdisk/parted/...) and then call partprobe again, which should detect that the partition is gone and remove the corresponding mapping. I think parted can create/remove these mappings directly, without the extra partprobe call (so its use is more straightforward in this respect).
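
Putting it together, one possible cleanup sequence would be the following (device names taken from this report; a sketch of one option, not the only valid order):

  kpartx -d -p p /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01
  lvremove vg_vhost01/lv_vm_inet05.harte-lyne.ca_01

...where "kpartx -d -p p" removes the ...p1 partition mapping (the counterpart of the "kpartx -a -p p" call from 40-multipath.rules, and equivalent to the dmsetup remove above), after which the lvremove should succeed.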

So this is not a bug; it just needs an extra step to clean up the extra device stack above the LV. I hope this helps.

