Bug 855409

Summary: Bug 570359 is reported closed but the problem remains on RHEL-6.3
Product: Red Hat Enterprise Linux 6
Component: lvm2
Version: 6.3
Hardware: x86_64
OS: Linux
Status: CLOSED NOTABUG
Severity: unspecified
Priority: unspecified
Reporter: James B. Byrne <byrnejb>
Assignee: LVM and device-mapper development team <lvm-team>
QA Contact: Cluster QE <mspqa-list>
CC: agk, dwysocha, heinzm, jbrassow, msnitzer, prajnoha, prockai, thornber, zkabelac
Target Milestone: rc
Doc Type: Bug Fix
Type: Bug
Last Closed: 2012-09-12 13:06:37 UTC

Attachments: lvremove -fvvvv output

Description James B. Byrne 2012-09-07 16:17:24 UTC
Created attachment 610791
lvremove -fvvvv output

Description of problem:
As per Bug 570359, it is impossible to remove a logical volume once it has been created.

Version-Release number of selected component (if applicable):

# lvremove --version
  LVM version:     2.02.95(2)-RHEL6 (2012-05-16)
  Library version: 1.02.74-RHEL6 (2012-05-16)
  Driver version:  4.22.6

How reproducible:


Steps to Reproduce:
1. Create an LV.
2. Run lvremove on the LV (a hedged sketch follows below).
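
A minimal reproduction sketch. The VG/LV names are taken from this report, the 3G size is an assumption based on the lsblk output in comment 5, and the fdisk/partprobe step (confirmed in comments 8 and 9) is what triggers the failure:

  lvcreate -L 3G -n lv_vm_inet05.harte-lyne.ca_01 vg_vhost01
  fdisk /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01    # create one partition
  partprobe                                              # registers a p1 mapping over the LV
  lvremove vg_vhost01/lv_vm_inet05.harte-lyne.ca_01      # fails: "is used by another device"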
  
Actual results:
Error message:   Logical volume vg_vhost01/lv_vm_inet05.harte-lyne.ca_01 is used by another device.

Expected results:

The LV should be removed.

Additional info:

I attempted this in a tight loop of 2000 iterations and employed 'udevadm settle' as advised, all without success.  The system has all updates applied as of 2012-09-07.
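
A hedged sketch of the retry loop described above, using the LV name from this report:

  # Settle udev between attempts and stop as soon as lvremove succeeds.
  for i in $(seq 1 2000); do
      udevadm settle
      lvremove -f vg_vhost01/lv_vm_inet05.harte-lyne.ca_01 && break
  done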

If this is a udev watch problem, is there no way to switch the watch off temporarily when required, as in this situation?

Comment 1 James B. Byrne 2012-09-07 16:18:38 UTC
P.S.

The host is a KVM virtualization host, and the LV was created in virt-manager as a virtio disk for a guest, with space allocated from an LVM storage pool.

Comment 3 James B. Byrne 2012-09-07 16:27:24 UTC
P.P.S.

Am I correct in inferring from this that deleting a virtual guest will not allow one to return the virtual disk space to the storage pool?

Comment 4 Peter Rajnoha 2012-09-11 10:52:29 UTC
(In reply to comment #0)
> Error message:   Logical volume vg_vhost01/lv_vm_inet05.harte-lyne.ca_01 is
> used by another device.

Please check the output of "lsblk /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01" and see whether there really is another device layered on top of this one. There shouldn't be, but let's see...

Comment 5 James B. Byrne 2012-09-11 19:27:50 UTC
# lsblk /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01
NAME                                                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
vg_vhost01-lv_vm_inet05.harte--lyne.ca_01 (dm-17)     253:17   0   3.1G  0 lvm  
└─vg_vhost01-lv_vm_inet05.harte--lyne.ca_01p1 (dm-18) 253:18   0   3.1G  0 dm

Comment 6 Peter Rajnoha 2012-09-12 05:56:41 UTC
(In reply to comment #5)
> # lsblk /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01
> NAME                                                  MAJ:MIN RM   SIZE RO
> TYPE MOUNTPOINT
> vg_vhost01-lv_vm_inet05.harte--lyne.ca_01 (dm-17)     253:17   0   3.1G  0
> lvm  
> └─vg_vhost01-lv_vm_inet05.harte--lyne.ca_01p1 (dm-18) 253:18   0   3.1G  0 dm

The dm device with the "p1" suffix appears to be a mapping for a partition on the logical volume vg_vhost01-lv_vm_inet05.harte--lyne.ca_01. Such a mapping could have been created by calling kpartx or partprobe directly on the device.
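
A hedged illustration of how kpartx creates and removes such mappings, using the LV from this report:

  kpartx -a /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01   # adds a pN mapping for each partition found
  kpartx -d /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01   # deletes those mappings again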

One place kpartx could be called automatically, without notice, is /lib/udev/rules.d/40-multipath.rules, if device-mapper-multipath is installed. However, that should apply only to multipath devices, which does not seem to be the case here. So kpartx/partprobe must be getting called somewhere else...

You can try:

  egrep "(kpartx|partprobe)" /etc/udev/rules.d/* /lib/udev/rules.d/*

...to see whether it's called anywhere besides 40-multipath.rules.

Comment 7 James B. Byrne 2012-09-12 12:40:11 UTC
# egrep "(kpartx|partprobe)" /etc/udev/rules.d/* /lib/udev/rules.d/*
/lib/udev/rules.d/40-multipath.rules:RUN+="$env{MPATH_SBIN_PATH}/kpartx -a -p p $tempnode"

It does not appear so.  Would running partprobe manually cause this?  I ask because immediately after I partition the new virtual disk with fdisk, I receive an error message telling me to do so, which I did.

Comment 8 James B. Byrne 2012-09-12 12:47:09 UTC
These are the relevant entries in the history file:

799      fdisk /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01
800      partprobe
801      /sbin/lvchange -aln /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01

Comment 9 Peter Rajnoha 2012-09-12 13:06:37 UTC
(In reply to comment #7)
> # egrep "(kpartx|partprobe)" /etc/udev/rules.d/* /lib/udev/rules.d/*
> /lib/udev/rules.d/40-multipath.rules:RUN+="$env{MPATH_SBIN_PATH}/kpartx -a
> -p p $tempnode"
> 
> It does not appear so.  Would running partprobe manually cause this?  I ask
> because immediately after I partition the new virtual disk with fdisk I
> receive an error message telling me to do so, which I have.

Yes, that's exactly the cause.

It's just a new mapping created on top of the original LV, which then needs to be removed manually as well, for example by calling:

  dmsetup remove /dev/mapper/vg_vhost01-lv_vm_inet05.harte--lyne.ca_01p1
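
A hedged follow-up: once the stale mapping is gone, the removal should go through. For example:

  lsblk /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01   # the p1 entry should no longer be listed
  lvremove vg_vhost01/lv_vm_inet05.harte-lyne.ca_01     # should now succeed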

Or, a more proper way would be to remove the partition first (with fdisk/parted/...) and only then call partprobe, which should detect that the partition is gone and remove the corresponding mapping. I think parted can create/remove the mappings directly, without the extra partprobe call, so its use is more straightforward in this respect.
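
A hedged sketch of that alternative, assuming the partition to delete is number 1:

  parted /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01 rm 1   # delete partition 1
  partprobe /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01     # should drop the stale p1 mapping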

So this is not a bug; it just needs an extra step to clean up the extra stack created on top of the LV. I hope this helps.