Bug 855409
| Field | Value |
|---|---|
| Summary | Bug 570359 is reported closed but the problem remains on RHEL-6.3 |
| Product | Red Hat Enterprise Linux 6 |
| Reporter | James B. Byrne <byrnejb> |
| Component | lvm2 |
| Assignee | LVM and device-mapper development team <lvm-team> |
| Status | CLOSED NOTABUG |
| QA Contact | Cluster QE <mspqa-list> |
| Severity | unspecified |
| Priority | unspecified |
| Version | 6.3 |
| CC | agk, dwysocha, heinzm, jbrassow, msnitzer, prajnoha, prockai, thornber, zkabelac |
| Target Milestone | rc |
| Hardware | x86_64 |
| OS | Linux |
| Doc Type | Bug Fix |
| Type | Bug |
| Last Closed | 2012-09-12 13:06:37 UTC |
Description
James B. Byrne, 2012-09-07 16:17:24 UTC
P.S. The host is a KVM virtual host and the LV was created in virt-manager as a virtio disk for a guest, with space allocated from an LVM storage pool.

P.P.S. Am I correct in inferring from this that deleting a virtual guest will not allow one to return the virtual disk space back into the storage pool?

(In reply to comment #0)
> Error message: Logical volume vg_vhost01/lv_vm_inet05.harte-lyne.ca_01 is
> used by another device.

Please try checking the output of "lsblk /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01" to see whether there really is another device layered on top of this one... there shouldn't be, but let's see...

```
# lsblk /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01
NAME                                                  MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vg_vhost01-lv_vm_inet05.harte--lyne.ca_01 (dm-17)     253:17   0 3.1G  0 lvm
└─vg_vhost01-lv_vm_inet05.harte--lyne.ca_01p1 (dm-18) 253:18   0 3.1G  0 dm
```

(In reply to comment #5)
> # lsblk /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01
> NAME                                                  MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
> vg_vhost01-lv_vm_inet05.harte--lyne.ca_01 (dm-17)     253:17   0 3.1G  0 lvm
> └─vg_vhost01-lv_vm_inet05.harte--lyne.ca_01p1 (dm-18) 253:18   0 3.1G  0 dm

The dm device with the "p1" suffix appears to be a mapping for a partition that sits on the logical volume vg_vhost01-lv_vm_inet05.harte--lyne.ca_01. The mapping could have been created by calling kpartx or partprobe directly on that device.

One way kpartx could be called automatically, without notice, is from /lib/udev/rules.d/40-multipath.rules if device-mapper-multipath is installed. However, that should only happen on multipath devices, which does not seem to be the case here. So kpartx/partprobe must have been called from somewhere else...

You can try:

```
egrep "(kpartx|partprobe)" /etc/udev/rules.d/* /lib/udev/rules.d/*
```

...to see whether it is called anywhere besides 40-multipath.rules.

```
# egrep "(kpartx|partprobe)" /etc/udev/rules.d/* /lib/udev/rules.d/*
/lib/udev/rules.d/40-multipath.rules:RUN+="$env{MPATH_SBIN_PATH}/kpartx -a -p p $tempnode"
```

It does not appear so. Would running partprobe manually cause this? I ask because immediately after I partition the new virtual disk with fdisk, I receive an error message telling me to do so, which I have. These are the relevant entries in the history file:

```
799  fdisk /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01
800  partprobe
801  /sbin/lvchange -aln /dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01
```

(In reply to comment #7)
> It does not appear so. Would running partprobe manually cause this? I ask
> because immediately after I partition the new virtual disk with fdisk I
> receive an error message telling me to do so, which I have.

Yes, that's exactly the cause. It's simply a new mapping created on top of the original LV, and it needs to be removed manually as well, for example by calling:

```
dmsetup remove /dev/mapper/vg_vhost01-lv_vm_inet05.harte--lyne.ca_01p1
```

Or, perhaps the more proper way would be to remove the partition first (with fdisk/parted/...) and only then call partprobe, which should detect that the partition is gone and remove the corresponding mapping. I think parted can create/remove these mappings directly, without the extra call to partprobe, so its use is more straightforward in this respect.

So this is not a bug; it just needs an extra step to clean up the extra stack above the LV being used. I hope this helps.
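For reference, a minimal shell sketch of the cleanup sequence described in the comments above, using the LV and mapping names from this report. The final lvchange/lvremove step is an assumption about the reporter's goal (returning the space to the storage pool) and is not taken verbatim from the thread.

```sh
#!/bin/sh
# Illustrative sketch only; names below are the ones from this report.
LV=/dev/vg_vhost01/lv_vm_inet05.harte-lyne.ca_01

# 1. Inspect what is stacked on top of the LV; the "*p1" entry is the stale
#    partition mapping created by partprobe/kpartx after fdisk.
lsblk "$LV"

# 2a. Remove the stale partition mapping directly...
dmsetup remove /dev/mapper/vg_vhost01-lv_vm_inet05.harte--lyne.ca_01p1

# 2b. ...or, alternatively, delete the partition with fdisk/parted first and
#     re-run partprobe so the mapping is dropped for you:
# fdisk "$LV"        # delete the partition table entry
# partprobe "$LV"

# 3. With nothing layered on top, the LV is no longer "used by another
#    device" and can be deactivated and removed (assumed end goal).
lvchange -an "$LV"
lvremove "$LV"
```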