Description of problem:
I was unable to remove a VG because it reportedly "still contains a logical volume", even though no LVs are visible. This may just be an ordinary lvm bug, but the only time I've ever seen this was with clvmd, so it seems like the cluster aspect may be what is causing this phantom LV.

[root@link-02 ~]# pvscan
  PV /dev/sda1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdb1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdc1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdd1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sde1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdf1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdg1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  Total: 7 [949.59 GB] / in use: 7 [949.59 GB] / in no VG: 0 [0 ]
[root@link-02 ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "corey" using metadata type lvm2
[root@link-02 ~]# lvscan
[root@link-02 ~]# lvs -a -o +devices
[root@link-02 ~]# dmsetup ls
VolGroup00-LogVol01     (253, 1)
VolGroup00-LogVol00     (253, 0)
[root@link-02 ~]# vgremove corey
  Volume group "corey" still contains 1 logical volume(s)

[root@link-04 ~]# pvscan
  PV /dev/sda1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdb1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdc1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdd1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sde1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdf1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdg1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  Total: 7 [949.59 GB] / in use: 7 [949.59 GB] / in no VG: 0 [0 ]
[root@link-04 ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "corey" using metadata type lvm2
[root@link-04 ~]# lvscan
[root@link-04 ~]# lvs -a -o +devices
[root@link-04 ~]# dmsetup ls
VolGroup00-LogVol01     (253, 1)
VolGroup00-LogVol00     (253, 0)
[root@link-04 ~]# vgremove corey
  Volume group "corey" still contains 1 logical volume(s)

[root@link-07 ~]# pvscan
  PV /dev/sda1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdb1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdc1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdd1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sde1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdf1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdg1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  Total: 7 [949.59 GB] / in use: 7 [949.59 GB] / in no VG: 0 [0 ]
[root@link-07 ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "corey" using metadata type lvm2
[root@link-07 ~]# lvscan
[root@link-07 ~]# lvs -a -o +devices
[root@link-07 ~]# dmsetup ls
VolGroup00-LogVol01     (253, 1)
VolGroup00-LogVol00     (253, 0)
[root@link-07 ~]# vgremove corey
  Volume group "corey" still contains 1 logical volume(s)

[root@link-08 ~]# pvscan
  PV /dev/sda1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdb1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdc1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdd1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sde1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdf1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdg1   VG corey   lvm2 [135.66 GB / 135.66 GB free]
  Total: 7 [949.59 GB] / in use: 7 [949.59 GB] / in no VG: 0 [0 ]
[root@link-08 ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "corey" using metadata type lvm2
[root@link-08 ~]# lvscan
[root@link-08 ~]# lvs -a -o +devices
[root@link-08 ~]# dmsetup ls
VolGroup00-LogVol01     (253, 1)
VolGroup00-LogVol00     (253, 0)
[root@link-08 ~]# vgremove corey
  Volume group "corey" still contains 1 logical volume(s)

Version-Release number of selected component (if applicable):
[root@link-02 ~]# rpm -qa | grep lvm2
lvm2-cluster-2.02.13-1
lvm2-cluster-debuginfo-2.02.06-7.0.RHEL4
lvm2-2.02.13-1
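For reference, a quick way to confirm the mismatch between what the VG metadata claims and what lvs reports (a sketch only; lv_count and vgcfgbackup are standard lvm2 facilities, but the output is not taken from the cluster above):

  vgs -o +lv_count corey             # how many LVs the VG metadata thinks it contains
  lvs -a corey                       # ...versus what is actually listed (including hidden LVs)
  # dump the live on-disk metadata to a file so the phantom LV can be inspected directly
  vgcfgbackup -f /tmp/corey.vg corey
  grep -n mimage /tmp/corey.vg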
It would be interesting to know what the other cluster members thought the VG state was - and what the lvm metadata looked like.
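A sketch of how that comparison could be gathered (assumes the default lvm2 paths; the hostnames are the cluster nodes from the report):

  # on each of link-02, link-04, link-07 and link-08:
  vgcfgbackup -f /tmp/corey.$(hostname -s).vg corey
  grep seqno /tmp/corey.$(hostname -s).vg
  # then copy the files to one node and diff them:
  diff /tmp/corey.link-02.vg /tmp/corey.link-04.vg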
Created attachment 140670 [details] metadata
hmm, though the metadata looks a bit garbled, there seem to be quite a few mirror components listed in there. brassow may be interested in that.
Assigning to Jon as it looks (to me) like there are mirror components being left lying around; these would be hidden from the lvs command (though not from 'lvs -a'). Send it back if that's not true :)
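For anyone following along, this is the sort of difference meant - illustrative output from a hypothetical VG "vg" containing a mirror LV "m", not from this bug:

  [root@node ~]# lvs vg
    LV   VG   Attr   LSize
    m    vg   mwi-a- 100.00M
  [root@node ~]# lvs -a vg
    LV           VG   Attr   LSize
    m            vg   mwi-a- 100.00M
    [m_mimage_0] vg   iwi-ao 100.00M
    [m_mimage_1] vg   iwi-ao 100.00M
    [m_mlog]     vg   lwi-ao   4.00M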
Can you reproduce this? And what happens when you do an 'lvremove -ff corey' before trying the 'vgremove corey'? (Yes, I understand that whatever LV is blocking it is not being printed.)
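i.e. something along these lines (giving -f twice forces removal without prompting, even for LVs in a bad state):

  lvremove -ff corey
  vgremove corey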
Looks like we were in the process of down-converting/removing... where did this "mirror3_mimage_0" come from? Looking at the attachment, seqno = 558 is the last one issued.

555: "mirror", "mirror_mlog", "mirror_mimage_1", "mirror_mimage_0"; "mirror3_mimage_0" - still there...
556: "mirror" - now linear; "mirror_mlog", "mirror_mimage_1", "mirror_mimage_0" - now with zero seg count; "mirror3_mimage_0" - still there...
557: "mirror" - linear; "mirror3_mimage_0" - still there...
558: "mirror3_mimage_0" - still there...
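For context, a zero-segment LV stanza in the lvm2 text metadata looks roughly like this (field values are illustrative placeholders, not copied from the attachment):

  corey {
          logical_volumes {
                  mirror3_mimage_0 {
                          id = "......"              # illustrative placeholder
                          status = ["READ", "WRITE"]
                          segment_count = 0          # no segments, so lvscan/lvs show nothing,
                                                     # but vgremove still counts it as an LV
                  }
          }
  }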
The history of where "mirror3_mimage_0" came from has long since been lost by the circular buffer. (Hundreds of changes since it was created.) Recreatable?
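For what it's worth, whatever is still on disk can be checked with something like this (default lvm2 archive location assumed):

  ls /etc/lvm/archive/corey_*.vg
  grep -l mirror3_mimage_0 /etc/lvm/archive/corey_*.vg   # which archived revisions still mention it
  grep seqno /etc/lvm/archive/corey_*.vg                 # map archive files to metadata seqnos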
(there was a very old bug that used to leave mirror images behind - also could be udev related)
I have not gotten LVM to run across this problem on its own... However, if I inject an LV with the same characteristics as Corey's "mirror3_mimage_0" (i.e. an LV with 0 segments), I get the same results. The work-around that I suggested above solves the issue - 'lvremove -ff <vg name>'. How you ever got into this state is another question. Reproduce it, and we'll talk. :)

[root@neo-04 ~]# lvs
  LV         VG           Attr   LSize   Origin Snap%  Move Log Copy%  Devices
  LogVolKS00 VolGroupKS00 -wi-ao  36.62G                               /dev/hda2(0)
  LogVolKS01 VolGroupKS00 -wi-ao 512.00M                               /dev/hda2(1172)
[root@neo-04 ~]# dmsetup ls
VolGroupKS00-LogVolKS01 (253, 1)
VolGroupKS00-LogVolKS00 (253, 0)
[root@neo-04 ~]# vgs
  VG           #PV #LV #SN Attr   VSize   VFree   VG Tags
  VolGroupKS00   1   2   0 wz--n-  37.16G  32.00M neo-04.lab.msp.redhat.com
  vg             7   1   0 wz--nc 120.01G 120.01G
[root@neo-04 ~]# vgremove vg
  Volume group "vg" still contains 1 logical volume(s)
[root@neo-04 ~]# lvremove -ff vg
  Logical volume "brassow" successfully removed
[root@neo-04 ~]# vgremove vg
  Volume group "vg" successfully removed
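The injection itself can be done by hand-editing a metadata backup and restoring it - a sketch, for scratch VGs only; this is just how I would approximate Corey's state, not necessarily how it arose:

  vgcfgbackup -f /tmp/vg.txt vg
  # edit /tmp/vg.txt: add an LV stanza under logical_volumes with segment_count = 0
  vgcfgrestore -f /tmp/vg.txt vg
  vgremove vg        # now fails with "still contains 1 logical volume(s)"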
In response to comment #8... This is not the old bug that left mirror images behind. That was due to dmeventd waiting on a device and therefore not allowing device-mapper to pull it out. This bug has to do with conversion. If you look at the stages outlined in comment #6, you can see that one of the images goes to 0 segments before being removed on the conversion. If it is not removed, this bug will occur. I don't know if this bug affects only the LV with 0 segments or if it could affect them all - Corey's trace didn't go back far enough.
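For reference, the kind of down-conversion being described is simply (illustrative commands, not Corey's exact ones):

  lvcreate -m 1 -L 100M -n mirror corey    # mirror with two images and a log
  lvconvert -m 0 corey/mirror              # down-convert to linear; the dropped image
                                           # must be deleted from the metadata rather
                                           # than left behind with 0 segments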
hmmm, found corey's archives on link-02 :)
Created attachment 141202 [details]
Fossil records

All the archives from when mirror3 existed to when it got screwed over. (Pulled from link-02... I don't know if the other machine(s) were also doing operations.)
But could it still have been udev holding the device open, preventing lvm2 from removing it?
Created attachment 141703 [details] patch
Created attachment 141704 [details]
patch

We need to remove the phantom LV (<lvname>_mimage_0) that has no segments before writing out the metadata.
Created attachment 141710 [details]
Good Patch

We no longer write out an LV that has 0 segments in the metadata. Instead, we replace the segment with the "error" segtype.
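Roughly, instead of a stanza with segment_count = 0, the metadata now gets something of this shape (a sketch from memory of the lvm2 text format, not the patch's literal output):

  mirror3_mimage_0 {
          id = "......"               # illustrative placeholder
          status = ["READ", "WRITE"]
          segment_count = 1
          segment1 {
                  start_extent = 0
                  extent_count = 0    # illustrative
                  type = "error"      # placeholder segtype instead of an empty LV
          }
  }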
assigning to agk to review, add to repo, and mark as POST/MODIFIED.
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux major release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux Major release. This request is not yet committed for inclusion.
This bug has not been seen with the latest lvm2 builds; marking verified.
A package has been built which should help the problem described in this bug report. This report is therefore being closed with a resolution of CURRENTRELEASE. You may reopen this bug report if the solution does not work for you.