Red Hat Bugzilla – Bug 162704
Long-running LVM2 processes fail to handle VG deletion followed by new VG creation with same name as old one
Last modified: 2007-11-30 17:07:19 EST
Here is the view of my volumes across all three nodes in the cluster:
[root@link-11 ~]# pvscan
PV /dev/sda1 VG link101112 lvm2 [135.64 GB / 0 free]
PV /dev/sdb1 VG link101112 lvm2 [135.64 GB / 0 free]
PV /dev/sdc1 lvm2 [135.65 GB]
PV /dev/sdd1 lvm2 [135.65 GB]
PV /dev/sde1 lvm2 [135.66 GB]
PV /dev/sdf1 lvm2 [135.66 GB]
PV /dev/sdg1 lvm2 [135.66 GB]
Total: 7 [949.56 GB] / in use: 2 [271.29 GB] / in no VG: 5 [678.27 GB]
[root@link-11 ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "link101112" using metadata type lvm2
[root@link-11 ~]# lvscan
ACTIVE '/dev/link101112/lvol0' [271.29 GB] inherit
I then delete the lv and vg...
[root@link-11 ~]# lvremove /dev/link101112/lvol0
Do you really want to remove active logical volume "lvol0"? [y/n]: y
Logical volume "lvol0" successfully removed
[root@link-11 ~]# vgremove link101112
Volume group "link101112" successfully removed
I then recreate the vg and lv but with less space (one fewer pv)...
[root@link-11 ~]# vgcreate link101112 /dev/sda1
Volume group "link101112" successfully created
Now, if I try to create the lv, I'll see locking errors when trying to activate,
even if I do a vgscan on all nodes in the cluster beforehand (which I
shouldn't have to do).
[root@link-11 ~]# lvcreate -l 34724 link101112
Error locking on node link-12: Internal lvm error, check syslog
Error locking on node link-10: Internal lvm error, check syslog
Error locking on node link-11: Internal lvm error, check syslog
Failed to activate new LV.
Jul 7 11:01:09 link-10 lvm: Volume group link101112 metadata is inconsistent
Jul 7 11:01:09 link-10 lvm: Volume group for uuid not found:
Jul 7 11:01:18 link-12 lvm: Volume group link101112 metadata is inconsistent
Jul 7 11:01:18 link-12 lvm: Volume group for uuid not found:
If I then run lvscan and lvchange on all nodes, it works, but I shouldn't
have to do that.
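For reference, that manual workaround amounts to re-scanning metadata and re-activating the LV on every node. A minimal sketch, assuming the node names from this report and that each command would be run on the remote node over ssh; this just prints the commands rather than executing them:

```shell
# Hypothetical sketch of the manual workaround: emit the per-node
# rescan/reactivate commands (node names are the ones in this report).
for node in link-10 link-11 link-12; do
    printf 'ssh %s "vgscan && lvchange -ay /dev/link101112/lvol0"\n' "$node"
done
```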
Version-Release number of selected component (if applicable):
Cluster LVM daemon version: 2.01.09 (2005-04-04)
Protocol version: 0.2.1
Probably similar to bug 138396.
What's happening here is that one VG is deleted and another is then created with
the same name, and clvmd fails to detect and handle the change (hence the
"Volume group for uuid not found" messages in syslog).
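The failure mode can be modelled in a few lines: a daemon that caches VG state keyed by name keeps the old UUID after the VG is deleted and recreated, so UUID lookups against the new on-disk VG fail. A toy sketch, not actual clvmd code; the VG name is from this report and the UUID strings are made up:

```shell
# Toy model of the stale-cache bug: the cache is keyed by VG name,
# so a delete + recreate under the same name leaves the old UUID behind.
declare -A cache
cache[link101112]="OLD-UUID-1111"   # cached when the first VG was activated
on_disk_uuid="NEW-UUID-2222"        # UUID of the recreated VG on disk
if [ "${cache[link101112]}" != "$on_disk_uuid" ]; then
    # Mirrors the kind of error seen in the syslog excerpt above.
    echo "Volume group for uuid not found: ${cache[link101112]}"
fi
```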
Same cause as bug 147361.
Patch added to CVS that might solve this - will be in 2.02.06.
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release. Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products. This request is not yet committed for inclusion in an Update release.
I cannot reproduce this anymore with the latest rpms.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.