Description of problem:
After a PV that is part of a VG fails and is removed from the VG with "vgreduce --removemissing", any activation attempt on the VG fails if the PV is later revived and an old version of the metadata still exists on it. This can cause a boot failure if the PV revival happens during a system reboot. Activation and other commands should therefore succeed by ignoring the old version of the metadata.

Version-Release number of selected component:
lvm2-2.02.26-2.el5

How reproducible:
Always

Steps to Reproduce:
1. Prepare 2 PVs and create a VG from them.
   # pvcreate /dev/sda
   # pvcreate /dev/sdb
   # vgcreate vg0 /dev/sda /dev/sdb
2. Create an LV on each PV.
   # lvcreate -L 12m -n lv0 vg0 /dev/sda
   # lvcreate -L 12m -n lv1 vg0 /dev/sdb
3. Disable one PV in the VG (e.g. physically disconnect the disk).
   # echo offline > /sys/block/sda/device/state
4. Remove the failed PV and its LV from the VG.
   # vgreduce --removemissing vg0
5. Deactivate the VG.
   # vgchange -an
6. Re-enable the failed PV that was removed from the VG (e.g. reconnect the disk and reboot).
   # echo running > /sys/block/sda/device/state
7. Activate the VG.
   # vgchange -ay

The attached script is a testcase performing similar steps to the above.

Actual results:
vgchange fails with the following message.
----------------------------------------------------------------------
# vgchange -ay
  Volume group "vg0" inconsistent
  Inconsistent metadata found for VG vg0 - updating to use version 5
  Removing PV /dev/mapper/pv0 (Y3CEa4-dLrh-c7Iw-3etJ-p2Rx-s6oi-a24SsA) that no longer belongs to VG vg0
  Assertion failed: can't _pv_write non-orphan PV (in VG )
  Failed to clear metadata from physical volume "/dev/mapper/pv0" after removal from "vg0"
----------------------------------------------------------------------

Expected results:
vgchange succeeds.

Additional info:
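The steps above can be consolidated into a single reproducer script. This is an illustrative sketch, not the attached testcase: the PV1/PV2/VG variables are placeholders, it assumes SCSI disks (the `/sys/block/<dev>/device/state` file), and since it destroys data and needs root it only runs when REPRO_RUN=1 is set.

```shell
#!/bin/sh
# Hypothetical reproducer sketch for the stale-metadata activation
# failure after "vgreduce --removemissing". Placeholders: PV1, PV2, VG.
PV1=${PV1:-/dev/sda}
PV2=${PV2:-/dev/sdb}
VG=${VG:-vg0}

reproduce() {
    # 1-2. Build a two-PV VG with one LV on each PV.
    pvcreate "$PV1" "$PV2"
    vgcreate "$VG" "$PV1" "$PV2"
    lvcreate -L 12m -n lv0 "$VG" "$PV1"
    lvcreate -L 12m -n lv1 "$VG" "$PV2"

    # 3. Simulate a disk failure on the first PV (SCSI sysfs state file).
    echo offline > "/sys/block/$(basename "$PV1")/device/state"

    # 4-5. Drop the missing PV (and its LV), then deactivate the VG.
    vgreduce --removemissing "$VG"
    vgchange -an "$VG"

    # 6. Revive the PV; its on-disk metadata is now older than the VG's.
    echo running > "/sys/block/$(basename "$PV1")/device/state"

    # 7. On the affected lvm2 this fails with
    # "Failed to clear metadata from physical volume ...".
    vgchange -ay "$VG"
}

# Guard: destructive, requires root and two scratch disks.
[ "${REPRO_RUN:-0}" = 1 ] && reproduce || true
```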
This scenario should become part of the partial volume group work we are planning for 5.3. My current idea is that we do not wipe the metadata (from stray PV) in activation, but honour only the newest version. This should be consistent with how partial activation works. This may be subject of further discussion though.
This is fixed with last lvm2 build (lvm2-2.02.40-4.el5) - will be in RHEL5.3 update.
Confirmed that this is fixed with lvm2-2.02.40-4.el5.
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you. http://rhn.redhat.com/errata/RHBA-2009-0179.html