Description of problem: A shared VG is managed by HA LVM (one or more LVs per VG). In the event of a temporary loss of one of the disks (PVs) that are part of that VG, lvm_by_vg.sh and lvm_by_lv.sh launch (on start) a "vgreduce --removemissing". This removes all the LVs that have a piece on the lost PV, forcing the admin to restore the LVM metadata from a backup copy. It would be better to run this only on mirrored LVs, perhaps by adding the "--mirrorsonly" option to vgreduce: "vgreduce --removemissing --mirrorsonly". What do you think? Thanks! Bye!
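To make the scenario above concrete, here is a minimal sketch of what the agent start path effectively does today and the manual recovery it forces. The function names and the VG name "sharedvg" are illustrative, not taken from the actual agent scripts:

```shell
# Sketch only; names are assumptions, not the real agent code.
ha_lvm_start_cleanup() {
    # lvm_by_vg.sh / lvm_by_lv.sh run this unconditionally on start; with a
    # PV temporarily missing, it removes every LV with extents on that PV:
    vgreduce --removemissing "$1"
}

recover_metadata() {
    # Afterwards the admin has to restore the VG metadata by hand, e.g. with
    # vgcfgrestore (optionally pointing -f at a copy under /etc/lvm/archive):
    vgcfgrestore "$1"
}
```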
Unfortunately, the '--mirrorsonly' parameter does not work that way. It means "fail the operation if there are volumes that are not mirrors". This would mean that if one site failed, there would be no way to activate the volume on the next machine (because it would be unable to clean up the VG). You may wish to open a new bug requesting better handling of this type of scenario. It may make you feel better to know that we are in the process of improving this; unfortunately, I can't give you an ETA.
Thanks for the answer. Yes, adding --mirrorsonly was only a (wrong) suggestion. :D I'm using HA LVM not to mirror 2 storage arrays (I've got only 1) but just to avoid using clvmd, for various reasons:
1) I'm using an ext3 FS and I'd like to have the LVs active on only one node, to minimize problems and prevent a careless admin from trying to mount the FS on more than one node at the same time.
2) clvmd is not able to handle different storage views (bug #246630 and others), for example when one machine sees fewer PVs than another due to cabling or other problems.
From what I read on Bugzilla and on the lvm-devel mailing list, "partial volume group" handling looks to be targeted for 5.3, so will this issue be fixed by that new way of managing temporarily lost PVs? If not, would you accept a patch adding an option to the HA LVM agent that lets the user disable the call to vgreduce?
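The patch I have in mind would be along these lines: a guard around the existing vgreduce call, controlled by a resource parameter. This is only a sketch; the parameter name OCF_RESKEY_novgreduce and the function name are assumptions, not part of the current agent:

```shell
# Hypothetical guard for the agent's start path; parameter name is assumed.
vg_start_cleanup() {
    vg_name="$1"
    if [ "${OCF_RESKEY_novgreduce:-0}" = "1" ]; then
        # Admin opted out: leave the VG alone even if a PV is missing.
        echo "skipping vgreduce --removemissing on $vg_name"
        return 0
    fi
    # Current default behavior of lvm_by_vg.sh / lvm_by_lv.sh:
    vgreduce --removemissing "$vg_name"
}
```

With the option unset, behavior is unchanged; with it set to 1, a temporarily lost PV no longer costs you the non-mirrored LVs on it.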