Bug 438128

Summary: HA LVM should not remove non-mirrored LVs when it tries to make the VG consistent.
Product: Red Hat Enterprise Linux 5
Component: rgmanager
Version: 5.2
Reporter: Simone Gotti <simone.gotti>
Assignee: Jonathan Earl Brassow <jbrassow>
Status: CLOSED CANTFIX
Severity: low
Priority: low
CC: cluster-maint, edamato
Target Milestone: rc
Hardware: All
OS: Linux
Doc Type: Bug Fix
Last Closed: 2008-05-19 12:16:33 UTC

Description Simone Gotti 2008-03-19 10:17:25 UTC
Description of problem:

Shared VG managed by HA LVM (single or multiple LV per VG).

In the event of a temporary loss of one of the disks (PVs) that are part of that
VG, lvm_by_vg.sh and lvm_by_lv.sh launch (on start) a "vgreduce
--removemissing". This removes every LV that has extents on the lost PV,
forcing the admin to restore the LVM metadata from a backup copy.

It would be better to run this only on mirrored LVs, perhaps by adding the
"--mirrorsonly" option to vgreduce:

"vgreduce --removemissing --mirrorsonly"

What do you think?

Thanks!
Bye!
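To illustrate the behavior the report is asking for, here is a small hypothetical sketch (pure shell, not code from lvm_by_vg.sh or lvm_by_lv.sh; the function name and "name:type" argument format are made up for illustration): given a list of LVs that all have extents on the lost PV, the current pass discards every one of them, while a mirrors-only pass would discard only the mirrored ones.

```shell
# Hypothetical illustration only, not agent code: which LVs a
# "vgreduce --removemissing" pass would discard. Each argument is "name:type"
# (type is "mirror" or "linear"), and every listed LV is assumed to have
# extents on the missing PV.
lvs_removed_by_vgreduce() {
    local mirrors_only=$1
    shift
    for lv in "$@"; do
        name=${lv%%:*}
        type=${lv##*:}
        # With the suggested mirrors-only behavior, non-mirrored LVs
        # would be left intact for manual metadata recovery.
        if [ "$mirrors_only" = "yes" ] && [ "$type" != "mirror" ]; then
            continue
        fi
        echo "$name"
    done
}
```

With a linear "data" LV and a mirrored "logs" LV, the current behavior removes both, while the suggested behavior would remove only "logs".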

Comment 1 Jonathan Earl Brassow 2008-05-19 12:16:33 UTC
Unfortunately, the '--mirrorsonly' parameter does not work that way.  It means,
"fail the operation if there are volumes that are not mirrors".  This
would mean that if one site failed, there would be no way to activate the
volume on the next machine (because it would be unable to clean up the VG).

You may wish to open a new bug requesting better handling of this type of
scenario.  It may make you feel better to know that we are in the process of
improving this.  Unfortunately, I can't give you an ETA.

Comment 2 Simone Gotti 2008-06-30 11:06:33 UTC
Thanks for the answer. Yes, adding --mirrorsonly was only a (wrong) suggestion. :D

I'm using HA LVM not to mirror two storage arrays (I've got only one) but just
to avoid the use of clvmd, for various reasons:

1) I'm using an ext3 FS and I'd like to have the LVs active on only one node,
to minimize problems and to prevent a careless admin from mounting the FS on
more than one node at the same time.

2) clvmd is not able to handle different storage views (bug #246630 and
others), for example when one machine sees fewer PVs than another due to
cabling problems or similar.


From what I read in bugzilla and on the lvm-devel mailing list, it looks like
"partial volume group" handling is targeted for 5.3. Will this issue be fixed
by that new way of managing temporarily lost PVs?

If not, would you accept a patch adding an option to the HA LVM agent that
lets the user disable the call to vgreduce?