Bug 438128 - HA LVM should not remove not mirrored lvs when it tries to make the vg consistent.
Status: CLOSED CANTFIX
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: rgmanager
5.2
All Linux
low Severity low
: rc
: ---
Assigned To: Jonathan Earl Brassow
:
Depends On:
Blocks:
Reported: 2008-03-19 06:17 EDT by Simone Gotti
Modified: 2009-04-16 18:53 EDT (History)
2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2008-05-19 08:16:33 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Simone Gotti 2008-03-19 06:17:25 EDT
Description of problem:

Shared VG managed by HA LVM (single or multiple LVs per VG).

In the event of a temporary loss of one of the disks (PVs) that are part of that
VG, lvm_by_vg.sh and lvm_by_lv.sh launch (on start) a "vgreduce
--removemissing". This removes every LV that has an extent on the lost PV,
forcing the admin to restore the LVM metadata from a backup copy.

It would be better to run this only on mirrored LVs, perhaps by adding the
"--mirrorsonly" option to vgreduce:

"vgreduce --removemissing --mirrorsonly"

What do you think?

Thanks!
Bye!
Comment 1 Jonathan Earl Brassow 2008-05-19 08:16:33 EDT
Unfortunately, the '--mirrorsonly' parameter does not work that way.  It means,
"fail the operation if there are volumes other than mirrors".  This would mean
that if one site failed, there would be no way to activate the volume on the
next machine (because it would be unable to clean up the VG).

You may wish to open a new bug requesting better handling of this type of
scenario.  It may make you feel better to know that we are in the process of
improving this.  Unfortunately, I can't give you an ETA.
Comment 2 Simone Gotti 2008-06-30 07:06:33 EDT
Thanks for the answer. Yes, adding --mirrorsonly was only a (wrong) suggestion. :D

I'm using HA LVM not to mirror two storage arrays (I've got only one) but just
to avoid the use of clvmd, for various reasons:

1) I'm using an ext3 FS and I'd like the LVs to be active on only one node, to
minimize problems and to keep a careless admin from mounting the FS on more
than one node at the same time.

2) clvmd cannot handle differing storage views (bug #246630 and others), for
example when one machine sees fewer PVs than another because of cabling
problems or similar issues.


From what I read on bugzilla and on the lvm-devel mailing list, it looks like
"partial volume group" handling is targeted for 5.3. Will this issue be fixed
by that new way of managing temporarily lost PVs?

If not, would you accept a patch adding an option to the HA LVM agent that
lets the user disable the call to vgreduce?
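A minimal sketch of what such an opt-out could look like. Everything here is
hypothetical: the parameter name OCF_RESKEY_skip_vgreduce and the restore_vg
helper are invented for illustration, and vgreduce is stubbed out so the
sketch is self-contained rather than touching real LVM metadata:

```shell
#!/bin/sh
# Illustrative sketch only: the parameter and helper names are hypothetical,
# not part of the real rgmanager HA LVM agent.

# Stub standing in for the real LVM command, so the sketch runs anywhere.
vgreduce() {
    echo "vgreduce $*"
}

# In the real agent this guard would wrap the existing
# "vgreduce --removemissing" call in lvm_by_vg.sh / lvm_by_lv.sh.
restore_vg() {
    vg_name=$1
    if [ "${OCF_RESKEY_skip_vgreduce:-0}" = "1" ]; then
        echo "skipping vgreduce for $vg_name"
        return 0
    fi
    vgreduce --removemissing "$vg_name"
}

OCF_RESKEY_skip_vgreduce=1
restore_vg shared_vg    # prints: skipping vgreduce for shared_vg
OCF_RESKEY_skip_vgreduce=0
restore_vg shared_vg    # prints: vgreduce --removemissing shared_vg
```

The idea is simply that a cluster.conf-supplied parameter would let an admin
who knows the PV loss is temporary keep the agent from rewriting the VG
metadata on start.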
