Bug 438128
Summary: HA LVM should not remove non-mirrored LVs when it tries to make the VG consistent.

| Field | Value | Field | Value |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 5 | Reporter: | Simone Gotti <simone.gotti> |
| Component: | rgmanager | Assignee: | Jonathan Earl Brassow <jbrassow> |
| Status: | CLOSED CANTFIX | QA Contact: | |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | 5.2 | CC: | cluster-maint, edamato |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2008-05-19 12:16:33 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Simone Gotti
2008-03-19 10:17:25 UTC
Unfortunately, the '--mirrorsonly' parameter does not work that way. It means, "fail the operation if there are other volumes that are not mirrors". This would mean that if one site failed, there would be no way to activate the volume on the next machine (because it would be unable to clean up the VG). You may wish to open a new bug requesting better handling of this type of scenario. It may make you feel better to know that we are in the process of improving this. Unfortunately, I can't give you an ETA.

Thanks for the answer. Yes, adding --mirrorsonly was only a (wrong) suggestion. :D

I'm using HA LVM not to mirror 2 storages (I've got only 1 storage) but simply to avoid the use of clvmd, for various reasons:

1) I'm using an ext3 FS and I'd like to have the LVs active on only one node, to minimize problems and prevent a careless admin from mounting the FS on more than one node at the same time.

2) clvmd is not able to handle different storage views (bug #246630 and others), for example when one machine sees fewer PVs than another due to cabling problems or other issues.

From what I read on Bugzilla and on the lvm-devel mailing list, it looks like "partial volume group" handling is targeted for 5.3. Will this issue be fixed by this new way of managing temporarily lost PVs? If not, would you accept a patch adding an option to the HA LVM agent that lets the user disable the call to vgreduce?
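The patch proposed in the last paragraph could be sketched roughly as follows. This is only an illustration of the idea, not the actual rgmanager agent code: the parameter name `OCF_RESKEY_skip_vgreduce` and the function name `vg_make_consistent` are hypothetical, invented here for the example.

```shell
# Sketch of an opt-out for the VG clean-up step in an HA LVM agent.
# OCF_RESKEY_skip_vgreduce and vg_make_consistent are hypothetical
# names, not part of the real lvm.sh agent.

vg_make_consistent() {
    vg="$1"

    if [ "${OCF_RESKEY_skip_vgreduce:-no}" = "yes" ]; then
        # User opted out: leave the VG untouched and let activation
        # proceed (or fail) with the missing PVs still recorded.
        echo "skipping vgreduce for $vg"
        return 0
    fi

    # 'vgreduce --removemissing' drops missing PVs so the VG can be
    # activated on the surviving node; without '--mirrorsonly' it also
    # removes any non-mirrored LV that lived on a missing PV, which is
    # the data loss this bug report is about.
    vgreduce --removemissing "$vg"
}
```

With such a flag, an admin running HA LVM on a single storage (no mirrors) could set it to `yes` and accept that activation fails while PVs are missing, instead of having non-mirrored LVs removed from the VG metadata.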