Bug 427378

Summary: HA LVM should allow multiple LVs/VG as long as they move together
Product: Red Hat Enterprise Linux 5
Component: rgmanager
Version: 5.2
Reporter: Jonathan Earl Brassow <jbrassow>
Assignee: Jonathan Earl Brassow <jbrassow>
CC: cluster-maint
Status: CLOSED ERRATA
Severity: low
Priority: low
Target Milestone: rc
Hardware: All
OS: Linux
Fixed In Version: RHBA-2008-0353
Doc Type: Bug Fix
Last Closed: 2008-05-21 14:30:50 UTC
Bug Depends On: 427377

Description Jonathan Earl Brassow 2008-01-03 16:49:04 UTC
+++ This bug was initially created as a clone of Bug #427377 +++

The restriction that there can be only one LV per VG in HA LVM can be
cumbersome.  It was done that way to ensure that different machines did not
alter the same metadata while LVs were in use in different places.

We can relieve some of the pain by allowing multiple LVs per VG if, and only
if, they move together.  In other words, all services that rely on a
particular volume group must exist on only one machine at a time.
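As a sketch of what this would look like in practice (the service, volume group, and mount point names below are hypothetical, and the exact resource syntax depends on the rgmanager release), the intent is that every LV in the volume group is activated and mounted by a single cluster service, so the whole VG always fails over as a unit:

```xml
<!-- Hypothetical /etc/cluster/cluster.conf fragment.
     Both lv_data and lv_logs live in vg_ha, and vg_ha is referenced
     only by this one service, so no two nodes can ever activate LVs
     from the same VG (and alter its metadata) at the same time. -->
<service name="ha_lvm_svc" autostart="1">
  <lvm name="ha_vg" vg_name="vg_ha">
    <fs name="data_fs" device="/dev/vg_ha/lv_data"
        mountpoint="/mnt/data" fstype="ext3"/>
    <fs name="logs_fs" device="/dev/vg_ha/lv_logs"
        mountpoint="/mnt/logs" fstype="ext3"/>
  </lvm>
</service>
```

Splitting these LVs across two services would reintroduce the original hazard, since the services could land on different nodes and both touch the VG's metadata.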

Comment 1 RHEL Product and Program Management 2008-01-03 16:54:48 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.

Comment 4 errata-xmlrpc 2008-05-21 14:30:50 UTC
An advisory has been issued which should address the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.