Bug 176436
Summary: Anaconda recovery from failure caused by volume group name conflict
Product: Red Hat Enterprise Linux 5
Reporter: Satoshi OSHIMA <soshima>
Component: anaconda
Assignee: Peter Jones <pjones>
Status: CLOSED WONTFIX
QA Contact: Mike McLean <mikem>
Severity: medium
Priority: high
Version: 5.0
CC: agk, coughlan, dwysocha, jlaska, mbroz, tao
Target Milestone: ---
Target Release: ---
Keywords: FutureFeature
Hardware: ia64
OS: Linux
Doc Type: Enhancement
Last Closed: 2006-09-20 23:00:07 UTC
Bug Blocks: 184165
Description
Satoshi OSHIMA
2005-12-22 20:30:29 UTC
Created attachment 122535 [details]: /tmp/anacdump.txt
Created attachment 122536 [details]: /tmp/lvmout
Can we reproduce this easily? If so, can you indicate what the partition table and LVM layout look like prior to installing?

# for all disks
$ parted /dev/$DISK -s p
# Also grab existing LVM details
$ pvdisplay
$ vgdisplay
$ lvdisplay

Created attachment 122847 [details]
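The diagnostics requested above can be gathered in one pass. The sketch below is only illustrative (the output path and the /dev/sd? glob are assumptions, not part of the bug report):

```shell
#!/bin/sh
# Collect partition tables and LVM metadata into a single report file.
# The output path and the disk glob are illustrative assumptions.
out=/tmp/storage-report.txt
: > "$out"
for disk in /dev/sd?; do
    echo "== parted $disk ==" >> "$out"
    parted "$disk" -s p >> "$out" 2>&1
done
for cmd in pvdisplay vgdisplay lvdisplay; do
    echo "== $cmd ==" >> "$out"
    "$cmd" >> "$out" 2>&1
done
```

The resulting single file is then easy to attach to the bug instead of several separate outputs.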
information about existing partitions
Attaching the information about the existing partitions.
I pulled up this information using the rescue CD.
I can see that the Logical Volume is not consistent.
But RHEL4U2 worked fine on this server just before installing RHEL4U3Beta.
Have you changed the disk configuration at all between installing RHEL4U2 and U3Beta? This looks like you've added a disk, or split up a hardware RAID1.

I need to explain this server hardware. It has 2 RAID controllers, and each controller has 2 logical units:

RAID1-+-LU0(sda)
      +-LU1(sdb)
RAID2-+-LU0(sdc)
      +-LU1(sdd)

I have never changed this configuration. But the second RAID controller is sometimes removed from this server, and sdc contains another copy of RHEL4U2 for *another server*. I have never changed the partition settings or the LVM configuration either. So the only change is the removal and reconnection of RAID controller #2. Note that the OS on sda does not mount sdc or sdd.

What's happening here is that you're adding a second RAID, and the LVM volume group on it has the same name as that on the first device. This won't work; you need to either remove or rename one of the volume groups before putting them both on the same system.

Reopening bug for RHEL5 consideration. Changing summary to more accurately reflect the request.

This looks a lot like another bug to resolve multiple volumes with the same name. Can't remember the bug number off hand, but I will try to dig it up.

Per my comment #18, this is the bug from Hitachi (the Issue Tracker wording is a little different, which triggered comment #18) -- I can't find any other bugzillas with similar content.

Removing BZ176436 from IT85287. This issue is addressed by BZ147361 for lvm2 and by BZ200252 for anaconda.
This event sent from IssueTracker by kmori, issue 85287

Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.

Per private comment #22 above, Peter Jones couldn't fix this for RHEL4 and has had the same difficulty trying to fix it on RHEL5:

> Comment #9 From Peter Jones (pjones) on 2006-01-09 12:02 EST
>
> The way the lvm tools address the drives needs to change before we can
> reasonably fix this bug. Until the underlying tools treat volume groups
> with the same name but different UUIDs as different VGs, there's very
> little we can do.

Therefore Engineering has now declined the request for RHEL5 as well.
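To make the suggested workaround (rename one of the clashing volume groups) concrete, here is a hedged sketch, not taken from the bug report: duplicate VG names can be spotted by listing name/UUID pairs with `vgs`, and the ambiguous copy can then be renamed by its UUID with `vgrename`. The sample output, UUIDs, and new name below are all invented for illustration:

```shell
#!/bin/sh
# Sample of what `vgs --noheadings --separator : -o vg_name,vg_uuid`
# might print with two same-named VGs (all values are invented):
vgs_output='VolGroup00:Zvlifi-Ep3t-e0Ng-U42h-o0ye-KHu1-nl7Ns4
VolGroup00:G41Jvc-H9nB-PLEc-Sxo4-Uv3r-qnwz-F6e2Jk'

# Report VG names that occur more than once
dups=$(printf '%s\n' "$vgs_output" |
    awk -F: '{gsub(/^ */, "", $1); seen[$1]++}
             END {for (n in seen) if (seen[n] > 1) print n}')
echo "duplicate VG names: $dups"

# A duplicate can then be renamed by UUID, since the name is ambiguous:
#   vgrename G41Jvc-H9nB-PLEc-Sxo4-Uv3r-qnwz-F6e2Jk VolGroup00_raid2
```

Renaming by UUID rather than by name is the key point: with two VGs both called VolGroup00, a rename by name cannot say which one is meant.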