Bug 590692 - LVM couldn't handle two or more bootable devices
Summary: LVM couldn't handle two or more bootable devices
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: lvm2
Version: 12
Hardware: All
OS: Linux
Priority: low
Severity: high
Target Milestone: ---
Assignee: LVM and device-mapper development team
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-05-10 13:55 UTC by Zoltan Hoppar
Modified: 2010-06-23 13:19 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-06-23 13:19:19 UTC
Type: ---
Embargoed:



Description Zoltan Hoppar 2010-05-10 13:55:56 UTC
Description of problem:

If you have two Fedora installations, both using LVM and both bootable, you can only start the one that was installed first.
This bug prevents saving data from a USB enclosure or any other externally attached device, and prevents running the complete external system at all. It does not appear to be machine or device dependent. It has been seen on F11, F12, and later.


Version-Release number of selected component (if applicable):
lvm2-2.02.53-2

How reproducible:
Always

Steps to Reproduce:
1. Have an internal IDE or SATA HDD installed in the notebook, with a Fedora installation on it.
2. Create another bootable live system, or another complete Fedora installation with default partitioning, on a second externally attached SATA/IDE HDD or SSD.
3. Restart the machine and select the external system in the firmware boot menu (usually F12 or Esc).
  
Actual results:
The boot process falls back from the external device to the internal one. The external system can only be booted after physically removing the internal disk, which is usually impractical.

Expected results:
Any default Fedora installation that uses LVM should be bootable.

Additional info: The GNOME automounter recognises and mounts external devices automatically, but does not handle LVM volume groups.

Comment 1 Peter Rajnoha 2010-05-10 15:50:08 UTC
Do the volume group names in these installations differ? If not, you need to rename one of them to make them unique (also changing the entry in the boot loader's config accordingly).
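
(For illustration, a minimal sketch of such a rename, assuming GRUB legacy as shipped with F12; the names vg_fedora, vg_external and lv_root are hypothetical:)

  # Attach only the disk whose VG is to be renamed, so the name is unambiguous.
  vgs -o vg_name,vg_uuid            # confirm which VG is about to be renamed
  vgrename vg_fedora vg_external    # give this installation's VG a unique name

  # Then update the kernel line in that installation's /boot/grub/grub.conf, e.g.:
  #   root=/dev/vg_fedora/lv_root  ->  root=/dev/vg_external/lv_root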

Comment 2 Peter Rajnoha 2010-05-10 16:08:56 UTC
> (also changing the entry in boot loader's config accordingly).
...and entries in /etc/fstab as well.
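
(A corresponding /etc/fstab change might look like this, again with hypothetical names:)

  # before:
  /dev/vg_fedora/lv_root    /  ext4  defaults  1 1
  # after:
  /dev/vg_external/lv_root  /  ext4  defaults  1 1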

Comment 3 Zoltan Hoppar 2010-05-11 14:15:25 UTC
I have made the changes, and it doesn't work. I hit this problem with the F13 beta live image too: I couldn't start the live installation until the internal disk was removed. I choose the boot device in the firmware menu, the right hardware starts, and then it jumps back to the internal disk. By the way, the device always gets a unique ID consisting of many digits, but it still can't boot.
On the other hand, if I use plain ext filesystems, I *can* boot this way. With LVM I *cannot*.

PS: When I try to show off the newest release at a booth, this really reflects badly on us.

Comment 4 Milan Broz 2010-06-23 13:19:19 UTC
If the volume group names differ, there is no problem from the lvm point of view (I have a system with 3 VGs, each containing a root LV for a different system - rawhide, RHEL5, etc. - even using one grub on one boot partition, and it works).

If you have two VGs with the same name but different UUIDs, lvm allows you to rename one of them (until then the system will not boot properly - in many places it expects a unique VG name).
Hopefully the installer no longer uses one common name for all VGs (which is what implicitly caused this problem).
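
(A sketch of that UUID-based rename; the UUID is a placeholder taken from the vgrename man page, and vg_external is a hypothetical name:)

  vgs -o vg_name,vg_uuid    # find the UUID of the duplicate VG
  vgrename Zvlifi-Ep3t-e0Ng-U42h-o0ye-KHu1-nl7Ns4 vg_external
                            # rename by UUID, since the name alone is ambiguous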

So if you cannot install a _new_ system with a different vg_name, it is not an lvm problem but an installer one - please report the bug against anaconda then.

