From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.6) Gecko/20050324 Firefox/1.0.2 Red Hat/1.0.2-1.4.1.centos4

Description of problem:
I had some grief with a system that installs OK and can be accessed in rescue mode, but doesn't want to boot. In the end, the problem appears to be that disks not used during installation (but present in the system) had stale LVM metadata and file system labels on them that Anaconda didn't detect (and/or ignored).

Usually, Anaconda will detect existing file system labels and create "unique" labels for new file systems (for example, /boot would be labeled "/boot1" if some pre-existing file system already had the label "/boot"). In this case, it didn't. One probable cause is that the partition carrying the pre-existing "/boot" label did not contain an actual file system (it was an LVM physical volume). I've included the relevant disk configuration information and the steps I needed to take to make the machine bootable.

BTW, it would have saved me some grief (in this case, and when moving disks from one system to another, for example) if file system labels were random strings, instead of something with a high probability of not being unique (such as the mount point name).

Configuration:
An I2O RAID controller with two volumes. The first RAID volume is used for the system; the second is used for some data storage. Since the kernel assigns them different device names during installation and when the system boots from disk after installation, I'll call them the "system RAID volume" and the "data RAID volume". When I mention device names, it is only to record what name the system saw in a particular step.

During installation, the i2o device drivers report the volumes in the expected order: /dev/i2o/hda is the system RAID volume and /dev/i2o/hdb is the data RAID volume, exactly the order they are defined in the I2O BIOS. hdb is not touched by the installation process; it contained a single partition, hdb1.
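The label-uniquification behavior Anaconda normally performs (turning a requested "/boot" into "/boot1" when the label is already taken) can be sketched roughly as follows. This is a hypothetical helper for illustration, not Anaconda's actual code; `unique_label` and its arguments are my own names.

```python
def unique_label(desired, existing, maxlen=16):
    """Return `desired` if no other device already carries that label,
    otherwise append an increasing numeric suffix until unique.
    ext2/ext3 labels are limited to 16 bytes, hence the truncation."""
    if desired not in existing:
        return desired
    n = 1
    while True:
        candidate = (desired + str(n))[:maxlen]
        if candidate not in existing:
            return candidate
        n += 1

# e.g. a pre-existing "/boot" label forces the new file system to "/boot1"
print(unique_label("/boot", {"/boot"}))  # → /boot1
```

The failure described in this report is precisely that the "existing" set was incomplete: a label sitting on a non-filesystem partition never made it into the collision check.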
/boot is installed on hda1 and the "/boot" file system label is written onto it. hda2 is configured as an LVM physical volume holding the rest of the system (including the root partition).

After the installation is done and the system reboots, for whatever strange reason the data RAID volume is detected as /dev/i2o/hda and the system RAID volume as /dev/i2o/hdb. In theory this should work fine, since device names are never used as-is in the system's configuration. However, the disks in the data RAID volume had been used before (they were not clean), and since the system detected them first, this was the root of the problem. It seems those disks had (once upon a time) had a system on them, with a set of LVM volumes defined, and that stale information was used instead of the "real" information from the first RAID volume. I'm not sure whether the disks had previously been connected to this I2O controller, or were used somewhere else and the stale metadata just happened to fall into the "right" spot when the RAID volume was assembled.

OK, so I wiped all partitions from the data RAID volume. This time the system actually boots (because it can only see the partitions on the system RAID volume, which it detected as /dev/i2o/hdb, so it reads the correct LVM information). But the story does not end here. I created a single partition on the data RAID volume (/dev/i2o/hda), defined it as an LVM physical volume, and created a new volume group with a single logical volume on it. Created a file system, mounted it, updated fstab. So far so good. Reboot. Oops, the system doesn't boot, and complains about duplicate "/boot" labels.

Back into rescue mode. And sure enough, there it was: e2label reports that the first partition on the data RAID volume (which is of type LVM and contains an LVM physical volume) and the first partition on the system RAID volume (which is of type Linux native and contains an ext3 file system) both have the label "/boot". Ooops. Apparently, Anaconda was smart enough to ignore the label on something that was not a file system. Whatever runs during the "real" boot wasn't that smart.
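The "smart" check that Anaconda apparently performs, and the boot-time code apparently does not, amounts to verifying the ext superblock magic before trusting any label-looking bytes. A minimal sketch, assuming we are reading a raw device or image (the function name is mine; the ext2/ext3 on-disk offsets are the documented ones: superblock at byte 1024, s_magic 0xEF53 at offset 56 within it, 16-byte s_volume_name at offset 120):

```python
import struct

EXT2_SUPERBLOCK_OFFSET = 1024
EXT2_MAGIC = 0xEF53  # little-endian u16 s_magic at offset 56 of the superblock

def ext_label(device_path):
    """Return the ext2/ext3 volume label, or None if the device does not
    actually contain an ext file system (e.g. an LVM PV that merely has
    stale label bytes sitting at the same offset)."""
    with open(device_path, "rb") as dev:
        dev.seek(EXT2_SUPERBLOCK_OFFSET)
        sb = dev.read(1024)
    if len(sb) < 136:
        return None
    (magic,) = struct.unpack_from("<H", sb, 56)
    if magic != EXT2_MAGIC:
        return None  # not ext: ignore any leftover label bytes
    label = sb[120:136].split(b"\x00", 1)[0]
    return label.decode("ascii", "replace") or None

# e.g. ext_label("/dev/i2o/hda1") would return None for the LVM PV here,
# while plain e2label happily reported its stale "/boot" label
```

With a check like this, the duplicate-"/boot" situation above would never be visible at boot time, because the stale label on the LVM partition fails the magic test.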
I used e2label to wipe the label from the data RAID volume. This time the system booted, no problems at all. For good measure, I wiped out the logical volume/group and physical volume from the data RAID volume and recreated them (I didn't want to risk that e2label, used on something that is not a file system, had screwed up some LVM metadata). All is happy now.

It would have saved me tons of time and grief if Anaconda had checked for (and detected) conflicting LVM information and conflicting file system labels during the install process. Or if file system labels were randomly generated (instead of using mount point names), like the labels used by the MD and LVM drivers. Hopefully this info will be useful to somebody in the future.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. requires specific environment

Additional info:
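The random-label suggestion above could be as simple as deriving the label from a UUID, the way MD and LVM identify their members. A sketch under the assumption that ext labels stay within their 16-byte limit (`random_fs_label` is a hypothetical name, not an installer API):

```python
import uuid

def random_fs_label():
    # A full 36-character UUID string does not fit in the 16-byte
    # ext2/ext3 label field, so take the first 16 hex digits; that is
    # still effectively collision-free for any realistic number of
    # disks moved between systems.
    return uuid.uuid4().hex[:16]

# fstab would then mount by LABEL=<random string> instead of LABEL=/boot,
# so a foreign disk carrying a stale "/boot" label can never collide
```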
Fedora Core 3 is now maintained by the Fedora Legacy project for security updates only. If this problem is a security issue, please reopen and reassign to the Fedora Legacy product. If it is not a security issue and hasn't been resolved in the current FC5 updates or in the FC6 test release, reopen and change the version to match. Thank you!
requested by James Antill
Hm, what additional info is needed?
Can you still reproduce this bug in Fedora 7 or 8?
I don't have anything I could try it on at this time (or in the foreseeable future), and I can't assemble the custom system needed to reproduce it (I'd need to buy extra hardware to do so). Buying a $500+ RAID controller plus disks just to try this out at home somehow doesn't fit my home budget ;-)