Bug 122956
Summary: | software raid partitions do not boot properly | |
---|---|---|---
Product: | [Fedora] Fedora | Reporter: | Thomas Antony <thomas>
Component: | kernel | Assignee: | Doug Ledford <dledford>
Status: | CLOSED ERRATA | QA Contact: | David Lawrence <dkl>
Severity: | medium | Priority: | medium
Version: | 2 | CC: | davej, spam
Hardware: | x86_64 | OS: | Linux
Doc Type: | Bug Fix | Last Closed: | 2004-12-09 17:54:02 UTC
Description
Thomas Antony
2004-05-10 17:57:37 UTC
Created attachment 100138 [details]
Some output from /var/log/messages
This is more appropriately a bug in mkinitrd. In the initrd image, we aren't loading the raid1 module before initiating the raid start sequence. Since we are on the initrd, modprobe doesn't work, the raid1 personality is not yet registered, and therefore the raid arrays don't actually get started. Afterwards, we load the raid1 module, it tries again, and this time it works. Reassigning to the person in charge of mkinitrd.

I don't see anything obvious in mkinitrd that would be causing this. Can you do `zcat /boot/initrd-$(uname -r).img > /tmp/initrd.nogz ; mount -o loop /tmp/initrd.nogz /mnt/floppy` and then attach the linuxrc from there?

Created attachment 100260 [details]
the requested linuxrc
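The fix Doug describes (registering the raid1 personality before the arrays are started) would show up in the initrd's linuxrc as module loading ordered ahead of the array start. The fragment below is a sketch of what a correct FC2-era linuxrc might look like; the module path, device names, and exact builtin calls are assumptions about how mkinitrd images of that period were laid out, not the actual shipped script:

```shell
#!/bin/nash
# Hypothetical linuxrc fragment (FC2 initrds used the nash interpreter).
# The raid1 personality must be registered BEFORE the arrays are started,
# otherwise raidautorun finds no personality and the arrays stay down.
echo "Loading raid1 module"
insmod /lib/raid1.ko        # assumed module path inside the initrd
echo "Starting RAID arrays"
raidautorun /dev/md0        # now the raid1 personality is available
mkrootdev /dev/root
mount -o ro /dev/root /sysroot
```

The bug as described is simply the reverse ordering: raidautorun running before the insmod, so the start silently fails and only a later retry (after the module load) succeeds.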
I installed FC2 i386 with LVM on another machine. The same problem appears there.

RAID1 is loaded (see the linuxrc) -- throwing back at Doug.

I have a two-disk SW RAID1 with the following layout:

/dev/md0 : /boot : 1 GB
/dev/md1 : swap : 2 GB
/dev/md2 : / : 68 GB

After (or during) an update to a development snapshot containing the 2.6.8-1.533smp kernel, the drives became out of sync: the "/boot" and "/" partitions on the /dev/sda device all became out of sync with the partitions on the /dev/sdb device. The update left the /dev/sda partitions not updated, while the /dev/sdb partitions were updated. This was discovered on the reboot just after yum updated the packages.

When booting into grub, the system would attempt to start with an old, non-updated version of the "/boot" partition located on /dev/sda. Selecting the pre-update kernel (since the newly updated kernel was not an option) resulted in the machine attempting to boot. At that point the "/" partition of the RAID1 on the /dev/sdb drive, being /dev/md3, would start to boot, and the system would report that the /dev/md1, /dev/md2, and /dev/md3 RAID1 partitions were out of sync and that only the /dev/sdb partitions would be used. Upon completion of the boot process, the "/boot" and "/" partitions in use were the updated ones from /dev/sdb.

Using mdadm to add the /dev/sda partitions back into the arrays resulted in successful additions after re-sync. The problem was that the grub MBR on /dev/sda had been blown away after the re-sync from /dev/sdb, leaving the machine unbootable. Booting a live Linux CD and reinstalling grub made the machine bootable again, and the drives seem to be in sync once more.

ok to close this? things still ok with the 2.6.9 kernel?

kernel 2.6.9 solved it for me.
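The recovery described above (re-adding the stale /dev/sda halves to the degraded arrays, then reinstalling grub from a live CD) would look roughly like the following. The md device names match the report, but the partition numbers and the grub invocation are a sketch of the general procedure, not the commands the reporter actually ran:

```shell
# Re-add the stale /dev/sda members to each degraded RAID1 array.
# Partition numbers (sda1..sda3) are assumed for illustration.
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2
mdadm /dev/md2 --add /dev/sda3

# Watch the re-sync progress until all arrays show two active members.
cat /proc/mdstat

# Once in sync, restore the boot loader on the first disk's MBR,
# e.g. after booting a live CD and chrooting into the installed system.
grub-install /dev/sda
```

Note that md re-sync copies only the partition contents, not the MBR, which is why grub on /dev/sda had to be reinstalled separately.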