Red Hat Bugzilla – Bug 76365
installer aborts during upgrade with RAID error
Last modified: 2007-04-18 12:47:48 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.2b) Gecko/20021017
Description of problem:
When I try to upgrade a 7.3 system to 8.0, the installer stops and reboots the
system when it tries to mount the software RAID device /dev/md0. This device
uses partitions /dev/hda1 and /dev/hdc1. The installer first complains about
/dev/hda1, then that it cannot mount /dev/md0, and then it forces a reboot.
/dev/md0 (and /dev/md1) mount with no problems under 7.3.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Boot to 8.0 CDs
2. Select "upgrade existing system"
Actual Results: Installer aborts
Expected Results: Installer should mount /dev/md0 and continue
Are there any errors listed on tty3 or tty4?
When the first installer error appears - "Error mounting file system on hda1:
Invalid argument" - the following appears on the console:
JBD: no valid journal superblock found
EXT3-fs: error loading journal
Later another error appears in the installer - "Error mounting device md0 as
/mnt/raid: Invalid argument. This most likely means this partition has not been
formatted". The console output is:
EXT3-fs: unable to read superblock
Created attachment 81304 [details]
output of tune2fs -l on /dev/md0 (under 7.3)
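When ext3 reports "unable to read superblock" or a missing journal superblock, the filesystem keeps backup superblocks that fsck can be pointed at. A minimal sketch (the block offsets are the usual defaults, not values from this report — 8193 for 1k-block filesystems, 32768 for 4k):

```shell
# List where the backup superblocks live on this filesystem
dumpe2fs /dev/md0 | grep -i superblock

# Check the filesystem using a backup superblock instead of the
# (unreadable) primary one
e2fsck -b 32768 /dev/md0
```

This only helps if the array itself is assembled; here the underlying problem was that /dev/md0 never came up at all.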
What does your /etc/fstab look like?
LABEL=/ / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
none /dev/pts devpts gid=5,mode=620 0 0
LABEL=/home /home ext3 defaults 1 2
none /proc proc defaults 0 0
none /dev/shm tmpfs defaults 0 0
LABEL=/tmp /tmp ext3 defaults 1 2
LABEL=/usr /usr ext3 defaults 1 2
LABEL=/usr/local /usr/local ext3 defaults 1 2
LABEL=/var /var ext3 defaults 1 2
/dev/hdg8 swap swap defaults 0 0
/dev/fd0 /mnt/floppy auto noauto,owner,kudzu 0 0
/dev/md0 /mnt/raid ext3 defaults,user 0 0
/dev/md1 /mnt/raid1 ext3 defaults,user 0 0
/dev/sda2 /mnt/firewire auto defaults,noauto,user
/dev/sda1 /mnt/firewire-win vfat defaults,noauto,user
/dev/sdb1 /mnt/fw160 ext3 defaults,noauto,user
/dev/hde1 /mnt/ibm-root auto defaults,noauto,user
/dev/hde2 /mnt/ibm-2 auto defaults,noauto,user
/dev/cdrom /mnt/cdrom iso9660 noauto,owner,kudzu,ro 0 0
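Since most of the fstab entries above mount by LABEL=, a quick sanity check is to confirm each label actually resolves to a device — an unresolvable label breaks a label-based mount during the upgrade. A sketch using findfs from util-linux (the label list is copied from the fstab above):

```shell
# Resolve each LABEL= mount point to its backing device;
# a failure here means the installer cannot mount by that label.
for l in / /boot /home /tmp /usr /usr/local /var; do
    echo -n "LABEL=$l -> "
    findfs LABEL=$l || echo "not found"
done
```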
What do you get if you run 'raidstart /dev/md0' from tty2?
I get the following:
"could not find devices associated with raid device md0"
This happens even if I copy over the raidtab from the existing / partition to the
installer environment.
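For reference, raidstart from the raidtools package reads /etc/raidtab by default; when a raidtab has been copied somewhere else it can be named explicitly, and /proc/mdstat shows what the kernel actually assembled. A sketch (the /tmp/raidtab path is hypothetical):

```shell
# Start the array from an explicitly named raidtab copy
raidstart -c /tmp/raidtab /dev/md0

# See which md devices the kernel considers active
cat /proc/mdstat
```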
Could you attach your raidtab?
The partitions comprising /dev/md1 are being used for something else at the moment.
# raid-level 1
# nr-raid-disks 2
# nr-spare-disks 0
# chunk-size 4
# persistent-superblock 1
# device /dev/hda2
# raid-disk 0
# device /dev/hdc2
# raid-disk 1
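The stanza above is commented out and covers the hda2/hdc2 pair; the report never shows the /dev/md0 section. Purely for illustration, a complete raidtab stanza for md0 would look like the following — this is not from the report, and it assumes RAID-1 over hda1/hdc1 as described earlier:

```shell
# Hypothetical raidtab stanza for the array that failed to mount
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    chunk-size              4
    persistent-superblock   1
    device                  /dev/hda1
    raid-disk               0
    device                  /dev/hdc1
    raid-disk               1
```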
FYI: I had a similar problem while trying to upgrade a 7.2 system, which appeared
to be fixed by running fdisk and changing the partition type to fd. Some of my
RAID partitions were set to type 83.
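The workaround above — switching the partition type from 83 (plain Linux) to fd (Linux raid autodetect) so the kernel assembles the array at boot — can be done non-interactively with sfdisk. A sketch using the old util-linux sfdisk syntax of this era (device and partition number match the example above):

```shell
# Show the current partition table, including the type Id column
sfdisk -l /dev/hda

# Change partition 1 on /dev/hda to type fd (Linux raid autodetect)
sfdisk --id /dev/hda 1 fd
```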
Could you copy the /tmp/syslog and /tmp/anaconda.log from your system when it
fails to mount?
In the end I copied the data elsewhere and deleted the RAID device, so I can't
provide that information.
Hmm... I don't see anything obvious and haven't been able to reproduce it
myself, but we've improved the error-handling code somewhat for our next
release, so hopefully if there's a problem in the future we can get more
information out of it.