Bug 58986 - boot failure: VMS: cannot open root device "806" or "08:06"
Product: Red Hat Linux
Classification: Retired
Component: kudzu
Platform: i686 Linux
Severity: medium
Assigned To: Bill Nottingham
QA Contact: Brian Brock
Reported: 2002-01-28 17:33 EST by Dale Hanych
Modified: 2014-03-16 22:25 EDT

Doc Type: Bug Fix
Last Closed: 2003-02-27 19:06:57 EST

Attachments: None
Description Dale Hanych 2002-01-28 17:33:45 EST
From Bugzilla Helper:
User-Agent: Mozilla/4.77 [en] (Win95; U)

Description of problem:
Dell 4400, RAID Controller 3/Di, delivered with Red Hat 6.2 (worked great).
Attempted to upgrade to 7.1 (Seawolf), Pro
The upgrade seemed to be fine, but all subsequent attempts to boot fail with a kernel panic:

                            request_module[block-manager-8] Root fs not found
                            VFS cannot open root device "806" or "08:06"
                            please append a correct "root=" boot option

Am able to boot from the install disk in rescue mode.
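
For reference, the "root=" option named in the panic message is normally supplied by the boot loader; on Red Hat 7.1 that is a stanza in /etc/lilo.conf. The "08:06" in the message is the device's major:minor pair, which under standard Linux SCSI-disk numbering (major 8, minor 6) corresponds to /dev/sda6. A hypothetical minimal stanza, with illustrative device names not taken from this report:

```
boot=/dev/sda
prompt
timeout=50

image=/boot/vmlinuz-2.4.2-2
    label=linux
    root=/dev/sda6        # supplies the "root=" the kernel complained about
    initrd=/boot/initrd-2.4.2-2.img
    read-only
```

After editing lilo.conf, /sbin/lilo must be re-run for the change to take effect.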

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Turn on the machine, let it try to boot.

Actual Results:  The boot process starts.
The RAID controller starts.
The kernel is loaded and starts executing.
Execution ends with a kernel panic and the output above.

Expected Results:  Boot process completes.

Additional info:

I have spent 2 1/2 months attempting fixes through customer service (case 194547).  They've sent me here.

The following is a summary of my responses to their requests:

e2label showed that the labels had been stripped from all of the partitions.  I reset all of the appropriate labels.

Same result on the next reboot attempt.

From the rescue boot there is no /etc/fstab.  In fact /etc is almost empty; e.g., group, protocols, and services are all links to /mnt/runtime/etc.  All files therein have dates of <today> or April 09 03:38.  Furthermore, the entirety of / also appears to be sparsely populated.  I tracked down that the previous image is mounted at /mnt/sysimage; all of the filesystems mounted there appear good (/mnt/sysimage/etc/fstab is

After Booting into rescue mode....

The only thing in /var is the directory "state."  The earliest dmesg and messages files that I can find on the system (find / -name xxx) are in /mnt/sysimage/var/log, and are from Nov 04 (the last good boot prior to the upgrade).

Neither lilo.conf nor fstab is in /etc (there are only 11 files in /etc).  I have executed 'lilo -C /mnt/sysimage/etc/lilo.conf' to confirm that the included version is the one in use; I have also tried removing the 'linear' directive (as it is not in lilo.old).  Same kernel panic in both cases.

mkinitrd returns:   /mnt/sysimage/sbin/mkinitrd:  No such file or directory.

There are no initrd files for the new kernel (although they do exist for the earlier [6.2] version that came installed on the machine).
I was unable to get a man page for mkinitrd.   Perhaps if you could give me the parameters, we might be able to get that to run. 
When I tried it before, all of the partitions were still mounted at /mnt/sysimage -- since the fstab file references them all as mounted from /

boot into 7.1 rescue mode
chroot /mnt/sysimage
mount -a
/sbin/mkinitrd /boot/initrd-<version>.img

It starts working, but returns:
      "no module percraid found for kernel <kernel-version>"

That's when customer service stopped responding.
Comment 1 Arjan van de Ven 2002-01-29 05:53:19 EST
Hmm... that's the OLD name for "aacraid"....
No wonder it doesn't work if it tries to load a non-existent driver.

Looks like you're almost there. Just before typing the mkinitrd in what you
described last, edit (with vi or whatever editor you like) the /etc/modules.conf
file and change "percraid" into "aacraid" and then generate the initrd as
described, or rather as

mkinitrd /boot/initrd-2.4.2-2.img 2.4.2-2
(e.g. repeat the version as the second argument.  If you use the SMP kernel, add smp,
etc.)
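
The rename described above amounts to a one-line substitution in modules.conf, followed by regenerating the initrd. A minimal sketch, run here against a scratch copy so it is safe to try anywhere (on the real system the file is /etc/modules.conf inside the chroot, and mkinitrd must still be re-run afterwards):

```shell
#!/bin/sh
# Sketch of the fix from Comment 1, applied to a scratch copy of
# modules.conf rather than the real /etc/modules.conf.
conf=$(mktemp)
printf 'alias scsi_hostadapter percraid\n' > "$conf"

# Replace the obsolete driver name "percraid" with "aacraid".
sed 's/percraid/aacraid/' "$conf" > "$conf.new" && mv "$conf.new" "$conf"
cat "$conf"    # alias scsi_hostadapter aacraid

# On the real system, regenerate the initrd afterwards, e.g.:
#   mkinitrd /boot/initrd-2.4.2-2.img 2.4.2-2
rm -f "$conf"
```

Using a temporary file keeps the sketch non-destructive; the substitution itself is exactly what editing the file in vi would accomplish.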
Comment 2 Dale Hanych 2002-02-01 20:29:04 EST
Also had to update /etc/lilo.conf to add initrd lines to both kernels.  After that, everything was golden.

Thanks, nice to have the machine back.
Comment 3 Bill Nottingham 2002-02-02 21:27:37 EST
Can you post the output of 'lspci' on the system in question?
Comment 4 Bill Nottingham 2003-02-27 19:06:57 EST
Closed, no response.
