Red Hat Bugzilla – Bug 189658
kernels 2080+2096 crash because of nv_sata (raid)
Last modified: 2015-01-04 17:26:43 EST
Description of problem:
When I update my kernel from kernel_2.6.15-1_2054-x86_64 to 2.6.16-1_2080 or 2096, I
get a kernel panic at restart because of my nv_sata (RAID) controller, and /dev,
/proc, etc. are not mounted.
The message says that the kernel cannot find the hard drives at this address,
but when I boot the 2054 kernel it works fine.
Version-Release number of selected component (if applicable):
kernel 2080 and 2096
Steps to Reproduce:
A new kernel update has been released (Version: 2.6.18-1.2200.fc5)
based upon a new upstream kernel release.
Please retest against this new kernel, as a large number of patches
go into each upstream release, possibly including changes that
may address this problem.
This bug has been placed in NEEDINFO state.
Due to the large volume of inactive bugs in bugzilla, if this bug is
still in this state in two weeks' time, it will be closed.
Should this bug still be relevant after this period, the reporter
can reopen the bug at any time. Any other users on the Cc: list
of this bug can request that the bug be reopened by adding a
comment to the bug.
In the last few updates, some users upgrading from FC4->FC5
have reported that installing a kernel update has left their
systems unbootable. If you have been affected by this problem,
please check that you have only one version of device-mapper and lvm2
installed. See bug 207474 for further details.
If this bug is a problem preventing you from installing the
release this version is filed against, please see bug 169613.
If this bug has been fixed, but you are now experiencing a different
problem, please file a separate bug for the new problem.
This bug has been mass-closed along with all other bugs that
have been in NEEDINFO state for several months.
Due to the large volume of inactive bugs in bugzilla, this
is the only method we have of cleaning out stale bug reports
where the reporter has disappeared.
If you can reproduce this bug after installing all the
current updates, please reopen this bug.
If you are not the reporter, you can add a comment requesting
that it be reopened, and someone will get to it as soon as possible.