Bug 1465929 - New Kernels unbootable after yum update with kernel packages
Status: NEW
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: yum
Version: 7.3
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Assigned To: Valentina Mukhamedzhanova
QA Contact: BaseOS QE Security Team
Reported: 2017-06-28 09:05 EDT by Wafa Sadri
Modified: 2017-07-18 06:28 EDT
CC: 2 users
Type: Bug

Description Wafa Sadri 2017-06-28 09:05:47 EDT
Description of problem:
When updating a machine via yum update, the newly installed kernel is sometimes left unbootable because the initramfs is not written properly. This is a major issue since it renders systems completely unbootable after an update. To recover, one has to boot into an older kernel, reinstall the kernel via yum reinstall kernel, and then reboot.
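In shell terms, the recovery described above is roughly the following (a sketch; the kernel entries shown in the GRUB menu will differ per system):

  # 1. Reboot and choose an older, still-working kernel from the GRUB menu.
  # 2. Once booted, reinstall the kernel package so its scriptlets rebuild the initramfs:
  yum reinstall kernel
  # 3. Reboot into the rebuilt kernel:
  reboot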

Version-Release number of selected component (if applicable):
RHEL 7.3

How reproducible:
Out of 300+ servers, 100 were affected, seemingly at random.

Steps to Reproduce:
1. Upgrade from kernel-3.10.0-514.10.2 to kernel-3.10.0-514.21.2 via yum
2. Reboot the system
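In shell terms (a sketch; assumes kernel-3.10.0-514.21.2 is available from the configured repositories):

  yum update kernel    # installs kernel-3.10.0-514.21.2 alongside the running kernel
  reboot               # boots the new default GRUB entry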


Actual results:
The system is unable to boot; the following message appears:
"Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)"

Expected results:
Server boots normally into newly installed kernel

Additional info:
This is also being discussed here: https://www.redhat.com/archives/spacewalk-list/2017-June/msg00043.html
Comment 2 Karel Srot 2017-07-18 03:30:51 EDT
I believe that such failures should be handled at the kernel package level, in particular in the post-install scriptlets where the initramfs is recreated and GRUB is updated.
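For reference, the intended end state of those scriptlets can be approximated by hand (a sketch, not the actual scriptlet contents; the version string 3.10.0-514.21.2.el7.x86_64 is only an example):

  # regenerate the initramfs for the new kernel
  dracut -f /boot/initramfs-3.10.0-514.21.2.el7.x86_64.img 3.10.0-514.21.2.el7.x86_64
  # confirm the boot entry exists and points at that initramfs
  grubby --info=/boot/vmlinuz-3.10.0-514.21.2.el7.x86_64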
Comment 3 Wafa Sadri 2017-07-18 03:48:31 EDT
Is there a way to check whether the initramfs was generated properly after the installation? After reinstalling the kernels manually (yum reinstall kernel), the initramfs is generated properly and the new kernel boots. This "manual fix" has worked 100% of the time (tested on over 30 machines that had this problem, both virtual and bare metal).
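One possible sanity check (a sketch, not an official verification procedure; assumes the default /boot layout and lsinitrd from the dracut package):

  for img in /boot/initramfs-*.img; do
      # a truncated or missing image is usually suspiciously small or unreadable
      ls -lh "$img"
      lsinitrd "$img" > /dev/null && echo "OK: $img" || echo "SUSPECT: $img"
  done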

The question is: what is the difference between "yum update" and "yum reinstall kernel"? Why does one break the system while the other works perfectly? There has to be a difference.
