Red Hat Bugzilla – Bug 1465929
New Kernels unbootable after yum update with kernel packages
Last modified: 2017-07-18 06:28:15 EDT
Description of problem:
When updating a machine via yum update, the newly installed kernel is randomly left unbootable because the initramfs is not written properly. This is a major issue, since it renders systems completely unbootable after an update. To resolve it, one has to boot into an older kernel, reinstall the kernel via yum reinstall kernel, and reboot (see the sketch below).
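For reference, a minimal sketch of the manual recovery, assuming the previously installed kernel is still present and selectable in the GRUB menu:

  # boot the previous, still-working kernel from the GRUB menu, then:
  yum reinstall kernel    # re-runs the kernel scriptlets, which regenerate the initramfs
  reboot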
Version-Release number of selected component (if applicable):
kernel-3.10.0-514.21.2 (upgraded from kernel-3.10.0-514.10.2)
How reproducible:
Intermittent. Out of 300+ servers, 100 were affected, seemingly at random.
Steps to Reproduce:
1. Upgrade from kernel-3.10.0-514.10.2 to kernel-3.10.0-514.21.2 via yum
2. Reboot the system
Actual results:
The system is unable to boot. The following message appears:
"Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)"
Expected results:
Server boots normally into the newly installed kernel.
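As an illustrative check (not something the packaging provides), the on-disk images can be compared after booting back into the old kernel; an initramfs that was not written properly would be expected to be missing or unusually small:

  # after booting back into the previous kernel
  ls -l /boot/vmlinuz-*
  ls -l /boot/initramfs-*.img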
This is also being discussed here: https://www.redhat.com/archives/spacewalk-list/2017-June/msg00043.html
I believe that such failures should be handled at the kernel package level, in particular in the post-install scriptlets where the initramfs is recreated and the GRUB configuration is updated.
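For anyone who wants to see exactly what those scriptlets run, they can be dumped from the installed package(s); this only shows what is supposed to run, it does not by itself explain the failure:

  # print the scriptlets recorded in the installed kernel package(s)
  rpm -q --scripts kernel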
Is there a way to check, after the installation, whether the initramfs was generated properly? After reinstalling the kernel manually (yum reinstall kernel), the initramfs is generated properly and the new kernel boots. This "manual fix" has worked 100% of the time (tested on over 30 machines that had this problem, virtual and bare metal).
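One check we have been considering (just an idea, not an official validation mechanism) is to list the freshly written image with dracut's lsinitrd right after the update and treat a non-zero exit status or an empty listing as suspicious; the version string below is only an example and has to match the kernel that was just installed:

  IMG=/boot/initramfs-3.10.0-514.21.2.el7.x86_64.img   # example path
  lsinitrd "$IMG" > /dev/null && echo "$IMG looks readable" || echo "$IMG is missing or unreadable"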
The question is: what is the difference between "yum update" and "yum reinstall kernel"? Why does one break the system while the other works reliably? There has to be a difference.
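To help narrow that down, the two transactions and the resulting images could be compared. This is only an investigative sketch; <transaction-id> and <version> are placeholders:

  # find the IDs of the failing update and of the later reinstall
  yum history list kernel
  # show details (packages, return codes, any recorded scriptlet output) for one transaction
  yum history info <transaction-id>

  # keep a copy of the broken image before reinstalling, then compare the listings
  # (process substitution requires bash)
  cp /boot/initramfs-<version>.img /root/initramfs-broken.img
  yum reinstall kernel
  diff <(lsinitrd /root/initramfs-broken.img) <(lsinitrd /boot/initramfs-<version>.img)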