Description of problem:
Updating a clean install of Fedora 25 Workstation, with Secure Boot enabled the whole time, breaks the shim boot chain.
I get a MOK screen, from which I have tried loading every .efi file on the EFI system partition.
The system never boots at all; I get the MOK screen and then fall back to the BIOS.
dosfsck says the .efi files are damaged; I let it try to fix them and rebooted, without luck.
In the MOK screen, loading grubx64.efi reports a "corrupted volume" error.
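For anyone else hitting this, a minimal sketch of checking the EFI system partition read-only from a live environment; the device name is an assumption from my machine, so confirm it with `fdisk -l` first:

```shell
# Sketch: read-only check of the EFI system partition from a live image.
# /dev/nvme0n1p1 is an assumed device name; confirm with `fdisk -l`.
check_esp() {
    esp="$1"
    lsblk -f "$esp"      # should show a vfat filesystem (the ESP)
    dosfsck -n "$esp"    # -n: make no changes, just report damage
    # dosfsck -w -r "$esp"   # uncomment to attempt an interactive repair
}
# Usage (as root): check_esp /dev/nvme0n1p1
```

The `-n` check is safe to run repeatedly; only attempt the repair variant once you have a backup of the partition.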
Version-Release number of selected component (if applicable):
I did a clean install and rebooted several times, then ran
"sudo dnf update"; approximately 1000 packages were installed, including a new kernel.
Then I rebooted and also power-cycled, still getting the MOK -> BIOS screens.
I did it all twice, with exactly the same problem.
The installs went fine, but the update broke everything.
A third attempt, with Secure Boot disabled, had no problems.
Steps to Reproduce:
1. Install Fedora 25 with secure boot on
2. sudo dnf update
Expected result: an alive and kicking computer.
I am using an M.2 Intel Pro 6000p 128 GB as the boot drive.
Motherboard: Asus Z170I PRO GAMING WIFI (Mini ITX).
I am seeing the same results with an Intel NUC6i5SYK and an Intel SSD 600p 256 GB M.2 PCIe boot drive.
Steps to reproduce:
1. Install Fedora 25 from live USB stick.
2. Run the Software Updater and let it "Reboot and Install" all of the packages it deems in need of update.
3. Wait for the update to finish and for the machine to reboot again automatically at the end of the update installation.
It always boots into MOK management after the updates are installed. Selecting to continue the boot at any point after that (either directly or after trying to enroll keys/hashes) returns a message that no boot device can be found. This happens whether Secure Boot is enabled or disabled in the UEFI setup.
If I stop at step 1 and never run the software update, I can shut down and reboot as often as I like without this happening.
Getting the same results on a Dell Precision 5510 with an Intel SSD 600p as well, following the same steps to reproduce. I can successfully reboot and install software before running dnf update; after dnf update, it boots to the MOK Manager.
Getting the same results on an Acer Swift 3 (Intel SSDPEKKW256G7).
After the install, everything is OK. After dnf update, the system goes to MokManager.
I seem to be running into the same issue.
System: Intel i5-7600 + ASRock H270ac/n motherboard + Intel 6-series NVMe SSD.
Installed Fedora with default partition settings.
The original installation uses grub* packages with the fc24 suffix (grub-efi, grub, grub-tools). These install and work properly.
Updating these packages to version 2.02-0.38.fc25 breaks the system.
The boot process reports corrupt \BOOT\EFI\fedora\grubx64.efi file.
Current workaround: running dnf upgrade -x 'grub*' to exclude these packages from the upgrade.
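A sketch of making that exclusion persistent, so a plain `dnf upgrade` also skips the packages; `exclude` is a standard dnf.conf option, though the glob here is an assumption and may match more packages than intended:

```ini
# /etc/dnf/dnf.conf
[main]
exclude=grub*
```

Remove the line again once a fixed grub2 build is available, or the system will silently stay on the old version.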
Had the same problem with an Intel 600p NVMe SSD; switching to a Samsung 960 EVO worked fine.
In the case of the Intel SSDs, this ticket tracks a filesystem corruption issue caused by a bug in the firmware: https://bugzilla.redhat.com/show_bug.cgi?id=1402533
I was hit hard by this on my custom Ryzen 1700 build with an Intel 600p 256 GB NVMe: my EFI partition is corrupt, and boot loading randomly crashes and misbehaves (failing to boot, black screens). This was hard to track down. A terrible experience; thumbs down, Intel. /CCed
I know that the new 121 firmware for the 600p has resolved this issue for me.
Have you all tried the new FW? Does the issue still occur? I have not seen it myself, so I am just wondering.
Can everyone who is having trouble confirm the FW revision they're running?
You can do so by grabbing a copy of nvme-cli (it's in the Fedora repositories, or available from https://github.com/linux-nvme/nvme-cli)
and running `nvme list` as root and pasting the output here. I ask because I am currently testing Fedora/Debian installs on 121 and I can't reproduce. The issues everyone is having seem similar to bug 1402533, so I want to confirm we're all operating on the same firmware.
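A sketch of how that check might look; `nvme list` is the nvme-cli command mentioned above, and the sysfs fallback path is an assumption that holds on recent kernels:

```shell
# Sketch: print NVMe firmware revisions, via nvme-cli when available,
# otherwise via sysfs (no extra packages needed).
show_nvme_fw() {
    if command -v nvme >/dev/null 2>&1; then
        nvme list    # the table includes an "FW Rev" column
    else
        for f in /sys/class/nvme/nvme*/firmware_rev; do
            if [ -e "$f" ]; then
                printf '%s: %s\n' "$f" "$(cat "$f")"
            fi
        done
    fi
    return 0
}
# Usage (as root): show_nvme_fw
```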
The original bug 1402533 still exists in the latest 121C firmware; my CentOS VMs corrupted their filesystems within several days of applying the 121C FW.
FWIW, running ESXi 6.0 U3 on an Intel 600p, all CentOS VMs corrupt the superblock within a week. The Windows VMs work fine, though.
I raised this issue on the Intel community forums several months ago and then found the Red Hat bug report. I can't help thinking this is the same issue.
What's your VM setup? I'm looking to try to reproduce the issue. I'll set up what you have and see if I can get the corruption.
Nothing special: a Supermicro X10SDV-TLN4F with 64 GB Supermicro ECC RAM running VMware ESXi 6.0 U3. It has multiple drives attached: an Intel 600p 512 GB NVMe, a Crucial MX300 SSD, and a Western Digital Red 1 TB SATA HDD. I am using CentOS 6 and 7 x64 VMs. In February I moved the CentOS 7 x64 VMs onto the Intel 600p and they ran very well, but within 7 days it started corrupting them, so they were moved back to the Crucial SSD and have been running OK since then. Interestingly, the sole Windows 10 VM continues to run OK on the Intel 600p.
FWIW, ESXi boots from the Intel 600p and runs well. I also installed CentOS plus Docker in February and it died the same way. The VMs are configured with the VMware paravirtual or LSI controller; it makes no difference.
I think the 121C firmware is improved; the new CentOS VM I made last week is still running. But the CentOS VM running Graylog died within 4 days.
Note that you need to reboot the VMs to test whether the filesystem is corrupt.
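A sketch of forcing that check on the next boot instead of waiting for corruption to surface on its own; the grubby arguments are an assumption for systemd-era CentOS 7, and /forcefsck is the older SysV mechanism:

```shell
# Sketch: make the next reboot run a full fsck so corruption shows up
# immediately rather than days later.
schedule_fsck() {
    if command -v systemctl >/dev/null 2>&1; then
        # CentOS 7 (systemd): force fsck via a kernel argument
        grubby --update-kernel=ALL --args='fsck.mode=force'
    else
        # CentOS 6 (SysV init): the boot scripts honor this flag file
        touch /forcefsck
    fi
}
# Run as root inside the VM, then reboot it.
```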
Still happens with a Dell Vostro 3700
This message is a reminder that Fedora 25 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 25. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version' of '25'.
Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.
Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 25 is end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version prior to this bug being closed, as described in the policy above.
Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.
Fedora 25 changed to end-of-life (EOL) status on 2017-12-12. Fedora 25 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.
If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this bug.
Thank you for reporting this bug and we are sorry it could not be fixed.
Hi, I got the same error upgrading from F25 to F27.
I have an Asus UX303U Zenbook. I'm stuck in the blue MOK screen.
I've checked Secure Boot and it is disabled.
How to replicate: upgrade from F25 to F27; I hit "install and reboot" in the Software updater and ended up here.
How can I fix the keys?
@Daniel, I had the same error due to the shim package either being autoremoved or not installed as part of the upgrade process. Obviously, your mileage may vary but to fix it, I booted a live CD, mounted the local disk partitions and installed the shim packages. Here's how I did that (you'll need to change device names, etc.):
- Download a live CD and boot on the affected machine.
- Open terminal
- fdisk -l and see what the main hard disk is called. In my computer's case, it is /dev/nvme0n1p*.
- Unlock the LUKS disk (if you have an encrypted disk): udisksctl unlock -b /dev/nvme0n1p3
- mount /dev/mapper/fedora-platypus-root /mnt
- mount /dev/nvme0n1p2 /mnt/boot
- mount /dev/nvme0n1p1 /mnt/boot/efi
- mount --bind /dev /mnt/dev
- mount --bind /proc /mnt/proc
- mount --bind /sys /mnt/sys
- mount -o bind /run /mnt/run
- chroot /mnt
- Reinstall the signed shim into /boot/efi: dnf reinstall grub2-efi-x64 grub2-efi-x64-modules shim
- chmod -x /etc/grub.d/30_os-prober
- grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
- exit
- reboot
- You should have a functioning installation.
- https://forums.fedoraforum.org/showthread.php?p=1783554 (warning though, don't mount at /efi, mount at /boot/efi - that was my fatal flaw)
- https://fedoraproject.org/wiki/GRUB_2
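The steps above, collected into one function for convenience. This is only a sketch: the device and LVM names (nvme0n1p*, fedora-platypus-root) are from my machine, so adjust them to your own layout after checking `fdisk -l`.

```shell
# Sketch of the live-CD recovery steps above. Run as root; all device
# names below are examples from my setup, not universal.
recover_shim() {
    udisksctl unlock -b /dev/nvme0n1p3              # only if LUKS-encrypted
    mount /dev/mapper/fedora-platypus-root /mnt     # root filesystem
    mount /dev/nvme0n1p2 /mnt/boot
    mount /dev/nvme0n1p1 /mnt/boot/efi              # /boot/efi, not /efi!
    for fs in dev proc sys run; do
        mount --bind "/$fs" "/mnt/$fs"
    done
    chroot /mnt /bin/sh -c '
        dnf reinstall -y grub2-efi-x64 grub2-efi-x64-modules shim &&
        chmod -x /etc/grub.d/30_os-prober &&
        grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
    '
}
# After it finishes: exit the live session and reboot.
```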
(In reply to Ryan Hefner from comment #17)
> @Daniel, I had the same error due to the shim package either being
> autoremoved or not installed as part of the upgrade process. Obviously, your
> mileage may vary but to fix it, I booted a live CD, mounted the local disk
> partitions and installed the shim packages. Here's how I did that (you'll
> need to change device names, etc.):
> - Download a live CD and boot on the affected machine.
> - Open terminal
> - fdisk -l and see what the main hard disk is called. In my computer's case,
> it is /dev/nvme0n1p*.
> - Unlock the LUKS disk (if you have an encrypted disk): udisksctl unlock -b
> - mount /dev/mapper/fedora-platypus-root /mnt
> - mount /dev/nvme0n1p2 /mnt/boot
> - mount /dev/nvme0n1p1 /mnt/boot/efi
> - mount --bind /dev /mnt/dev
> - mount --bind /proc /mnt/proc
> - mount --bind /sys /mnt/sys
> - mount -o bind /run /mnt/run
> - chroot /mnt
> - Reinstall the signed shim into /boot/efi: dnf reinstall grub2-efi-x64
> grub2-efi-x64-modules shim
> - chmod -x /etc/grub.d/30_os-prober
> - grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
> - exit
> - reboot
> - You should have a functioning installation.
> - https://forums.fedoraforum.org/showthread.php?p=1783554 (warning though,
> don’t mount at /efi, mount at /boot/efi - that was my fatal flaw)
> - https://fedoraproject.org/wiki/GRUB_2
This worked for me. I was doing some cleanup (or what I thought was cleanup) and ran 'dnf autoremove', and on the next reboot I had the MOK screen. Following the steps above, after adjusting them slightly for my disk setup, I was able to get back to a functional system. One note to add: during the 'reinstall' step I ended up having to actually 'install' rather than reinstall, which pulled in the missing pieces (os-prober, a bunch of grub2 packages, etc.) needed for the following steps.