Description of problem:
Adding a second initrd image (in bootloader entries) is not loaded by grub.

How reproducible:
Always

Steps to Reproduce:
1. Create a custom initrd image, e.g. second_initrd.img, under /boot/ostree/rhcos-<latest ver>/.
2. Add the image as a second entry on the initrd line (in the latest configuration version):

cat /boot/loader/entries/ostree-2-rhcos.conf
title Red Hat Enterprise Linux CoreOS 44.81.202001240931.0 (Ootpa) (ostree:0)
version 2
options $ignition_firstboot rhcos.root=crypt_rootfs console=tty0 console=ttyS0,115200n8 ignition.platform.id=qemu rd.luks.options=discard ostree=/ostree/boot.0/rhcos/aecf8a67276be4387f633c813b55c45644098fe3a17498341e379ca45912e796/0
linux /ostree/rhcos-aecf8a67276be4387f633c813b55c45644098fe3a17498341e379ca45912e796/vmlinuz-4.18.0-147.3.1.el8_1.x86_64
initrd /ostree/rhcos-aecf8a67276be4387f633c813b55c45644098fe3a17498341e379ca45912e796/initramfs-4.18.0-147.3.1.el8_1.x86_64.img /ostree/rhcos-aecf8a67276be4387f633c813b55c45644098fe3a17498341e379ca45912e796/second_initrd.img

3. Reboot.

Actual results:
second_initrd.img does not appear to be loaded by grub.

Expected results:
second_initrd.img is loaded by grub (and the additions made in the image take effect).

Additional info:
This worked on Red Hat Enterprise Linux CoreOS 43.81.201911131833.0. It fails on Red Hat Enterprise Linux CoreOS 44.81.202001240931.0.
We were using this method for early tuning of the node with regard to low latency and realtime. As we currently have no workaround, I am marking this as high severity. We can lower it, or even close the bug, if an alternative method for executing an early tuning script exists.
Lowering severity to medium; re-evaluating whether this is a bug.
This doesn't seem like something we want to support customers doing. I think the better approach would be to use `rpm-ostree initramfs` to regenerate the initrd and then reboot into it. That would be fine for a single node, but I wonder how best to scale it across multiple nodes in the cluster.
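A sketch of that approach, assuming rpm-ostree's client-side initramfs regeneration support (the `--arg` flags are forwarded to dracut; the file path is illustrative, and the exact flags should be verified against the rpm-ostree version in use):

```shell
# Enable local initramfs regeneration and forward args to dracut;
# dracut's -I installs the named file into the rebuilt initramfs.
# (Hypothetical file path; host-specific, not runnable outside an ostree host.)
rpm-ostree initramfs --enable --arg=-I --arg=/etc/sysconfig/example-tuning
# The regenerated initramfs belongs to the new deployment and takes
# effect after rebooting into it:
systemctl reboot
```

Unlike hand-editing the BLS entry, this survives upgrades because the initramfs is regenerated for each new deployment.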
It might be nice to understand exactly what the "early tuning of the node with regard to low latency and realtime" involved. What files were you placing in the second initramfs? What were they doing?
We discussed this OOB a bit. The TL;DR is that they want to be able to set up CPU affinity in the initrd. While the second initrd trick will work at first, it won't persist across upgrades. There is an RFE upstream to formalize this (https://github.com/coreos/rpm-ostree/issues/1930), since it's a use case that has come up in other contexts as well.

> The second_initrd.img loaded by grub (and additions made in the image take place)

Hmm, works for me on a fresh RHCOS I built locally:

[root@coreos ~]# mkdir tmp
[root@coreos tmp]# echo 'foobar' > foobar
[root@coreos tmp]# find . | cpio -co > /boot/ostree/rhcos-907613502bf72c0ea8e30928ff9d6615c2b1bee3ab9aa5eb755611e6fdfe209b/foobar.img
1 block
[root@coreos tmp]# vi /boot/loader/entries/ostree-1-rhcos.conf
<add rd.break to options, and append the path to foobar.img on the initrd line>
[root@coreos tmp]# cat /boot/loader/entries/ostree-1-rhcos.conf
title Red Hat Enterprise Linux CoreOS 44.81.202002112137.0 (Ootpa) (ostree:0)
version 1
options rhcos.root=crypt_rootfs console=tty0 console=ttyS0,115200n8 ignition.platform.id=qemu rd.luks.options=discard $ignition_firstboot ostree=/ostree/boot.1/rhcos/907613502bf72c0ea8e30928ff9d6615c2b1bee3ab9aa5eb755611e6fdfe209b/0 rd.break
linux /ostree/rhcos-907613502bf72c0ea8e30928ff9d6615c2b1bee3ab9aa5eb755611e6fdfe209b/vmlinuz-4.18.0-147.el8.x86_64
initrd /ostree/rhcos-907613502bf72c0ea8e30928ff9d6615c2b1bee3ab9aa5eb755611e6fdfe209b/initramfs-4.18.0-147.el8.x86_64.img /ostree/rhcos-907613502bf72c0ea8e30928ff9d6615c2b1bee3ab9aa5eb755611e6fdfe209b/foobar.img

<reboot>

Press Enter for emergency shell or wait 5 minutes for reboot.
switch_root:/# cat /foobar
foobar
It seems that the root problem was in the second image itself. The image contained three files: etc/sysconfig/irqbalance, etc/systemd/system.conf, and usr/lib/dracut/hooks/pre-udev/00-tuned-pre-udev.sh. It seems that loading 00-tuned-pre-udev.sh was not successful and that failure broke the boot process. Closing this BZ.
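For context on why such a hook can break boot: dracut pre-udev hooks are shell scripts sourced by dracut early in the initrd, so an unguarded failure in one can abort the whole sequence. A hypothetical minimal hook of this shape (the tuning logic shown is illustrative, not the actual script from this report):

```shell
#!/bin/sh
# Hypothetical /usr/lib/dracut/hooks/pre-udev/00-tuned-pre-udev.sh sketch.
# Pre-udev hooks run before udev starts; guard every step, since an
# unhandled failure here can take down early boot.

# Read a kernel command line option (illustrative: isolcpus).
isolated_cpus=$(sed -n 's/.*isolcpus=\([^ ]*\).*/\1/p' /proc/cmdline)

if [ -n "$isolated_cpus" ]; then
    # Example early tuning step: just log the value; a real hook would
    # apply CPU affinity here. Never let the write failure propagate.
    echo "tuned-pre-udev: isolcpus=$isolated_cpus" > /dev/kmsg 2>/dev/null || :
fi
```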