AArch64 guests can now use virtio-pci, and we're converting over to it from virtio-mmio. The RHELSA installation initrd in compose/.../pxeboot/ already has the virtio_pci module, so I can initiate an install of a virtio-pci guest, and the install succeeds. Strangely, even in that case, I end up with an installed initrd that does not have virtio-pci (I expected dracut to pick up all currently loaded modules, but apparently it doesn't). Also, not so strangely, if I install a guest using virtio-mmio and then convert it to use virtio-pci, the guest's initrd needs virtio-pci added. It would be better to always add the module unconditionally to the initrd, to make sure we can both reboot a virtio-pci guest after install and convert existing guests.
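For the convert-after-install case, a possible workaround sketch until dracut does this by default (run as root inside the guest; assumes the standard /boot/initramfs-<version>.img layout):

```shell
# Regenerate the running kernel's initramfs with virtio_pci forced in,
# so the guest can still boot after being switched from virtio-mmio
# to virtio-pci.
kver=$(uname -r)
dracut -f --add-drivers virtio_pci "/boot/initramfs-${kver}.img" "${kver}"
```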
Hmm, I just noticed that this bug is supposedly already ON_QA. However, as you can see from comment 4, installing from a latest compose still has the problem. This bug has not been fixed.
I'm poking a bit at virtio-pci + aarch64. I don't think putting virtio-pci in dracut is the whole story, though.

The libvirt support for aarch64 + pci uses qemu's generic PCI express host hw. The qemu patches from January are here:

https://lists.gnu.org/archive/html/qemu-devel/2015-01/msg00332.html

In patch #3 Alex pointed out that linux kernel support for arm64 generic PCI still needed work:

https://lists.gnu.org/archive/html/qemu-devel/2015-01/msg00331.html

And it seems some of the changes he mentions only landed in kernel.git in August and are queued for 4.3. For example:

https://github.com/torvalds/linux/commit/aa4a5c0d2d7e3c30f9df033ea0367f148bb369f6

That patch isn't in the rhelsa kernel, or in fedora, so I'm not sure libvirt + aarch64 + pci ever worked for any RH distro. Though drjones, I see your demo qemu command line at bug 1231719#c2 doesn't use an explicit PCI bridge on the command line, so maybe it's using some different code path, and that's why it works for the install case.

Also FWIW, virtio_pci is in the initrd of the Fedora 23 install media, and in the post-install initrd, even when using virtio-mmio. But it isn't enough to make libvirt + aarch64 + pci work.
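For anyone following along: I believe the Kconfig option the linked commit concerns is CONFIG_PCI_HOST_GENERIC (an assumption worth double-checking). A quick way to see whether a given kernel build has it, assuming the usual /boot/config-<version> layout:

```shell
# Look for the generic PCI host controller and virtio-pci options
# in the installed kernel's build config.
grep -E '^CONFIG_(PCI_HOST_GENERIC|VIRTIO_PCI)' "/boot/config-$(uname -r)"
```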
(In reply to Cole Robinson from comment #10)
> I'm poking a bit at virtio-pci + aarch64. I don't think putting virtio-pci
> in dracut is the whole story though.
>
> The libvirt support for aarch64 + pci uses qemu's generic PCI express host
> hw. The qemu patches from January are here:
>
> https://lists.gnu.org/archive/html/qemu-devel/2015-01/msg00332.html
>
> In patch #3 Alex pointed out that linux kernel support for arm64 generic PCI
> still needed work:
>
> https://lists.gnu.org/archive/html/qemu-devel/2015-01/msg00331.html

We don't care about this; it's for DT boots.

> And it seems some of those changes he mentions only landed in kernel.git in
> August, and are queued for 4.3. For example:
>
> https://github.com/torvalds/linux/commit/aa4a5c0d2d7e3c30f9df033ea0367f148bb369f6
>
> That patch isn't in rhelsa kernel, or fedora. So not sure if libvirt +
> aarch64 + pci ever worked for any RH distro. Though drjones I see your demo
> qemu command line at bug 1231719#c2 doesn't use an explicit PCI bridge on
> the command line, so maybe it's using some different code path, and that's
> why it works for the install case.

Yes, the ACPI code path. It works for the install and the normal boot/use case.

> Also FWIW, virtio_pci is in the initrd of Fedora 23 install media, and in
> the post-install initrd, even if using virtio-mmio. But it isn't enough to
> make libvirt + aarch64 + pci work.

Fedora requires acpi=force added to the guest command line; RHELSA guests use ACPI by default. I'm pretty sure I've pointed this out in different places/BZs before; yes, see bug 1231727, for example. I didn't point it out here because this bug is specific to RHELSA dracut not putting the virtio-pci module into the guest's initrd, which doesn't require booting with virtio-pci to be verified (lsinitrd is enough).
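Since lsinitrd is enough to verify this bug, a one-liner check (path per the usual RHELSA layout; a fixed dracut should list virtio_pci.ko in the output):

```shell
# List any virtio modules packed into the current initramfs.
lsinitrd "/boot/initramfs-$(uname -r).img" | grep -i virtio
```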
Please retry with dracut-033-358.el7
(In reply to Harald Hoyer from comment #12)
> Please retry with dracut-033-358.el7

Hmm... has it been changed so that only the modules loaded during install end up in the resulting initramfs? As I stated in the description, we should unconditionally add virtio-pci, allowing us to install with virtio-mmio and then convert guests to virtio-pci later.

Here are my test results:

# rpm -qa | grep dracut
dracut-network-033-358.el7.aarch64
dracut-config-rescue-033-358.el7.aarch64
dracut-033-358.el7.aarch64
# lsinitrd /boot/initramfs-4.2.0-0.21.el7.aarch64.img | grep virtio
-rw-r--r--   1 root root 62720 Oct  6 22:57 usr/lib/modules/4.2.0-0.21.el7.aarch64/kernel/drivers/net/virtio_net.ko
-rw-r--r--   1 root root 32840 Oct  6 22:56 usr/lib/modules/4.2.0-0.21.el7.aarch64/kernel/drivers/scsi/virtio_scsi.ko
drwxr-xr-x   2 root root     0 Oct 16 16:22 usr/lib/modules/4.2.0-0.21.el7.aarch64/kernel/drivers/virtio
-rw-r--r--   1 root root 21608 Oct  6 22:56 usr/lib/modules/4.2.0-0.21.el7.aarch64/kernel/drivers/virtio/virtio.ko
-rw-r--r--   1 root root 21104 Oct  6 22:56 usr/lib/modules/4.2.0-0.21.el7.aarch64/kernel/drivers/virtio/virtio_mmio.ko
-rw-r--r--   1 root root 28368 Oct  6 22:56 usr/lib/modules/4.2.0-0.21.el7.aarch64/kernel/drivers/virtio/virtio_ring.ko

Also, we should add [back] virtio-blk (I think that was there before). And, while we're at it, it'd be nice to add all the other virtio drivers that may be needed early in boot: virtio-gpu, virtio-input, virtio-console, virtio-rng.

Here's a list of all the virtio modules we build:

# grep VIRTIO /boot/config-4.2.0-0.21.el7.aarch64
CONFIG_VIRTIO_BLK=m
CONFIG_SCSI_VIRTIO=m
CONFIG_VIRTIO_NET=m
CONFIG_VIRTIO_CONSOLE=m
CONFIG_HW_RANDOM_VIRTIO=m
CONFIG_DRM_VIRTIO_GPU=m
CONFIG_VIRTIO=m
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_PCI_LEGACY=y
CONFIG_VIRTIO_BALLOON=m
CONFIG_VIRTIO_INPUT=m
CONFIG_VIRTIO_MMIO=m
# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set

Thanks,
drew
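A minimal sketch of what the unconditional fix could look like from the admin side, as a dracut drop-in config (the filename is hypothetical; dracut treats dashes and underscores in module names interchangeably, so I've used the .ko file names):

```shell
# /etc/dracut.conf.d/virtio.conf  (hypothetical drop-in name)
# Force the virtio drivers into every generated initramfs, whether or
# not the corresponding devices are present at build time.
add_drivers+=" virtio_pci virtio_blk virtio_scsi virtio_net virtio_console virtio-rng virtio-gpu virtio_input "
```

The proper fix would of course be for dracut itself to list these drivers in its kernel-modules module rather than relying on a local config.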
Looks like a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1278165
I've tested the systemd test package and it works. Once that package gets into a compose we can then try a guest install to make sure all the virtio drivers get added to the initrd.
Seems to be working with the latest nightly compose:

# systemd-detect-virt
kvm
# uname -a
Linux localhost.localdomain 4.5.0-0.35.el7.aarch64 #1 SMP Fri May 6 08:25:11 EDT 2016 aarch64 aarch64 aarch64 GNU/Linux
# lsinitrd | grep virtio_pci
-rw-r--r--   1 root root 41350 May  6 15:18 usr/lib/modules/4.5.0-0.35.el7.aarch64/kernel/drivers/virtio/virtio_pci.ko
This should be a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1278165, but I would like someone from QE to look at it.
Just trying to clean up old AArch64 bzs and found this. Since comment #26 suggests this is a duplicate, can we close it, given that it has been waiting patiently for an answer for over a year? Thanks.
(In reply to John Feeney from comment #27)
> Just trying to clean up old AArch64 bzs and found this. Since comment #26
> suggests this a dupliate, can we close it since it has been waiting
> patiently for an answer for over a year? Thanks.

Should be close-able. We switched to virtio-pci long ago and can do installations just fine, so it must be resolved already.
Thanks much.